

      Jakub Steiner: 12 months instead of 12 minutes

      news.movim.eu / PlanetGnome • 21 November • 2 minutes

    Hey Kids! Other than raving about GNOME.org being a static HTML site, there’s one more aspect I’d like to get back to in this writing exercise called a blog post.

    Share card gets updated every release too

    I’ve recently come across an appalling genAI website for a project I hold dearly, so I thought I’d give a glimpse of how we used to do things in the olden days. It is probably not going to be done this way anymore in the enshittified timeline we ended up in. The two options available these days are — a quickly generated slop website or no website at all, because privately owned social media is where it’s at.

    The wanna-be-catchy title of this post comes from the fact that the website underwent numerous iterations (iteration is the core principle of good design) spanning over a year before we introduced the redesign.

    So how did we end up with a 3D model of a laptop for the hero image on the GNOME website, rather than something generated in a couple of seconds (plus a small town’s worth of drinking water), or a simple SVG illustration?

    The hero image is static now, but it used to be a scroll-based animation in the early days. It could have become a simple vector-style illustration, but I really enjoy the light interaction between the screen and the laptop, especially between the light and dark variants. Toggling dark mode has been my favorite fidget spinner.

    Creating light/dark variants is a bit tedious to do manually every release, but automating it is still a bit too hard to pull off (the part about taking screenshots of a nightly OS). There’s also the fun of picking a theme for the screenshot rather than doing the same thing over and over. Doing the screenshotting manually meant automating the rest, as a 6 month cycle is enough time to forget how things are done. The process is held together with duct tape, I mean a Python script, that renders the website image assets from the few screenshots captured using GNOME OS running inside Boxes. Two great invisible things made by amazing individuals that could go away in an instant; that thought gives me a dose of anxiety.


    This does take a minute to render on a laptop (CPU-only Cycles), but it is a matter of a single invocation and a git commit. So far it has survived a couple of Blender releases, so fingers crossed for the future.
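
    For flavor, a duct-tape step like that can look roughly like the sketch below: render both variants headless, then commit the refreshed assets. This is a hypothetical illustration, not the actual script; the .blend file, per-variant scene names, and asset names are all invented.

        #!/usr/bin/env python3
        """Hypothetical render-and-commit pipeline (invented names)."""
        import subprocess

        BLEND_FILE = "laptop-hero.blend"  # assumed scene with the screenshot mapped

        for variant in ("light", "dark"):
            # Render headless with Cycles; one scene per variant is assumed.
            subprocess.run(
                ["blender", "--background", BLEND_FILE,
                 "--scene", variant,                  # assumed per-variant scenes
                 "--render-output", f"//{variant}",   # '//' = path relative to .blend
                 "--render-frame", "1"],
                check=True,
            )

        # A single git commit refreshes the website assets.
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", "Refresh hero renders"], check=True)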

    Sophie has recently been looking into translations, so we might reconsider the 3D approach if translated screenshots become viable (and have them contained in an SVG, similar to how os.gnome.org is done). So far the 3D hero has always been in sync with the release, unlike in our WordPress days. Fingers crossed.


      Philip Withnall: Parental controls screen time limits backend

      news.movim.eu / PlanetGnome • 19 November • 4 minutes

    Ignacy blogged recently about all the parts of the user interface for screen time limits in parental controls in GNOME. He’s been doing great work pulling that all together, while I have been working on the backend side of things. We’re aiming for this screen time limits feature to appear in GNOME 50.

    High level design

    There’s a design document which is the canonical reference for the design of the backend, but to summarise it at a high level: there’s a stateless daemon, malcontent-timerd, which receives logs of the child user’s time usage of the computer from gnome-shell in the child’s session. For example, when the child stops using the computer, gnome-shell will send the start and end times of the most recent period of usage. The daemon deduplicates/merges and stores them. The parent has set a screen time policy for the child, which says how much time they’re allowed on the computer per day (for example, 4h at most; or only allowed to use the computer between 15:00 and 17:00). The policy is stored against the child user in accounts-service.
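
    As a toy model of that merge step (not malcontent-timerd’s actual code, which is a C system daemon; times here are just seconds since midnight for simplicity):

        def merge_usage(periods):
            """Merge overlapping/adjacent (start, end) usage periods,
            as reported by gnome-shell. Toy model of the deduplication."""
            merged = []
            for start, end in sorted(periods):
                if merged and start <= merged[-1][1]:   # overlaps/touches previous
                    merged[-1] = (merged[-1][0], max(merged[-1][1], end))
                else:
                    merged.append((start, end))
            return merged

        # Two overlapping reports collapse into one period (09:00 to 11:00):
        print(merge_usage([(9 * 3600, 10 * 3600), (9 * 3600 + 1800, 11 * 3600)]))
        # [(32400, 39600)]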

    malcontent-timerd applies this policy to the child’s usage information to calculate an ‘estimated end time’ for the child’s current session, assuming that they continue to use the computer without taking a break. If they stop or take a break, their usage – and hence the estimated end time – is updated.
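
    Continuing the toy model, the estimated end time for a simple daily-limit policy falls straight out of the merged usage:

        def estimated_end_time(merged, now, daily_limit):
            """Assuming continuous use from `now`, when does the limit hit?
            `merged` is today's merged usage; times in seconds since midnight."""
            used = sum(end - start for start, end in merged)
            remaining = max(0, daily_limit - used)
            return now + remaining

        # 2.5 h used so far, 4 h limit, 14:00 now -> session ends at 15:30.
        print(estimated_end_time([(32400, 41400)], 14 * 3600, 4 * 3600))  # 55800

    A curfew-style policy (the 15:00 to 17:00 example above) would instead clamp the result to the end of the allowed window.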

    The child’s gnome-shell is notified of changes to the estimated end time and, once it’s reached, locks the child’s session (with appropriate advance warning).

    Meanwhile, the parent can query the child’s computer usage via a separate API to malcontent-timerd. This returns the child’s total screen time usage per day, which allows the usage chart to be shown to the parent in the parental controls user interface (malcontent-control). The daemon imposes access controls on which users can query for usage information. Because the daemon can be accessed by the child and by the parent, and needs to be write-only for the child and read-only for the parent, it has to be a system daemon.
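
    The read side, in the same toy terms, is a per-day aggregation behind an access check; the uid mapping and the D-Bus plumbing are invented stand-ins here:

        PARENT_OF = {1001: 1000}   # child uid -> parent uid (invented mapping)

        def query_daily_usage(store, caller_uid, child_uid):
            """Only the child's parent may read usage; the child only writes.
            `store` maps uid -> {date: merged periods}; all invented."""
            if PARENT_OF.get(child_uid) != caller_uid:
                raise PermissionError("caller may not read this user's usage")
            return {day: sum(end - start for start, end in periods)
                    for day, periods in store[child_uid].items()}

        store = {1001: {"2025-11-19": [(32400, 39600), (46800, 50400)]}}
        print(query_daily_usage(store, 1000, 1001))   # {'2025-11-19': 10800} = 3 h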

    There’s a third API flow which allows the child to request an extension to their screen time for the day, but that’s perhaps a topic for a separate post.

    IPC diagram of screen time limits support in malcontent. Screen time limit extensions are shown in dashed arrows.

    So, at its core, malcontent-timerd is a time range store with some policy and a couple of D-Bus interfaces built on top.

    Per-app time limits

    Currently it only supports time limits for login sessions, but it is built in such a way that support for time limits on specific apps would be straightforward to add to malcontent-timerd in future. The main work required for that would be in gnome-shell — recording usage on a per-app basis (for apps which have limits applied), and enforcing those limits by freezing or blocking access to apps once the time runs out. There are some interesting user experience questions to think about before anyone can implement it — how do you prevent a user from continuing to use an app without risking data loss (for example, by killing it)? How do you unambiguously remind the user they’re running out of time for a specific app? Can we reliably find all the windows associated with a certain app? Can we reliably instruct apps to save their state when they run out of time, to reduce the risk of data loss? There are a number of bits of architecture we’d need to get in place before per-app limits could happen.

    Wrapping up

    As it stands though, the grant funding for parental controls is coming to an end. Ignacy will be continuing to work on the UI for some more weeks, but my time on it is basically up. With the funding, we’ve managed to implement digital wellbeing (screen time limits and break reminders for adults), including a whole UI for it in gnome-control-center and a fairly complex state machine for tracking your usage in gnome-shell; a refreshed UI for parental controls; parental controls screen time limits as described above; and the backend for web filtering (but more on that in a future post). Everything is structured so that the extra features we want in future should bolt on nicely.

    While the features may be simple to describe, the implementation spans four projects and two buses, and contains three new system daemons, two new system data stores, and three fairly unique new widgets. It’s tackled all sorts of interesting user design questions (and continues to do so). It’s fully documented, has some unit tests (but not as many as I’d like), and can be integration tested using sysexts. The new widgets are localisable, accessible, and work in dark and light mode. There are even man pages. I’m quite pleased with how it’s all come together.

    It’s been a team effort from a lot of people! Code, design, input and review (in no particular order): Ignacy, Allan, Sam, Florian, Sebastian, Matthijs, Felipe, Rob. Thank you Endless for the grant and the original work on parental controls. Administratively, thank you to everyone at the GNOME Foundation for handling the grant and paperwork; and thank you to the freedesktop.org admins for providing project hosting for malcontent!


      Lennart Poettering: Mastodon Stories for systemd v258

      news.movim.eu / PlanetGnome • 17 November • 2 minutes

    Already on Sep 17 we released systemd v258 into the wild.

    In the weeks leading up to that release I posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd258 hash tag. It was my intention to post a link list here on this blog right after completing that series, but I simply forgot! Hence, in case you aren't using Mastodon, but would like to read up, here's a list of all 37 posts:

    I intend to do a similar series of serieses of posts for the next systemd release (v259), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.

    We intend to shorten the release cycle a bit for the future, and in fact managed to tag v259-rc1 already yesterday, just 2 months after v258. Hence, my series for v259 will begin soon, under the #systemd259 hash tag.

    In case you are interested, here is the corresponding blog story for systemd v257, and here for v256.


      Code of Conduct Committee: Transparency report for May 2025 to October 2025

      news.movim.eu / PlanetGnome • 15 November • 2 minutes

    GNOME’s Code of Conduct is our community’s shared standard of behavior for participants in GNOME. This is the Code of Conduct Committee’s periodic summary report of its activities from May 2025 to October 2025.

    The current members of the CoC Committee are:

    • Anisa Kuci
    • Carlos Garnacho
    • Christopher Davis
    • Federico Mena Quintero
    • Michael Downey
    • Rosanna Yuen

    All the members of the CoC Committee have completed Code of Conduct Incident Response training provided by Otter Tech, and are professionally trained to handle incident reports in GNOME community events.

    The committee has an email address that can be used to send reports, conduct@gnome.org, as well as a website for report submission: https://conduct.gnome.org/

    Reports

    Since May 2025, the committee has received reports on a total of 25 possible incidents. Many of these were not actionable; all the incidents listed here were resolved during the reporting period.

    • Report on a conspiracy theory, closed as not actionable.
    • Report that was not actionable.
    • Report about a blog post; not a CoC violation and not actionable.
    • Report about interactions in GitLab; not a CoC violation and not actionable.
    • Report about a blog post; not a CoC violation and not actionable.
    • Question about an Export Control Classification Number (ECCN) for GDM; redirected to discourse.gnome.org.
    • Report about a reply in GitLab; not a CoC violation; pointed out resources about unpaid/volunteer work in open source.
    • Report about a reply in GitLab; not a CoC violation but using language against the community guidelines; sent a reminder to the reported person to use non-violent communication.
    • Two reports about a GNOME Shell extension; recommended actions to take to the extension reviewers.
    • Report about another GNOME Shell extension; recommended actions to take to the extension reviewers.
    • Multiple reports about a post on planet.gnome.org; removed the post from the feed and its site.
    • Report with a fake attribution; closed as not actionable.
    • Report with threats; closed as not actionable.
    • Report with a fake attribution; closed as not actionable.
    • Report that was not actionable.
    • Support request; advised reporter to direct their question to the infrastructure team.
    • Report closed due to not being actionable; gave the reporter advice on how to deal with their issue.
    • Report about a reply in GitLab; reminded both the reporter and reported person how to communicate appropriately.
    • Report during GUADEC about an incident during the conference; in-person reminder to the reported individual to mind their behavior.
    • Report about a long-standing GitLab interaction; sent a request for a behavior change to the reported person.
    • Report on a conspiracy theory, closed as not actionable.
    • Report about a Mastodon post, closed as it is not a CoC violation.
    • Report closed due to not being actionable, and not a CoC violation.
    • Report closed due to not being actionable, and not a CoC violation.
    • Report closed due to not being actionable, and not a CoC violation.

    Meetings of the CoC committee

    The CoC committee meets twice a month for general updates, and holds weekly ad-hoc meetings when it receives reports. There are also in-person meetings during GNOME events.

    Ways to contact the CoC committee

    • https://conduct.gnome.org – contains the GNOME Code of Conduct and a reporting form.
    • conduct@gnome.org – incident reports, questions, etc.

      Gedit Technology blog: Mid-November News

      news.movim.eu / PlanetGnome • 14 November • 2 minutes

    Misc news about the gedit text editor, mid-November edition!

    Website: new design

    Probably the highlight this month is the new design of the gedit website.

    If it looks familiar to some of you, that's because it's an adaptation of the previous GtkSourceView website, which was developed in the old gnomeweb-wml repository. gnomeweb-wml (projects.gnome.org) is what predates all the wiki pages for Apps and Projects. The wiki has been retired, so another solution had to be found.

    As for the timeline: projects.gnome.org was available until 2013/2014, when all the content was migrated to the wiki. The wiki was then retired in 2024.

    Note that there are still rough edges on the gedit website and, more importantly, some effort is still needed to bring the old CSS stylesheet forward to the new(-ish) responsive web design world.

    For the most nostalgic of you:

    And for the least nostalgic of you:

    • gedit website from last month (October 2025)

      Screenshot of gedit website in October 2025

    What we can say is that the gedit project has stood the test of time!

    Enter TeX: improved search and replace

    Some context: I would like some day to unify the search and replace feature between Enter TeX and gedit. It needs to retain the best of each.

    In Enter TeX it's a combined horizontal bar, something that I would like in gedit too to replace the dialog window that occludes part of the text.

    In gedit the strengths include the search-as-you-type possibility, and a history of past searches. Both are missing in Enter TeX. (These are not the only things that need to be retained; the same workflows, keyboard shortcuts, etc. are also an integral part of the functionality.)

    So to work towards that goal, I started in Enter TeX. I've already merged around 50 commits in the git repository for this change, rewriting some parts in C (from Vala) and improving the UI along the way. The code needs to be in C because it'll be moved to libgedit-tepl so that it can be consumed by gedit easily.

    Here is how it looks:

    Screenshot of the search and replace in Enter TeX

    Internal refactoring for GeditWindow and its statusbar

    GeditWindow is what we can call a god class. It is too big, both in the number of lines and the number of instance variables.

    So this month I've continued to refactor it, to extract a GeditWindowStatus class. There was already a GeditStatusbar class, but its features have now been moved to libgedit-tepl as TeplStatusbar.

    GeditWindowStatus takes on the responsibility of creating the TeplStatusbar, filling it with the indicators and other buttons, and making the connection between GeditWindow and the current tab/document.
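
    As a toy sketch of that extraction (Python pseudocode with invented logic; gedit itself is written in C):

        class WindowStatus:
            """Owns the statusbar: creates it, adds the indicators,
            and tracks the active document (invented logic)."""

            def __init__(self):
                self.indicators = {}

            def active_document_changed(self, line, col):
                self.indicators["cursor"] = f"Ln {line}, Col {col}"

        class Window:
            """The former god class now just delegates status handling."""

            def __init__(self):
                self.status = WindowStatus()

            def on_cursor_moved(self, line, col):
                self.status.active_document_changed(line, col)

        w = Window()
        w.on_cursor_moved(10, 4)
        print(w.status.indicators)   # {'cursor': 'Ln 10, Col 4'}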

    So as a result, GeditWindow is a little less omniscient ;-)

    As a conclusion

    gedit does not materialize out of empty space; it takes time to develop and maintain. To demonstrate your appreciation of this piece of software and help its future development, remember that you can fund the project. Your support is critical and much appreciated.


      Martin Pitt: Learning about MCP servers and AI-assisted GitHub issue triage

      news.movim.eu / PlanetGnome • 14 November

    Today is another Red Hat day of learning. I’ve been hearing about MCP (Model Context Protocol) servers for a while now – the idea of giving AI assistants standardized “eyes and arms” to interact with external tools and data sources. I tried it out, starting with a toy example and then moving on to something actually useful for my day job. First steps: Querying photo EXIF data I started with a local MCP server to query my photo library.

      Andy Wingo: the last couple years in v8's garbage collector

      news.movim.eu / PlanetGnome • 13 November • 9 minutes

    Let’s talk about memory management! Following up on my article about 5 years of developments in V8’s garbage collector, today I’d like to bring that up to date with what went down in V8’s GC over the last couple years.

    methodololology

    I selected all of the commits to src/heap since my previous roundup. There were 1600 of them, including reverts and relands. I read all of the commit logs, some of the changes, some of the linked bugs, and any design document I could get my hands on. From what I can tell, there have been about 4 FTE from Google over this period, and the commit rate is fairly constant. There are very occasional patches from Igalia, Cloudflare, Intel, and Red Hat, but it’s mostly a Google affair.

    Then, by the very rigorous process of, um, just writing things down and thinking about it, I see three big stories for V8’s GC over this time, and I’m going to give them to you with some made-up numbers for how much of the effort was spent on them. Firstly, the effort to improve memory safety via the sandbox: this is around 20% of the time. Secondly, the Oilpan odyssey: maybe 40%. Third, preparation for multiple JavaScript and WebAssembly mutator threads: 20%. Then there are a number of lesser side quests: heuristics wrangling (10%!!!!), and a long list of miscellanea. Let’s take a deeper look at each of these in turn.

    the sandbox

    There was a nice blog post in June last year summarizing the sandbox effort: basically, the goal is to prevent user-controlled writes from corrupting memory outside the JavaScript heap. We start from the assumption that the user is somehow able to obtain a write-anywhere primitive, and we work to mitigate the effect of such writes. The most fundamental way is to reduce the range of addressable memory, notably by encoding pointers as 32-bit offsets and then ensuring that no host memory is within the addressable virtual memory that an attacker can write. The sandbox also uses some 40-bit offsets for references to larger objects, with similar guarantees. (Yes, a sandbox really does reserve a terabyte of virtual memory).
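
    A toy model of the offset-encoding idea (all numbers and names here are invented; V8’s real scheme has far more moving parts):

        # On-heap references are stored as offsets into a reserved cage,
        # never as raw addresses, so they cannot name host memory.
        CAGE_BASE = 0x2000_0000_0000     # hypothetical base of the reservation
        HEAP_CAGE_BITS = 32              # ordinary references: 32-bit offsets
        LARGE_CAGE_BITS = 40             # larger objects: 40-bit offsets (~1 TiB)

        def decode(offset, bits=HEAP_CAGE_BITS):
            """Even an attacker-controlled offset stays inside the cage."""
            mask = (1 << bits) - 1
            return CAGE_BASE + (offset & mask)

        print(hex(decode(0xdeadbeef)))                        # inside the 4 GiB cage
        print(hex(decode(0xff_ffff_ffff, LARGE_CAGE_BITS)))   # inside the 1 TiB cage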

    But there are many, many details. Access to external objects is intermediated via type-checked external pointer tables. Some objects that should never be directly referenced by user code go in a separate “trusted space”, which is outside the sandbox. Then you have read-only spaces, used to allocate data that might be shared between different isolates; you might want multiple cages; there are “shared” variants of the other spaces, for use in shared-memory multi-threading; executable code spaces with embedded object references; and so on and so on. Tweaking, elaborating, and maintaining all of these details has taken a lot of V8 GC developer time.

    I think it has paid off, though, because the new development is that V8 has managed to turn on hardware memory protection for the sandbox : sandboxed code is prevented by the hardware from writing memory outside the sandbox.

    Leaning into the “attacker can write anything in their address space” threat model has led to some funny patches. For example, sometimes code needs to check flags about the page that an object is on, as part of a write barrier. So some GC-managed metadata needs to be in the sandbox. However, the garbage collector itself, which is outside the sandbox, can’t trust that the metadata is valid. We end up having two copies of state in some cases: in the sandbox, for use by sandboxed code, and outside, for use by the collector.

    The best and most amusing instance of this phenomenon is related to integers. Google’s style guide recommends signed integers by default, so you end up with on-heap data structures with int32_t len and such. But if an attacker overwrites a length with a negative number, there are a couple funny things that can happen. The first is a sign-extending conversion to size_t by run-time code, which can lead to sandbox escapes. The other is mistakenly concluding that an object is small, because its length is less than a limit, because it is unexpectedly negative. Good times!
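
    Both hazards are easy to render in toy form (the “small object” limit here is invented):

        attacker_len = -8   # an int32_t length overwritten to a negative value

        # Hazard 1: sign-extending conversion to 64-bit size_t yields a huge size.
        as_size_t = attacker_len & (2**64 - 1)
        print(hex(as_size_t))                      # 0xfffffffffffffff8

        # Hazard 2: any negative length passes a "small object" bounds check,
        # because -8 < limit is trivially true.
        K_MAX_SMALL_LENGTH = 64                    # invented limit
        print(attacker_len < K_MAX_SMALL_LENGTH)   # True: object wrongly "small"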

    oilpan

    It took 10 years for Odysseus to get back from Troy, which is about as long as it has taken for conservative stack scanning to make it from Oilpan into V8 proper. Basically, Oilpan is garbage collection for C++ as used in Blink and Chromium. Sometimes it runs when the stack is empty; then it can be precise. But sometimes it runs when there might be references to GC-managed objects on the stack; in that case it runs conservatively.

    Last time I described how V8 would like to add support for generational garbage collection to Oilpan, but for that, you’d need a way to promote objects to the old generation that is compatible with the ambiguous references visited by conservative stack scanning. I thought V8 had a chance at success with their new mark-sweep nursery, but that seems to have turned out to be a lose relative to the copying nursery. They even tried sticky mark-bit generational collection, but it didn’t work out. Oh well; one good thing about Google is that they seem willing to try projects that have uncertain payoff, though I hope that the hackers involved came through their OKR reviews with their mental health intact.

    Instead, V8 added support for pinning to the Scavenger copying nursery implementation. If a page has incoming ambiguous edges, it will be placed in a kind of quarantine area for a while. I am not sure what the difference is between a quarantined page, which logically belongs to the nursery, and a pinned page from the mark-compact old-space; they seem to require similar treatment. In any case, we seem to have settled into a design that is mostly the same as before, but in which any given page can opt out of evacuation-based collection.
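
    Here is the gist of the pinning idea in sketch form (addresses and page size invented): a stack word that merely looks like a nursery address is treated as a real reference, and the page it points into then opts out of evacuation.

        PAGE_SIZE = 0x1000
        HEAP_START, HEAP_END = 0x10_0000, 0x20_0000   # invented nursery range

        def pinned_pages(stack_words):
            """Conservative scan: any stack word that falls in the nursery
            range pins the page containing it (no evacuation allowed)."""
            pinned = set()
            for word in stack_words:
                if HEAP_START <= word < HEAP_END:
                    pinned.add(word & ~(PAGE_SIZE - 1))   # page of that word
            return pinned

        # One genuine reference, one integer that merely resembles a pointer:
        print({hex(p) for p in pinned_pages([0x10_1234, 0x12_3456, 0xdead])})
        # {'0x101000', '0x123000'}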

    What do we get out of all of this? Well, not only can we get generational collection for Oilpan, but also we unlock cheaper, less bug-prone “direct handles” in V8 itself.

    The funny thing is that I don’t think any of this is shipping yet; or, if it is, it’s only in a Finch trial to a minority of users or something. I am looking forward with interest to seeing a post from upstream V8 folks; whole doctoral theses have been written on this topic, and it would be a delight to see some actual numbers.

    shared-memory multi-threading

    JavaScript implementations have had the luxury of single-threadedness: with just one mutator, garbage collection is a lot simpler. But this is ending. I don’t know what the state of shared-memory multi-threading is in JS, but in WebAssembly it seems to be moving apace, and Wasm uses the JS GC. Maybe I am overstating the effort here—probably it doesn’t come to 20%—but wiring this up has been a whole thing.

    I will mention just one patch here that I found to be funny. So with pointer compression, an object’s fields are mostly 32-bit words, with the exception of 64-bit doubles, so we can reduce the alignment on most objects to 4 bytes. V8 has had a bug open forever about alignment of double-holding objects that it mostly ignores via unaligned loads.

    Thing is, if you have an object visible to multiple threads, and that object might have a 64-bit field, then the field should be 64-bit aligned to prevent tearing during atomic access, which usually means the object should be 64-bit aligned. That is now the case for Wasm structs and arrays in the shared space.

    side quests

    Right, we’ve covered what to me are the main stories of V8’s GC over the past couple years. But let me mention a few funny side quests that I saw.

    the heuristics two-step

    This one I find to be hilarious and sad. Tragicomical. Anyway, I am amused. So any real GC has a bunch of heuristics: when to promote an object or a page, when to kick off incremental marking, how to use background threads, when to grow the heap, how to choose whether to make a minor or major collection, when to aggressively reduce memory, how much virtual address space you can reasonably reserve, what to do in hard out-of-memory situations, how to account for off-heap mallocated memory, how to compute whether concurrent marking is going to finish in time or if you need to pause... and V8 needs to do all this in all its many configurations, with pointer compression off or on, on desktop, high-end Android, low-end Android, iOS where everything is weird, something called Starboard which is apparently part of Cobalt which is apparently a whole new platform that YouTube uses to show videos on set-top boxes, on machines with different memory models and operating systems with different interfaces, and on and on and on. Simply tuning the system appears to involve a dose of science, a dose of flailing around and trying things, and a whole cauldron of witchcraft. There appears to be one person whose full-time job it is to implement and monitor metrics on V8 memory performance and implement appropriate tweaks. Good grief!

    mutex mayhem

    Toon Verwaest noticed that V8 was exhibiting many more context switches on macOS than Safari, and identified V8’s use of platform mutexes as the problem. So he rewrote them to use os_unfair_lock on macOS. Then he implemented adaptive locking on all platforms. Then... he removed it all and switched to abseil.

    Personally, I am delighted to see this patch series; I wouldn’t have thought that there was juice to squeeze in V8’s use of locking. It gives me hope that I will find a place to do the same in one of my projects :)

    ta-ta, third-party heap

    It used to be that MMTk was trying to get a number of production language virtual machines to support abstract APIs so that MMTk could slot in a garbage collector implementation. Though this seems to work with OpenJDK, with V8 I think the churn rate and laser-like focus on the browser use-case makes an interstitial API abstraction a lose. V8 removed it a little more than a year ago.

    fin

    So what’s next? I don’t know; it’s been a while since I have been to Munich to drink from the source. That said, shared-memory multithreading and wasm effect handlers will extend the memory management hacker’s full employment act indefinitely, not to mention actually landing and shipping conservative stack scanning. There is a lot to be done in non-browser V8 environments, whether in Node or on the edge, but it is admittedly harder to read the future than the past.

    In any case, it was fun taking this look back, and perhaps I will have the opportunity to do this again in a few years. Until then, happy hacking!


      Jiri Eischmann: How We Streamed OpenAlt on Vhsky.cz

      news.movim.eu / PlanetGnome • 13 November • 8 minutes

    This blog post was originally published on my Czech blog.

    When we launched Vhsky.cz a year ago, we did it to provide an alternative to the near-monopoly of YouTube. I believe video distribution is so important today that it’s a skill we should maintain ourselves.

    To be honest, it’s bothered me for the past few years that even open-source conferences simply rely on YouTube for streaming talks, without attempting to secure a more open path. We are a community of tech enthusiasts who tinker with everything and take pride in managing things ourselves, yet we just dump our videos onto YouTube, even when we have the tools to handle it internally. Meanwhile, it’s common for conferences abroad to manage this themselves. Just look at FOSDEM or Chaos Communication Congress.

    This is why, from the moment Vhsky.cz launched, my ambition was to broadcast talks from OpenAlt — a conference I care about and help organize. The first small step was uploading videos from previous years. Throughout the year, we experimented with streaming from OpenAlt meetups. We found that it worked, but a single stream isn’t quite the stress test needed to prove we could handle broadcasting an entire conference.

    For several years, Michal Vašíček has been in charge of recording at OpenAlt, and he has managed to create a system where he handles recording from all rooms almost single-handedly (with assistance from session chairs in each room). All credit to him, because other conferences with a similar scope of recordings have entire teams for this. However, I don’t have insight into this part of the process, so I won’t focus on it. Michal’s job was to get the streams to our server; our job was to get them to the viewers.

    OpenAlt’s AV background with running streams. Author: Michal Stanke.

    Stress Test

    We only got to a real stress test the weekend before the conference, when Bashy prepared a setup with seven streams at 1440p resolution. This was exactly what awaited us at OpenAlt. Vhsky.cz runs on a fairly powerful server with a 32-core i9-13900 processor and 96 GB of RAM. However, it’s not entirely dedicated to PeerTube. It has to share the server with other OSCloud services (OSCloud is community hosting for open source web services).

    We hadn’t been limited by performance until then, but seven 1440p streams were truly at the edge of the server’s capabilities, and streams occasionally dropped. In reality, this meant 14 continuous transcoding processes, as we were streaming in both 1440p and 480p. Even if you don’t change the resolution, you still need to transcode the video to leverage useful distribution features, which I’ll cover later. The 480p resolution was intended for mobile devices and slow connections.

    Remote Runner

    We knew the Vhsky.cz server alone couldn’t handle it. Fortunately, PeerTube allows for the use of “remote runners”. The PeerTube instance sends video to these runners for transcoding, while the main instance focuses only on distributing tasks, storage, and video distribution to users. However, it’s not possible to do some tasks locally and offload others: if you switch transcoding to remote runners, they must handle all the transcoding. Therefore, we had to find enough performance somewhere to cover everything.

    I reached out to several hosting providers known to be friendly to open-source activities. Adam Štrauch from Roští.cz replied almost immediately, saying they had a backup machine that they had filed a warranty claim for over the summer and hadn’t tested under load yet. I wrote back that if they wanted to see how it behaved under load, now was a great opportunity. And so we made a deal.

    It was a truly powerful machine: a 48-core Ryzen with 1 TB of RAM. Nothing else was running on it, so we could use all its performance for video transcoding. After installing the runner on it, we passed the stress test. As it turned out, the server with the runner still had a large reserve. For a moment, I toyed with the idea of adding another resolution to transcode the videos into, but then I decided we’d better not tempt fate. The stress test showed us we could keep up with transcoding, but not how it would behave with all the viewers. The performance reserve could come in handy.

    Load on the runner server during the stress test. Author: Adam Štrauch.

    Smart Video Distribution

    Once we solved the transcoding performance, it was time to look at how PeerTube would handle video distribution. Vhsky.cz has a bandwidth of 1 Gbps, which isn’t much for such a service. If we served everyone the 1440p stream, we could serve a maximum of 100 viewers. Fortunately, another excellent PeerTube feature helps with this: support for P2P sharing using HLS and WebRTC.

    Thanks to this, every viewer (unless they are on a mobile device using mobile data) also becomes a peer and shares the stream with others. The more viewers watch the stream, the more they share the video among themselves, and the server load doesn’t grow at the same rate.

    A two-year-old stress test conducted by the PeerTube developers themselves gave us some idea of what Vhsky could handle. They created a farm of 1,000 browsers, simulating 1,000 viewers watching the same stream or VOD. Even though they used a relatively low-performance server (quad-core i7-8700 CPU @ 3.20GHz, slow hard drive, 4 GB RAM, 1 Gbps connection), they managed to serve 1,000 viewers, primarily thanks to data sharing between them. For VOD, this saved up to 98% of the server’s bandwidth; for a live stream, it was 75%:

    If we achieved a similar ratio, then even after subtracting 200 Mbps for overhead (running other services, receiving streams, data exchange with the runner), we could serve over 300 viewers at 1440p and multiples of that at 480p. Considering that OpenAlt had about 160 online viewers in total last year, this was a more than sufficient reserve.
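
    Making the arithmetic explicit (the per-viewer bitrate follows from the post’s own figure of 100 viewers saturating 1 Gbps; the rest are the numbers quoted above):

        link_mbps     = 1000   # server uplink
        overhead_mbps = 200    # other services, stream ingest, runner traffic
        stream_mbps   = 10     # 1440p rendition: 1 Gbps / 100 viewers
        p2p_savings   = 0.75   # share of live traffic served peer-to-peer

        server_mbps_per_viewer = (1 - p2p_savings) * stream_mbps
        viewers = (link_mbps - overhead_mbps) / server_mbps_per_viewer
        print(round(viewers))  # 320 -> "over 300 viewers" at 1440p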

    Live Operation

    On Saturday, Michal fired up the streams and started sending video to Vhsky.cz via RTMP. And it worked. The streams ran smoothly and without stuttering. In the end, we had a maximum of tens of online viewers at any one time this year, which posed no problem from a distribution perspective.

    In practice, the savings in data served by the server were large even with just 5 peers on a single stream and resolution.

    Our solution, which PeerTube allowed us to flexibly assemble from servers in different data centers, has one disadvantage: it creates some latency. In our case, however, this meant the stream on Vhsky.cz was about 5-10 seconds behind the stream on YouTube, which I don’t think is a problem. After all, we’re not broadcasting a sports event.

    Diagram of the streaming solution for OpenAlt. Labels in Czech, but quite self-explanatory.

    Minor Problems

    We did, however, run into minor problems and gained experience that one can only get through practice. During Saturday, for example, we found that the stream would occasionally drop from 1440p to 480p, even though the throughput should have been sufficient. This was because the player felt that the delivery of stream chunks was delayed and preemptively switched to a lower resolution. Setting a higher cache increased the stream delay slightly, but it significantly reduced the switching to the lower resolution.

    Subjectively, even 480p wasn’t a problem. Most of the screen was taken up by the red frame with the OpenAlt logo and the slides. The speaker was only in a small window. The reduced resolution only caused slight blurring of the text on the slides, which I wouldn’t even have noticed as a problem if I wasn’t focusing on it. I could imagine streaming only in 480p if necessary. But it’s clear that expectations regarding resolution are different today, so we stream in 1440p when we can.

    Over the whole weekend, the stream from one room dropped for about two talks. For some rooms, viewers complained that the stream was too quiet, but that was an input problem. This issue was later fixed in the recordings.

    When uploading the talks as VOD (Video on Demand), we ran into the fact that PeerTube itself doesn’t support bulk uploads. However, tools exist for this, and we’d like to use them next time to make uploading faster and more convenient. Some videos also uploaded with the wrong orientation, which was likely a problem in their metadata, as PeerTube wasn’t the only player that displayed them that way. YouTube, however, managed to handle it. Re-encoding them solved the problem.

    On Saturday, to save performance, we also tried transcoding the first finished talk videos on the external runner. For these, a bar was displayed with a message that the video had failed to save to external storage, even though it was clearly stored in object storage. In the end we had to re-upload them, because they were available to watch but not indexed.

    A small interlude – my talk about PeerTube at this year’s OpenAlt. Streamed, of course, via PeerTube:

    Thanks and Support

    I think that for our very first time doing this, it turned out very well, and I’m glad we showed that the community can stream such a conference using its own resources. I would like to thank everyone who participated. From Michal, who managed to capture footage in seven lecture rooms at once, to Bashy, who helped us with the stress test, to Archos and Schmaker, who did the work on the Vhsky side, and Adam Štrauch, who lent us the machine for the external runner.

    If you like what we do and appreciate that someone is making OpenAlt streams and talks available on an open platform without ads and tracking, we would be grateful if you supported us with a contribution to one of OSCloud’s accounts, under which Vhsky.cz runs. PeerTube is a great tool that allows us to operate such a service without having Google’s infrastructure, but it doesn’t run for free either.


      Ignacy Kuchciński: Digital Wellbeing Contract: Screen Time Limits

      news.movim.eu / PlanetGnome • 10 November • 4 minutes

    It’s been four months since my last Digital Wellbeing update. In that previous post I talked about the goals of the Digital Wellbeing project. I also described our progress improving and extending the functionality of the GNOME Parental Controls application, as well as redesigning the application to meet the current design guidelines.

    Introducing Screen Time Limits

    Following our work on the Parental Controls app, the next major work item was to implement screen time limits functionality, offering parents the ability to check their child’s screen time usage, set time limits, and lock the child’s account outside of a specified curfew. This feature actually spans *three* different GNOME projects:

    • Parental Controls: a Screen Time page was added to the Parental Controls app, so that parents can view the child’s screen usage and set time limits; it also includes a detailed bar chart
    • Settings: the Wellbeing panel needed to make its own time limit settings impossible to change when the child has parental controls session limits enabled, since they are not in effect in that situation. There’s now also a banner with an explanation that points to the Parental Controls app
    • Shell: the child’s session needed to actually lock when the limit is reached

    Of the three above, the Parental Controls and Shell changes have already been merged, while the Settings integration has been through verbal review during the bi-weekly Settings meeting and adjusted to the feedback, so it’s only a matter of time before it reaches the main branch as well. You can find screenshots of the added functionality below, and the reference designs can be found in the app-mockups and os-mockups tickets.

    Child screen usage

    When viewing a managed account, a summary of screen time is shown, along with actions to access additional settings for restrictions and filtering.

    Child account view with added screen time overview and action for more options

    The Screen Time view shows an overview of the child account’s screen time, as well as controls mirroring those of the Settings Wellbeing panel for managing screen limits and downtime for the child.

    Screen Time page with detailed screen time records and time limit controls

    Settings integration

    On the Settings side, a child account will see a banner in the Wellbeing panel that lets them know some settings cannot be changed, with a link to the Parental Controls app.

    Wellbeing panel with a banner informing that limits can only be changed in Parental Controls

    Screen limits in GNOME Shell

    We have implemented the locking mechanism in GNOME Shell. When a Screen Time limit is reached, the session locks, so that the child can’t use the computer for the rest of the day.

    The following is a screencast of the Shell functionality:

    Preventing children from unlocking the session has not been implemented yet. Fortunately, the hardest part was implementing the framework for the rest of the code, so hopefully the easier graphical change will take less time to implement and the next update will come much sooner than this one.

    GNOME OS images

    You don’t have to take my word for it, especially since one can notice I had to cut the recording at one point (I forgot that one can’t switch users in the lock screen :P) – you can check out all of these features in the very same GNOME OS live image I used in the recording, which you can either run in GNOME Boxes or try on your hardware if you know what you’re doing 🙂

    Malcontent changes

    While all of these user-facing changes look cool, none of them would actually be possible without the malcontent backend, which Philip Withnall has been working on. While the daily schedule had already been implemented, the daily-limit session limit had to be added, as well as the malcontent timer daemon API for Shell to use. There have been many other improvements; a web filtering daemon has been added, which I’ll use in the future for implementing the Web Filtering page in the Parental Controls app.

    Conclusion

    Our work for the GNOME Foundation is funded by Endless and Dalio Philanthropies, so kudos to them! I want to thank Florian Müllner for his patience during the merge request review, which was very educational for me, and for answering all of my Shell development wonderings. I also want to thank Matthijs Velsink and Felipe Borges for finding time to review the Settings integration.

    Now that this foundation has been laid, we’ll be focusing on finishing the last remaining bits of the session limits support in Shell: tweaking the appearance of the lock screen when the limit is reached, implementing the ignore button for extending screen time, and adding notifications. That will be followed by Web Filtering support in Parental Controls. Until the next update!