

      Andy Wingo: the last couple years in v8's garbage collector

      news.movim.eu / PlanetGnome • 13 November 2025 • 9 minutes

    Let’s talk about memory management! Following up on my article about 5 years of developments in V8’s garbage collector, today I’d like to bring that up to date with what went down in V8’s GC over the last couple years.

    methodololology

    I selected all of the commits to src/heap since my previous roundup. There were 1600 of them, including reverts and relands. I read all of the commit logs, some of the changes, some of the linked bugs, and any design document I could get my hands on. From what I can tell, there have been about 4 FTE from Google over this period, and the commit rate is fairly constant. There are very occasional patches from Igalia, Cloudflare, Intel, and Red Hat, but it’s mostly a Google affair.

    Then, by the very rigorous process of, um, just writing things down and thinking about it, I see three big stories for V8’s GC over this time, and I’m going to give them to you with some made-up numbers for how much of the effort was spent on them. Firstly, the effort to improve memory safety via the sandbox: this is around 20% of the time. Secondly, the Oilpan odyssey: maybe 40%. Third, preparation for multiple JavaScript and WebAssembly mutator threads: 20%. Then there are a number of lesser side quests: heuristics wrangling (10%!!!!), and a long list of miscellanea. Let’s take a deeper look at each of these in turn.

    the sandbox

    There was a nice blog post in June last year summarizing the sandbox effort: basically, the goal is to prevent user-controlled writes from corrupting memory outside the JavaScript heap. We start from the assumption that the user is somehow able to obtain a write-anywhere primitive, and we work to mitigate the effect of such writes. The most fundamental way is to reduce the range of addressable memory, notably by encoding pointers as 32-bit offsets and then ensuring that no host memory is within the addressable virtual memory that an attacker can write. The sandbox also uses some 40-bit offsets for references to larger objects, with similar guarantees. (Yes, a sandbox really does reserve a terabyte of virtual memory.)
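
    To make the offset scheme concrete, here is a hypothetical sketch (not V8’s actual code, which is C++) of how storing pointers as 32-bit cage offsets bounds what a corrupted value can address:

```python
# Hypothetical sketch of pointer "compression" into a cage; the base
# address and sizes are made up, and V8's real implementation is C++.

CAGE_BASE = 0x2000_0000_0000   # assumed reservation address
CAGE_SIZE = 1 << 32            # 4 GiB addressable via a 32-bit offset

def compress(addr: int) -> int:
    """Store a full pointer as a 32-bit offset into the cage."""
    offset = addr - CAGE_BASE
    assert 0 <= offset < CAGE_SIZE, "pointer outside the cage"
    return offset

def decompress(offset: int) -> int:
    """Rebuild a full pointer; masking confines any attacker-written
    value to the cage, so host memory stays out of reach."""
    return CAGE_BASE + (offset & (CAGE_SIZE - 1))
```

    However the stored 32 bits are corrupted, decompress can only ever produce an address inside the reserved region.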

    But there are many, many details. Access to external objects is intermediated via type-checked external pointer tables. Some objects that should never be directly referenced by user code go in a separate “trusted space”, which is outside the sandbox. Then you have read-only spaces, used to allocate data that might be shared between different isolates; you might want multiple cages; there are “shared” variants of the other spaces, for use in shared-memory multi-threading; executable code spaces with embedded object references; and so on and so on. Tweaking, elaborating, and maintaining all of these details has taken a lot of V8 GC developer time.

    I think it has paid off, though, because the new development is that V8 has managed to turn on hardware memory protection for the sandbox: sandboxed code is prevented by the hardware from writing memory outside the sandbox.

    Leaning into the “attacker can write anything in their address space” threat model has led to some funny patches. For example, sometimes code needs to check flags about the page that an object is on, as part of a write barrier. So some GC-managed metadata needs to be in the sandbox. However, the garbage collector itself, which is outside the sandbox, can’t trust that the metadata is valid. We end up having two copies of state in some cases: in the sandbox, for use by sandboxed code, and outside, for use by the collector.

    The best and most amusing instance of this phenomenon is related to integers. Google’s style guide recommends signed integers by default, so you end up with on-heap data structures with int32_t len and such. But if an attacker overwrites a length with a negative number, there are a couple of funny things that can happen. The first is a sign-extending conversion to size_t by run-time code, which can lead to sandbox escapes. The other is mistakenly concluding that an object is small, because its length is less than a limit, because it is unexpectedly negative. Good times!
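
    A sketch of those two failure modes, with invented names and limits (the real code is C++; Python integers are used here to mimic the conversions):

```python
# Hypothetical illustration of the signed-length hazards; names and
# the smallness limit are invented for this sketch.

SMALL_OBJECT_LIMIT = 1024

def sign_extend_to_size_t(len32: int) -> int:
    """Mimic C++'s sign-extending int32_t -> size_t conversion."""
    assert -2**31 <= len32 < 2**31
    return len32 & 0xFFFF_FFFF_FFFF_FFFF

def looks_small(len32: int) -> bool:
    """A signed comparison happily calls a negative length 'small'."""
    return len32 < SMALL_OBJECT_LIMIT

# An attacker-written length of -1 becomes an enormous size_t for
# run-time code, while simultaneously passing the "is it small?" check.
huge = sign_extend_to_size_t(-1)
small = looks_small(-1)
```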

    oilpan

    It took 10 years for Odysseus to get back from Troy, which is about as long as it has taken for conservative stack scanning to make it from Oilpan into V8 proper. Basically, Oilpan is garbage collection for C++ as used in Blink and Chromium. Sometimes it runs when the stack is empty; then it can be precise. But sometimes it runs when there might be references to GC-managed objects on the stack; in that case it runs conservatively.

    Last time I described how V8 would like to add support for generational garbage collection to Oilpan, but that for that, you’d need a way to promote objects to the old generation that is compatible with the ambiguous references visited by conservative stack scanning. I thought V8 had a chance at success with their new mark-sweep nursery, but that seems to have turned out to be a lose relative to the copying nursery. They even tried sticky mark-bit generational collection, but it didn’t work out. Oh well; one good thing about Google is that they seem willing to try projects that have uncertain payoff, though I hope that the hackers involved came through their OKR reviews with their mental health intact.

    Instead, V8 added support for pinning to the Scavenger copying nursery implementation. If a page has incoming ambiguous edges, it will be placed in a kind of quarantine area for a while. I am not sure what the difference is between a quarantined page, which logically belongs to the nursery, and a pinned page from the mark-compact old-space; they seem to require similar treatment. In any case, we seem to have settled into a design that is mostly the same as before, but in which any given page can opt out of evacuation-based collection.

    What do we get out of all of this? Well, not only can we get generational collection for Oilpan, but also we unlock cheaper, less bug-prone “direct handles” in V8 itself.

    The funny thing is that I don’t think any of this is shipping yet; or, if it is, it’s only in a Finch trial to a minority of users or something. I am looking forward with interest to a post from upstream V8 folks; whole doctoral theses have been written on this topic, and it would be a delight to see some actual numbers.

    shared-memory multi-threading

    JavaScript implementations have had the luxury of single-threadedness: with just one mutator, garbage collection is a lot simpler. But this is ending. I don’t know what the state of shared-memory multi-threading is in JS, but in WebAssembly it seems to be moving apace, and Wasm uses the JS GC. Maybe I am overstating the effort here—probably it doesn’t come to 20%—but wiring this up has been a whole thing.

    I will mention just one patch here that I found to be funny. With pointer compression, an object’s fields are mostly 32-bit words, with the exception of 64-bit doubles, so we can reduce the alignment on most objects to 4 bytes. V8 has had a bug open forever about the alignment of double-holding objects, which it mostly ignores via unaligned loads.

    Thing is, if you have an object visible to multiple threads, and that object might have a 64-bit field, then the field should be 64-bit aligned to prevent tearing during atomic access, which usually means the object should be 64-bit aligned. That is now the case for Wasm structs and arrays in the shared space.

    side quests

    Right, we’ve covered what to me are the main stories of V8’s GC over the past couple years. But let me mention a few funny side quests that I saw.

    the heuristics two-step

    This one I find to be hilariousad. Tragicomical. Anyway I am amused. So any real GC has a bunch of heuristics: when to promote an object or a page, when to kick off incremental marking, how to use background threads, when to grow the heap, how to choose whether to make a minor or major collection, when to aggressively reduce memory, how much virtual address space can you reasonably reserve, what to do on hard out-of-memory situations, how to account for off-heap mallocated memory, how to compute whether concurrent marking is going to finish in time or if you need to pause... and V8 needs to do this all in all its many configurations, with pointer compression off or on, on desktop, high-end Android, low-end Android, iOS where everything is weird, something called Starboard which is apparently part of Cobalt which is apparently a whole new platform that YouTube uses to show videos on set-top boxes, on machines with different memory models and operating systems with different interfaces, and on and on and on. Simply tuning the system appears to involve a dose of science, a dose of flailing around and trying things, and a whole cauldron of witchcraft. There appears to be one person whose full-time job it is to implement and monitor metrics on V8 memory performance and implement appropriate tweaks. Good grief!

    mutex mayhem

    Toon Verwaest noticed that V8 was exhibiting many more context switches on macOS than Safari, and identified V8’s use of platform mutexes as the problem. So he rewrote them to use os_unfair_lock on macOS. Then he implemented adaptive locking on all platforms. Then... he removed it all and switched to abseil.

    Personally, I am delighted to see this patch series; I wouldn’t have thought that there was juice to squeeze in V8’s use of locking. It gives me hope that I will find a place to do the same in one of my projects :)

    ta-ta, third-party heap

    It used to be that MMTk was trying to get a number of production language virtual machines to support abstract APIs so that MMTk could slot in a garbage collector implementation. Though this seems to work with OpenJDK, with V8 I think the churn rate and laser-like focus on the browser use-case makes an interstitial API abstraction a lose. V8 removed it a little more than a year ago.

    fin

    So what’s next? I don’t know; it’s been a while since I have been to Munich to drink from the source. That said, shared-memory multithreading and wasm effect handlers will extend the memory management hacker’s full employment act indefinitely, not to mention actually landing and shipping conservative stack scanning. There is a lot to be done in non-browser V8 environments, whether in Node or on the edge, but it is admittedly harder to read the future than the past.

    In any case, it was fun taking this look back, and perhaps I will have the opportunity to do this again in a few years. Until then, happy hacking!


      Jiri Eischmann: How We Streamed OpenAlt on Vhsky.cz

      news.movim.eu / PlanetGnome • 13 November 2025 • 8 minutes

    The blog post was originally published on my Czech blog.

    When we launched Vhsky.cz a year ago, we did it to provide an alternative to the near-monopoly of YouTube. I believe video distribution is so important today that it’s a skill we should maintain ourselves.

    To be honest, it’s bothered me for the past few years that even open-source conferences simply rely on YouTube for streaming talks, without attempting to secure a more open path. We are a community of tech enthusiasts who tinker with everything and take pride in managing things ourselves, yet we just dump our videos onto YouTube, even when we have the tools to handle it internally. Meanwhile, it’s common for conferences abroad to manage this themselves. Just look at FOSDEM or Chaos Communication Congress.

    This is why, from the moment Vhsky.cz launched, my ambition was to broadcast talks from OpenAlt—a conference I care about and help organize. The first small step was uploading videos from previous years. Throughout the year, we experimented with streaming from OpenAlt meetups. We found that it worked, but a single stream isn’t quite the stress test needed to prove we could handle broadcasting an entire conference.

    For several years, Michal Vašíček has been in charge of recording at OpenAlt, and he has managed to create a system where he handles recording from all rooms almost single-handedly (with assistance from session chairs in each room). All credit to him, because other conferences with a similar scope of recordings have entire teams for this. However, I don’t have insight into this part of the process, so I won’t focus on it. Michal’s job was to get the streams to our server; our job was to get them to the viewers.

    OpenAlt’s AV background with running streams. Author: Michal Stanke.

    Stress Test

    We only got to a real stress test the weekend before the conference, when Bashy prepared a setup with seven streams at 1440p resolution. This was exactly what awaited us at OpenAlt. Vhsky.cz runs on a fairly powerful server with a 32-core i9-13900 processor and 96 GB of RAM. However, it’s not entirely dedicated to PeerTube. It has to share the server with other OSCloud services (OSCloud is a community hosting effort for open-source web services).

    We hadn’t been limited by performance until then, but seven 1440p streams were truly at the edge of the server’s capabilities, and streams occasionally dropped. In reality, this meant 14 continuous transcoding processes, as we were streaming in both 1440p and 480p. Even if you don’t change the resolution, you still need to transcode the video to leverage useful distribution features, which I’ll cover later. The 480p resolution was intended for mobile devices and slow connections.

    Remote Runner

    We knew the Vhsky.cz server alone couldn’t handle it. Fortunately, PeerTube allows for the use of “remote runners”. The PeerTube instance sends video to these runners for transcoding, while the main instance focuses only on distributing tasks, storage, and video distribution to users. However, it’s not possible to do some tasks locally and offload others. If you switch transcoding to remote runners, they must handle all the transcoding. Therefore, we had to find enough performance somewhere to cover everything.

    I reached out to several hosting providers known to be friendly to open-source activities. Adam Štrauch from Roští.cz replied almost immediately, saying they had a backup machine that they had filed a warranty claim for over the summer and hadn’t tested under load yet. I wrote back that if they wanted to see how it behaved under load, now was a great opportunity. And so we made a deal.

    It was a truly powerful machine: a 48-core Ryzen with 1 TB of RAM. Nothing else was running on it, so we could use all its performance for video transcoding. After installing the runner on it, we passed the stress test. As it turned out, the server with the runner still had a large reserve. For a moment, I toyed with the idea of adding another resolution to transcode the videos into, but then I decided we’d better not tempt fate. The stress test showed us we could keep up with transcoding, but not how it would behave with all the viewers. The performance reserve could come in handy.

    Load on the runner server during the stress test. Author: Adam Štrauch.

    Smart Video Distribution

    Once we solved the transcoding performance, it was time to look at how PeerTube would handle video distribution. Vhsky.cz has a bandwidth of 1 Gbps, which isn’t much for such a service. If we served everyone the 1440p stream, we could serve a maximum of 100 viewers. Fortunately, another excellent PeerTube feature helps with this: support for P2P sharing using HLS and WebRTC.

    Thanks to this, every viewer (unless they are on a mobile device using mobile data) also becomes a peer and shares the stream with others. The more viewers watch the stream, the more they share the video among themselves, and the server load doesn’t grow at the same rate.

    A two-year-old stress test conducted by the PeerTube developers themselves gave us some idea of what Vhsky could handle. They created a farm of 1,000 browsers, simulating 1,000 viewers watching the same stream or VOD. Even though they used a relatively low-performance server (quad-core i7-8700 CPU @ 3.20 GHz, slow hard drive, 4 GB RAM, 1 Gbps connection), they managed to serve 1,000 viewers, primarily thanks to data sharing between them. For VOD, this saved up to 98% of the server’s bandwidth; for a live stream, it was 75%:

    If we achieved a similar ratio, then even after subtracting 200 Mbps for overhead (running other services, receiving streams, data exchange with the runner), we could serve over 300 viewers at 1440p and multiples of that at 480p. Considering that OpenAlt had about 160 online viewers in total last year, this was a more than sufficient reserve.
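
    As a back-of-envelope check of that estimate (the per-viewer bitrate of 10 Mbps is an assumption inferred from “1 Gbps could serve a maximum of 100 viewers” above):

```python
# Rough capacity estimate; the per-viewer 1440p bitrate is an
# assumption inferred from "1 Gbps serves at most 100 viewers".

LINK_MBPS = 1000        # server uplink
OVERHEAD_MBPS = 200     # other services, stream ingest, runner traffic
STREAM_MBPS = 10        # assumed 1440p bitrate per viewer
P2P_SAVINGS = 0.75      # live-stream savings from the PeerTube test

cost_per_viewer = STREAM_MBPS * (1 - P2P_SAVINGS)        # 2.5 Mbps
viewers = (LINK_MBPS - OVERHEAD_MBPS) / cost_per_viewer  # 320.0
```

    That lands right at the “over 300 viewers at 1440p” figure, with 480p viewers costing a fraction as much.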

    Live Operation

    On Saturday, Michal fired up the streams and started sending video to Vhsky.cz via RTMP. And it worked. The streams ran smoothly and without stuttering. In the end, we had a maximum of tens of online viewers at any one time this year, which posed no problem from a distribution perspective.

    In practice, the server data download savings were large even with just 5 peers on a single stream and resolution.

    Our solution, which PeerTube allowed us to flexibly assemble from servers in different data centers, has one disadvantage: it creates some latency. In our case, however, this meant the stream on Vhsky.cz was about 5-10 seconds behind the stream on YouTube, which I don’t think is a problem. After all, we’re not broadcasting a sports event.

    Diagram of the streaming solution for OpenAlt. Labels in Czech, but quite self-explanatory.

    Minor Problems

    We did, however, run into minor problems and gained experience that one can only get through practice. During Saturday, for example, we found that the stream would occasionally drop from 1440p to 480p, even though the throughput should have been sufficient. This was because the player felt that the delivery of stream chunks was delayed and preemptively switched to a lower resolution. Setting a higher cache increased the stream delay slightly, but it significantly reduced the switching to the lower resolution.

    Subjectively, even 480p wasn’t a problem. Most of the screen was taken up by the red frame with the OpenAlt logo and the slides. The speaker was only in a small window. The reduced resolution only caused slight blurring of the text on the slides, which I wouldn’t even have noticed as a problem if I wasn’t focusing on it. I could imagine streaming only in 480p if necessary. But it’s clear that expectations regarding resolution are different today, so we stream in 1440p when we can.

    Over the whole weekend, the stream from one room dropped for about two talks. For some rooms, viewers complained that the stream was too quiet, but that was an input problem. This issue was later fixed in the recordings.

    When uploading the talks as VOD (Video on Demand), we ran into the fact that PeerTube itself doesn’t support bulk uploads. However, tools exist for this, and we’d like to use them next time to make uploading faster and more convenient. Some videos also uploaded with the wrong orientation, which was likely a problem in their metadata, as PeerTube wasn’t the only player that displayed them that way. YouTube, however, managed to handle it. Re-encoding them solved the problem.

    On Saturday, to save performance, we also tried transcoding the first finished talk videos on the external runner. For these, a bar was displayed with a message that the video had failed to save to external storage, even though it was clearly stored in object storage. In the end we had to re-upload them, because they were available to watch but not indexed.

    A small interlude – my talk about PeerTube at this year’s OpenAlt. Streamed, of course, via PeerTube:

    Thanks and Support

    I think that for our very first time doing this, it turned out very well, and I’m glad we showed that the community can stream such a conference using its own resources. I would like to thank everyone who participated. From Michal, who managed to capture footage in seven lecture rooms at once, to Bashy, who helped us with the stress test, to Archos and Schmaker, who did the work on the Vhsky side, and Adam Štrauch, who lent us the machine for the external runner.

    If you like what we do and appreciate that someone is making OpenAlt streams and talks available on an open platform without ads and tracking, we would be grateful if you supported us with a contribution to one of OSCloud’s accounts, under which Vhsky.cz runs. PeerTube is a great tool that allows us to operate such a service without having Google’s infrastructure, but it doesn’t run for free either.


      Ignacy Kuchciński: Digital Wellbeing Contract: Screen Time Limits

      news.movim.eu / PlanetGnome • 10 November 2025 • 4 minutes

    It’s been four months since my last Digital Wellbeing update. In that previous post I talked about the goals of the Digital Wellbeing project. I also described our progress improving and extending the functionality of the GNOME Parental Controls application, as well as redesigning the application to meet the current design guidelines.

    Introducing Screen Time Limits

    Following our work on the Parental Controls app, the next major work item was to implement screen time limits functionality, offering parents the ability to check their child’s screen time usage, set time limits, and lock the child’s account outside of a specified curfew. This feature actually spanned *three* different GNOME projects:

    • Parental Controls: a Screen Time page was added to the Parental Controls app, so that parents can view their child’s screen usage and set time limits; it also includes a detailed bar chart
    • Settings: the Wellbeing panel needed to make its own time limits settings impossible to change when the child has parental controls session limits enabled, since they are not in effect in that situation. There’s now also a banner with an explanation that guides to the Parental Controls app
    • Shell: Child session needed to actually lock when the limit was reached

    Of the three above, the Parental Controls and Shell changes have already been merged, while the Settings integration has been through informal review during the bi-weekly Settings meeting and adjusted to the feedback, so it’s only a matter of time before it reaches the main branch as well. You can find screenshots of the added functionality below, and the reference designs can be found in the app-mockups and os-mockups tickets.

    Child screen usage

    When viewing a managed account, a summary of screen time is shown, along with actions to access additional settings for restrictions and filtering.

    Child account view with added screen time overview and action for more options

    The Screen Time view shows an overview of the child’s account’s screen time, as well as controls, mirroring those of the Settings panel, for the child’s screen limits and downtime.

    Screen Time page with detailed screen time records and time limit controls

    Settings integration

    On the Settings side, a child account will see a banner in the Wellbeing panel that lets them know some settings cannot be changed, with a link to the Parental Controls app.

    Wellbeing panel with a banner informing that limits can only be changed in Parental Controls

    Screen limits in GNOME Shell

    We have implemented the locking mechanism in GNOME Shell. When a Screen Time limit is reached, the session locks, so that the child can’t use the computer for the rest of the day.

    Following is a screen cast of the Shell functionality:

    Preventing children from unlocking the session has not been implemented yet. Fortunately, the hardest part was implementing the framework for the rest of the code, so hopefully the easier graphical change will take less time to implement, and the next update will come much sooner than this one.

    GNOME OS images

    You don’t have to take my word for it, especially since one can notice I had to cut the recording at one point (I forgot that one can’t switch users in the lock screen :P) – you can check out all of these features in the very same GNOME OS live image I used in the recording, which you can either run in GNOME Boxes or try on your hardware if you know what you’re doing 🙂

    Malcontent changes

    While all of these user-facing changes look cool, none of them would be possible without the malcontent backend, which Philip Withnall has been working on. While the daily schedule had already been implemented, the daily-limit session limit had to be added, as well as a malcontent timer daemon API for the Shell to use. There have been many other improvements: a web filtering daemon has been added, which I’ll use in the future to implement the Web Filtering page in the Parental Controls app.

    Conclusion

    Our work for the GNOME Foundation is funded by Endless and Dalio Philanthropies, so kudos to them! I want to thank Florian Müllner for his patience during the merge request review, which was very educational for me, and for answering all of my Shell development questions. I also want to thank Matthijs Velsink and Felipe Borges for finding time to review the Settings integration.

    Now that this foundation has been laid, we’ll focus on finishing the last remaining bits of session limits support in the Shell: tweaking the appearance of the lock screen when the limit is reached, implementing the ignore button for extending the screen limit, and adding notifications. That will be followed by Web Filtering support in Parental Controls. Until the next update!


      Luis Villa: Three LLM-assisted projects

      news.movim.eu / PlanetGnome • 10 November 2025 • 7 minutes

    Some notes on my first serious coding projects in something like 20 years, possibly longer. If you’re curious what these projects mean, more thoughts over on the OpenML.fyi newsletter.

    TLDR

    A GitHub contribution graph, showing a lot of activity in the past three weeks after virtually none the rest of the year.

    News, Fixed

    The “Fix The News” newsletter is a pillar of my mental health these days, bringing me news that the world is not entirely going to hell in a handbasket. And my 9yo has repeatedly noted that our family news diet is “broken” in exactly the way Fix The News is supposed to fix—hugely negative, hugely US-centric. So I asked Claude to create a “newspaper” version of FTN—a two-page PDF of some highlights. It was a hit.

    So I’ve now been working with Claude Code to create and gradually improve a four-days-a-week “News, Fixed” newspaper. This has been super-fun for the whole family—my wife has made various suggestions over my shoulder, my son devours it every morning, and it’s the first serious coding project I’ve tackled in ages. It is almost entirely personal (it still has hard-coded Duke Basketball schedules) but is nevertheless public and FOSS. (It is even my first use of reuse.software—and also of SonarQube Server!)

    Example newspaper here.

    No matter how far removed you are from practical coding experience, I cannot recommend enough finding a simple, fun project like this that scratches a human itch in your life, and using the project to experiment with the new code tools.

    Getting Things Done assistant

    While working on News, Fixed, a friend pointed out Steve Yegge’s “beads”, which reimagines software issue tracking as an LLM-centric activity—json-centric, tracked in git, etc. At around the same time, I was also pointed at Superpowers—essentially, canned “skills” like “teach the LLM, temporarily, how to brainstorm”.

    The two of these together in my mind screamed “do this for your overwhelmed todo list”. I’ve long practiced various bastardized versions of Getting Things Done, but one of the hangups has been that I’m inconsistent about doing the daily/weekly/nth-ly reviews that good GTD really relies on. I might skip a step, or not look through all my huge “someday-maybe” list, or… any of many reasons one can be tired and human when faced with a wall of text. Also, while there are many tools out there to do GTD, in my experience they either make some of the hardest parts (like the reviews) your problem, or they don’t quite fit with how I want to do GTD, or both. Hacking on my own prompts to manage the reviews seems to fit these needs to a T.

    I currently use Amazing Marvin as my main GTD tool. It is funky and weird and I’ve stuck with it much longer than any other task tracker I’ve ever used. So what I’ve done so far:

    • wrapped the Marvin API to extract json
    • discovered the Marvin API is very flaky, so done some caching and validation
    • written a lot of prompts for the various phases/tasks in GTD. These work to varying degrees and I really want to figure out how to collaborate with others on them, because I suspect that as more tools offer LLM-ish APIs (whoa, todoist!) these prompts are where the real fun and action will be.
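
    As an illustration of the caching-and-validation step above, a minimal sketch; the endpoint URL, cache path, and task fields here are invented, not the real Marvin API:

```python
# Hedged sketch of wrapping a flaky JSON API with a cache and a
# validation pass; URL, cache path, and field names are made up.
import json
import time
import urllib.request
from pathlib import Path

CACHE = Path("marvin_cache.json")
API_URL = "https://example.invalid/api/todos"  # placeholder, not Marvin's

def validate(tasks):
    """Drop records that don't look like tasks (flaky API output)."""
    return [t for t in tasks if isinstance(t, dict) and t.get("title")]

def fetch_tasks(max_age_s=3600):
    """Serve a fresh cache; on API failure, fall back to stale cache."""
    if CACHE.exists() and time.time() - CACHE.stat().st_mtime < max_age_s:
        return json.loads(CACHE.read_text())
    try:
        with urllib.request.urlopen(API_URL, timeout=10) as resp:
            tasks = validate(json.load(resp))
        CACHE.write_text(json.dumps(tasks))
        return tasks
    except OSError:
        return json.loads(CACHE.read_text()) if CACHE.exists() else []
```

    The validation pass is what makes the flakiness tolerable: malformed records are dropped rather than poisoning the prompts downstream.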

    This is all read-only right now because of limitations in the Marvin API, but for various reasons I’m not yet ready to embark on building my own entire UI. So this will do for now. This code, therefore, is very much specific to me. The prompts, on the other hand…

    Note that my emphasis is not on “do tasks”, it is on helping me stay on priority. Less “chief of staff”, more “executive assistant”—both incredibly valuable when done well, but different roles. This is different from some of the use examples for Yegge’s Beads, which really are around agents.

    Also note: the results have been outstanding. I’m getting more easily into my doing zone, I think largely because I have less anxiety about staring at the Giant Wall of Tasks that defines the life of any high-level IC. And my projects are better organized and todos feel more accurate than they have been in a long time, possibly ever.

    a note on LLMs and issue/TODO tracking

    It is worth noting that while LLMs are probabilistic/lossy, so they can’t find the “perfect” next TODO to work on, that’s OK. Personal TODO and software issue tracking are inherently subjective, probabilistic activities—there is no objectively perfect “next best thing to work on”, “most important thing to work on”, etc. So the fact that an LLM is only probabilistic in identifying the next task to work on is fine—no human can do substantially better. In fact I’m pretty sure that once an issue list is past a certain point, the LLM is likely to be able to do better—if (and like many things LLM, this is a big if) you can provide it with documented standards explaining how you want to do prioritization. (Literally one of the first things I did at my first job was write standards on how to prioritize bugs—the forerunner of this doc—so I have strong opinions, and experience, here.)

    Skills for license “concluded”

    While at a recent Linux Foundation event, I was shocked to realize how many very smart people haven’t internalized the skills/prompts/context stuff. It’s either “you chat with it” or “you train a model”. This is not their fault; it is hard to keep up!

    Of course this came up most keenly in the context of the age-old problem of “how do I tell what license an open source project is under”. In other words, what is the difference between “I have scanned this” and “I have reached the zen state of SPDX’s ‘concluded’ field”.

    So … yes, I’ve started playing with scripts and prompts on this. It’s much less far along than the other two projects above, but I think it could be very fruitful if structured correctly. Some potentially big benefits above and beyond the traditional scanning and/or throw-a-lawyer-at-it approaches:

    • reporting: my very strong intuition, admittedly not yet tested, is that plain-English reports on factors below, plus links into repos, will be much easier for lawyers to use as a starting point than the UIs of traditional license-scanner tools. And I suspect ultimately more powerful as well, since they’ll be able to draw on some of the things below.
    • context sensitivity: unlike a regexp, an LLM can likely fairly reliably understand from context some of the big failures of traditional pattern matching like “this code mentions license X but doesn’t actually include it”.
    • issue analysis and change analysis: unlike traditional approaches, LLMs can look at the change history of key files like README and LICENSE and draw useful context from them. “oh hey README mentioned a license change on Nov. 9, 2025, here’s what the change was and let’s see if there are any corresponding issues and commit logs that explain this change” is something that an LLM really can do. (Also it can do that with much more patience than any human.)
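To make the change-analysis idea concrete, here is the kind of prompt-building helper I have in mind. It is a hypothetical sketch: the function name and report structure are mine, the history text would come from something like `git log --follow --oneline -- LICENSE README.md`, and the actual model call is omitted, since any chat-completion API would do:

```python
def build_license_prompt(history_log: str) -> str:
    """Assemble an LLM prompt for license change analysis from git history.
    The model call itself is deliberately left out of this sketch."""
    return (
        "You are auditing the licensing of an open source project.\n"
        "Below is the commit history of its LICENSE and README files:\n\n"
        f"{history_log}\n\n"
        "In plain English, with commit references, report: (1) any license "
        "changes and when they happened, (2) whether the README's stated "
        "license matches the included LICENSE text, and (3) anything a human "
        "reviewer should double-check before concluding a license."
    )

print(build_license_prompt("a1b2c3 Relicense from MIT to Apache-2.0"))
```

The point of the plain-English framing is exactly the reporting benefit above: the output is a starting point for a lawyer, not a conclusion.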

    ClearlyDefined offers test data on this, by the way — I’m really looking forward to seeing if this can be made actually reliable or not. (And then we can hook up reuse.software on the backend to actually improve the upstream metadata…)

    But even then, I may not ever release this. There’s a lot of real risks here and I still haven’t thought them through enough to be comfortable with them. That’s true even though I think the industry has persistently overstated its ability to reach useful conclusions about licensing, since it so persistently insists on doing licensing analysis without ever talking to maintainers.

    More to come?

    I’m sure there will be more of these. That said, one of the interesting temptations of this is that it is very hard to say “something is done” because it is so easy to add more. (eg, once my personal homebrew News Fixed is done… why not turn it into a webapp? once my GTD scripts are done… why not port the backend? etc. etc.) So we’ll see how that goes.

      Victor Ma: Google Summer of Code final report

      news.movim.eu / PlanetGnome • 7 November 2025 • 6 minutes

    For Google Summer of Code 2025 , I worked on GNOME Crosswords . GNOME Crosswords is a project that consists of two apps: a crossword player and a crossword editor.

    Links

    Here are links to everything that I worked on.

    Merge requests

    Merge requests related to the word suggestion algorithm:

    1. Improve word suggestion algorithm
    2. Add word-list-tests-utils.c
    3. Refactor clue-matches-tests.c by using a fixture
    4. Use better test assert macros
    5. Add macro to reduce boilerplate code in clue-matches-tests.c
    6. Add a macro to simplify the test_clue_matches calls
    7. Add more tests to clue-matches-tests.c
    8. Use string parameter in macro function
    9. Add performance tests to clue-matches-tests.c
    10. Make phase 3 of word_list_find_intersection() optional
    11. Improve print functions for WordArray and WordSet

    Other merge requests:

    1. Fix and refactor editor puzzle import
    2. Add MIME sniffing to downloader
    3. Add support for remaining divided cell types in svg.c
    4. Fix intersect sort
    5. Fix rebus intersection
    6. Use a single suggested words list for Editor

    Issues submitted

    Issues I submitted on GitLab.

    Design documents

    Other documents

    Development:

    Word suggestion algorithm:

    Competitive analysis:

    Other:

    Blog posts

    1. Introducing my GSoC 2025 project
    2. Coding begins
    3. A strange bug
    4. Bugs, bugs, and more bugs!
    5. My first design doc
    6. It’s alive!
    7. When is an optimization not optimal?
    8. This is a test post

    Journal

    I kept a daily journal of the things that I was working on.

    Project summary

    I improved GNOME Crossword Editor’s word suggestion algorithm by re-implementing it as a forward-checking algorithm. Previously, our word suggestion algorithm only considered the constraints imposed by the intersection where the cursor is. This resulted in frequent dead-end word suggestions, which led to user frustration.

    To fix this problem, I re-implemented our word suggestion algorithm to consider the constraints imposed by every intersection in the current slot. This significantly reduces the number of dead-end word suggestions and leads to a better user experience.

    As part of this project, I also researched the field of constraint satisfaction problems and wrote a report on how we can use the AC-3 algorithm to further improve our word suggestion algorithm in the future.
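AC-3 is simple enough to sketch. Here is a generic, textbook-style illustration in Python (not Crosswords code, which is C), applied to a two-slot toy grid: it prunes each slot's candidate-word domain until every remaining value has support in the crossing slot's domain.

```python
from collections import deque

def ac3(domains, constraints):
    """domains: var -> set of values; constraints: (x, y) -> predicate(vx, vy).
    Prunes domains in place; returns False if any domain empties out."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # A value of x survives only if some value of y supports it.
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False
            # x's domain shrank, so re-check every arc pointing at x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# Toy grid: an across slot W O R _ crossing a down slot Z E R _ at their last cell.
domains = {"across": {"WORD", "WORM", "WORE"}, "down": {"ZERO"}}
constraints = {
    ("across", "down"): lambda a, d: a[3] == d[3],
    ("down", "across"): lambda d, a: d[3] == a[3],
}
print(ac3(domains, constraints))  # → False: no consistent fill exists
```

Running AC-3 before generating suggestions would propagate constraints across the whole grid, not just one slot's immediate neighbors.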

    I also performed a competitive analysis of other crossword editors on the market and wrote a detailed report, to help identify missing features and guide future development.

    Word suggestion algorithm improvements

    The goal of any crossword editor software is to make it as easy as possible to create a good crossword puzzle. To that end, all crossword editors have a feature called a word suggestion list . This is a dynamic list of words that fit the current slot. It helps the user find words that fit the slots on their grid.

    In order to generate the word suggestion list, crossword editors use a word suggestion algorithm . The simplest example of a word suggestion algorithm considers two constraints:

    • The size of the current slot.
    • The letters in the current slot.

    So for example, if the current slot is C A _ S , then this basic word suggestion algorithm would return all four-letter words that start with CA and end in S —such as CATS or CABS , but not COTS .
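As an illustration, the basic algorithm amounts to simple pattern filtering. This Python sketch is my own, not the Crosswords implementation (which is written in C):

```python
def matches(pattern, word):
    """True if word fits the slot, where '_' marks an empty cell."""
    return len(word) == len(pattern) and all(
        p == "_" or p == w for p, w in zip(pattern, word)
    )

word_list = ["CATS", "CABS", "COTS", "CAT"]
print([w for w in word_list if matches("CA_S", w)])  # → ['CATS', 'CABS']
```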

    The problem

    There is a problem with this basic word suggestion algorithm, however. Consider the following grid:

    +---+---+---+---+
    | | | | Z |
    +---+---+---+---+
    | | | | E |
    +---+---+---+---+
    | | | | R |
    +---+---+---+---+
    | W | O | R | | < current slot
    +---+---+---+---+
    

    4-Down begins with ZER , so the only word it can be is ZERO . This constrains the bottom-right cell to the letter O .

    4-Across starts with WOR . We know that the bottom-right cell must be O , so that means that 4-Across must be WORO . But WORO is not a word. So, 4-Down and 4-Across are both unfillable, because no letter fits in the bottom-right cell. This means that there are no valid word suggestions for either 4-Across or 4-Down.

    Now, suppose that the current slot is 4-Across. The basic algorithm only considers the constraints imposed by the current slot, and so it returns all words that match the pattern W O R _ —such as WORD and WORM . But none of these word suggestions actually fit in the slot—they all cause 4-Down to become some nonsensical word.

    The problem is that the basic algorithm only looks at the current slot, 4-Across. It does not also look at other slots, like 4-Down. Because of that, the algorithm doesn’t realize that 4-Down causes 4-Across to be unfillable. And so, the algorithm generates incorrect word suggestions.

    Our word suggestion algorithm

    Our word suggestion algorithm was a bit more advanced than this basic algorithm. Our algorithm considered two constraints:

    • The constraints imposed by the current slot.
    • The constraints imposed by the intersecting slot where the cursor is.

    This means that our algorithm could actually handle the problematic grid properly if the cursor is on the bottom-right cell. But not if the cursor is on any other cell of 4-Across:

    Broken behaviour

    Consequences

    All this means that our word suggestion algorithm was prone to generating dead-end words —words that seem to fit a slot, but that actually lead to an unfillable grid.

    In the problematic grid example I gave, this unfillability is immediately obvious. The user fills 4-Across with a word like WORM , and they instantly see that this turns 4-Down into ZERM , a nonsense word. That makes this grid not so bad.

    The worst cases are the insidious ones, where the fact that a word suggestion leads to an unfillable grid is not obvious at first. This leads to a ton of wasted time and frustration for the user.

    My solution

    To fix this problem, I re-implemented our word suggestion algorithm to account for the constraints imposed by all the intersecting slots. Now, our word suggestion algorithm correctly handles the problematic grid example:

    Fixed behaviour

    Our new algorithm doesn’t eliminate dead-end words entirely. After all, it only checks the intersecting slots of the current slot—it does not also check the intersecting slots of the intersecting slots, etc.

    However, the constraints a slot imposes on the current slot grow weaker the more intersections removed it is. Consider: for a slot that’s two intersections away from the current slot to constrain it, that slot must first constrain a mutual slot (a slot that intersects both of them) enough for the mutual slot to then constrain the current slot.

    Compare that to a slot that is only one intersection away from the current slot. All it has to do is be constrained enough that it limits what letters the intersecting cell can be.

    And so, although my changes do not eliminate dead-end words entirely, they do significantly reduce their prevalence, resulting in a much better user experience.
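The improvement can be sketched in a few lines of Python. This is my illustration of forward checking, not the actual Crosswords code; the `(cell index, crossing pattern, cell index)` representation of intersections is invented for the example:

```python
def fits(pattern, word):
    return len(word) == len(pattern) and all(
        p == "_" or p == w for p, w in zip(pattern, word)
    )

def suggest(pattern, crossings, words):
    """crossings: (cell index in this slot, crossing pattern, cell index there)."""
    suggestions = []
    for word in (w for w in words if fits(pattern, w)):
        viable = True
        for i, cross_pattern, j in crossings:
            # Place this word's letter into the crossing slot's pattern...
            test = cross_pattern[:j] + word[i] + cross_pattern[j + 1:]
            # ...and reject the word if no dictionary word completes that slot.
            if not any(fits(test, w) for w in words):
                viable = False
                break
        if viable:
            suggestions.append(word)
    return suggestions

words = ["ZERO", "WORD", "WORM", "WORE"]
# The basic algorithm happily suggests dead-end words for W O R _ ...
print(suggest("WOR_", [], words))                # → ['WORD', 'WORM', 'WORE']
# ...but checking the Z E R _ crossing at the shared cell rejects them all.
print(suggest("WOR_", [(3, "ZER_", 3)], words))  # → []
```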

    The end

    This concludes my Google Summer of Code 2025 project! I give my thanks to Jonathan Blandford for his invaluable mentorship and clear communication throughout the past six months. And I thank the GNOME Foundation for its participation in GSoC and commitment to open source.

      Bradley M. Kuhn: Managing Diabetes in Software Freedom

      news.movim.eu / PlanetGnome • 6 November 2025 • 4 minutes

    [ The below is a cross-post of an article that I published on my blog at Software Freedom Conservancy . ]

    Our member project representatives and others who collaborate with SFC on projects know that I've been on part-time medical leave this year. As I recently announced publicly on the Fediverse , I was diagnosed in March 2025 with early-stage Type 2 Diabetes. I had no idea that the diagnosis would become a software freedom and users' rights endeavor.

    After the diagnosis, my doctor suggested immediately that I see the diabetes nurse-practitioner specialist in their practice. It took some time to get an appointment with him, so I first saw him in mid-April 2025.

    I walked into the office, sat down, and within minutes the specialist asked me to “take out your phone and install the Freestyle Libre app from Abbott”. This is the first (but, will probably not be the only) time a medical practitioner asked me to install proprietary software as the first step of treatment.

    The specialist told me that in his experience, even early-stage diabetics like me should use a Continuous Glucose Monitor ( CGM ). CGMs are an amazing (relatively) recent invention that allows diabetics to sample their blood sugar level constantly. As we software developers and engineers know: great things happen when your diagnostic readout is as low latency as possible. CGMs lower the latency of readouts from 3–4 times a day to every five minutes . For example, diabetics can see what foods are most likely to cause blood sugar spikes for them personally. CGMs put patients on a path to manage this chronic condition well.

    But, the devices themselves, and the (default) apps that control them are hopelessly proprietary. Fortunately, this was (obviously) not my first time explaining FOSS from first principles. So, I read through the license and terms and conditions of the ironically named “Freestyle Libre” app, and pointed out to the specialist how patient-unfriendly the terms were. For example, Abbott (the manufacturer of my CGM ) reserves the right to collect your data (anonymously of course, to “improve the product”). They also require patients to agree that if they take any action to reverse engineer, modify, or otherwise do the normal things our community does with software, the patient must agree that such actions “constitute immediate, irreparable harm to Abbott, its affiliates, and/or its licensors”. I briefly explained to the specialist that I could not possibly agree. Still sitting with the specialist, I began a real-time search for a FOSS solution.

    As I was searching, the specialist said: “Oh, I don't use any of it myself, but I think I've heard of this ‘open source’ thing — there is a program called xDrip+ that is for insulin-dependent diabetics that I've heard of and some patients report it is quite good”.

    While I'm (luckily) very far from insulin-dependency, I eventually found the FOSS Android app called Juggluco (a portmanteau for “Juggle glucose”). I asked the specialist to give me the prescription and I'd try Juggluco to see if it would work.

    CGMs are very small and their firmware is (by obvious necessity) quite simple. As such, their interfaces are standard. CGMs are activated with Near Field Communication ( NFC ) — available on even quite old Android devices. The Android device sends a simple integer identifier via NFC that activates the CGM. Once activated — and through the 15-day life of the device — the device responds via Bluetooth with the patient's current glucose reading to any device presenting that integer.
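The flow described above can be caricatured in a toy simulation. To be clear, every name and value here is invented for illustration; this is not Abbott's protocol, Juggluco's code, or real NFC/Bluetooth I/O — just the activate-then-poll shape of the interaction:

```python
class SimulatedCGM:
    """Toy model of the activate-then-poll flow described above."""

    def __init__(self):
        self.activation_id = None  # set by the NFC tap

    def nfc_activate(self, activation_id: int):
        # The phone taps the sensor and hands it an integer identifier.
        self.activation_id = activation_id

    def bluetooth_read(self, presented_id: int):
        # Only a device presenting the activation integer gets readings.
        if self.activation_id is None or presented_id != self.activation_id:
            return None
        return 98  # mg/dL -- an arbitrary placeholder value

sensor = SimulatedCGM()
sensor.nfc_activate(42)
print(sensor.bluetooth_read(42))  # → 98
print(sensor.bluetooth_read(7))   # → None
```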

    Fortunately, I quickly discovered that the FOSS community was already “on this”. The NFC activation worked just fine, even on the recently updated “Freestyle Libre 3+”. After the sixty-minute calibration period, I had a continuous readout in Juggluco.

    CGMs' lower-latency feedback enables diabetics to have more control of their illness management. One example among many: the patient can see (in real time) what foods most often cause blood sugar spikes for them personally . Diabetes hits everyone differently; data allows everyone to manage their own chronic condition better.

    My personal story with Juggluco will continue — as I hope (although not until after FOSDEM 2026 😆) to become an upstream contributor to Juggluco. Most importantly, I hope to help the app appear in F-Droid. (I must currently side-load or use Aurora Store to make it work on LineageOS.)

    Fitting with the history that many projects that interact with proprietary technology must so often live through, Juggluco has faced surreptitious removal from Google's Play Store . Abbott even accused Juggluco of using their proprietary libraries and encryption methods, but the so-called “encryption method” is literally sending a single integer as part of NFC activation.

    While Abbott backed off, this is another example of why the movement of patients taking control of the technology remains essential. FOSS fits perfectly with this goal. Software freedom gives control of technology to those who actually rely on it — rather than for-profit medical equipment manufacturers.

    When I returned to my specialist for a follow-up, we reviewed the data and graphs that I produced with Juggluco. I, of course, have never installed, used, or even agreed to Abbott's licenses and terms, so I have never seen what the Abbott app does. I was thus surprised when I showed my specialist Juggluco's summary graphs. He excitedly told me “this is much better reporting than the Abbott app gives you!”. We all know that sometimes proprietary software has better and more features than the FOSS equivalent, so it's a particularly great success when our community's efforts outdo a wealthy 200-billion-dollar megacorp on software features!


    Please do watch SFC's site in 2026 for more posts about my ongoing work with Juggluco, and please give generously as an SFC Sustainer to help this and our other work continue in 2026!

      Jordan Petridis: DHH and Omarchy: Midlife crisis

      news.movim.eu / PlanetGnome • 6 November 2025 • 11 minutes

    A couple of weeks ago, Cloudflare announced it would be sponsoring some Open Source projects. Throwing money at the pet projects of random techbros would hardly be news, but there was a certain vibe behind the projects and the people leading them.

    In an unexpected turn of events, the millionaire receiving money from the billion-dollar company thought it would be important to devote a whole blog post to a random brokeboy from Athens who had an opinion on the Internet .

    I was astonished to find the blog post. Now that I’ve moved from normal stalkers to millionaire stalkers, is it a sign that I’ve made it? Have I become such a menace? But more importantly: who the hell even is this guy?

    D-H-Who?

    When I was painting with crayons in a deteriorating kindergarten somewhere in Greece, DHH, David Heinemeier Hansson , was busy unleashing Ruby on Rails on the world and becoming a niche tech celebrity. His street cred for releasing Ruby on Rails would later be replaced by his writing on remote work. Famously authoring “Remote: Office Not Required”, a book based on his own company, 37signals.

    That cultural cachet would go out the window in 2022, when he got in hot water with his own employees after an internal review process concluded that 37signals had been less than stellar when it came to handling race and diversity. Said review process culminated in a clash: the employees were interested in further exploration of the topic, to which DHH responded with “You are the person you are complaining about” (meaning: you, by pointing out a problem, are the problem).

    No politics at work

    This incident led the two founders of 37signals to the executive decision to forbid any kind of “ societal and political discussions ” inside the company, which, predictably, led to a third of the company resigning in protest. This was a massive blow to 37signals. The company was famous for being extremely selective when hiring, as well as affording employees great benefits. Suddenly having a third of the workforce resign over disagreement with management sent a far more powerful message than anything they could have imagined.

    It would become the starting point for the downward and radicalizing spiral, and the extended and very public crashout, that DHH would go through in the coming years.

    Starting your own conference so you can never be banned from it

    Subsequently, DHH was uninvited from keynoting at RailsConf, on account of everyone being grossed out by the handling of the matter and in solidarity with the community members and the employees who quit in protest.

    That, in turn, would lead to the creation of the Rails Foundation and the start of Rails World, a new conference about Rails that 100%-swear-to-god was not just about DHH having his own conference where he could keynote and would never be banned.

    In the following years DHH would go on to explore and express the full spectrum of “down the alt-right pipeline” opinions, like:

    Omarchy

    You either log off a hero, or you see yourself create another Linux distribution, and having failed the first part, DHH has been pouring his energy into creating a new project, while letting everyone know how much he prefers that to going to therapy . Thus, Omarchy was born: a set of copy-pasted window manager and Vim configs turned distro, and one of the two projects that Cloudflare will be proudly funding shortly. The only possible option for the compositor would be Hyprland , and even though it’s Wayland (bad!), it’s one of the good-non-woke ones. In a similar tone, the project website would come to feature the tight integration of Omarchy with SuperGrok .

    Rubygems

    On a parallel track, the entire Ruby community more or less collapsed in the last two months. Long story short: one of the major Ruby Central sponsors, Sidekiq, pulled its funding after DHH was invited to speak at RailsConf 2025. Shopify, where DHH sits on the board of directors, was quick to save the day and match the lost funding. Coincidentally, an (alleged) takeover of key parts of the Ruby infrastructure was carried out by Ruby Central, and those parts were placed under the control of Shopify in the following weeks.

    This story is ridiculous, and the entire Ruby community is imploding in its wake. There’s an excellent write-up of the story so far here .

    On a similar note, and at the same time, we also find DHH drooling over Off-brand Peter Thiel and calling for an Anduril takeover of the Nix community in order to purge all the wokes.

    On Framework

    At the same time, Framework had been promoting Omarchy on their social media accounts for a good while. And DHH, in turn, has been posting about how great Framework hardware is and how the Framework CEO is contributing to his Arch Linux reskin. On October 8th, Framework announced its sponsorship of the Hyprland project, following 37signals doing the same thing a couple of weeks earlier . On the same day they made another post promoting Omarchy yet again. This caused a huge backlash and an overall PR nightmare, with the apex being a forum thread with over 1700 comments so far.

    The first reply in the forum post comes from Nirav, Framework’s CEO, with a very questionable choice of words:

    We support open source software (and hardware), and partner with developers and maintainers across the ecosystem. We deliberately create a big tent, because we want open source software to win. We don’t partner based on individual’s or organization’s beliefs, values, or political stances outside of their alignment with us on increasing the adoption of open source software.

    I definitely understand that not everyone will agree with taking a big tent approach, but we want to be transparent that bringing in and enabling every organization and community that we can across the Linux ecosystem is a deliberate choice.

    Mentioning a “big tent” twice, as the official policy and response to complaints about supporting Fascist and Racist shitheads, is nothing short of digging yourself a hole so deep that it reemerges on another continent.

    Later on, Nirav would mention that they were finalizing sponsorship of the GNOME Foundation (12k/year) and KDE e.V. (10k/year). In the linked page you can also find a listing of Rails World (DHH’s personal conference) for a one time payment of 24k dollars.

    There has not been an update since, and at no point have they addressed their support of and collaboration with DHH. Can’t lose the cash cow and free Twitter clout, I guess.

    While I personally would like to see the donation rejected, I am not involved with the ongoing discussion on the GNOME Foundation side, nor with the Foundation itself. What I can say is that I and others from the GNOME OS team were involved in initial discussions with Framework about future collaborations and hardware support. GNOME OS, much like the GNOME Flatpak runtime, is very useful as a reference point for identifying whether a bug, in hardware or software, is distro-specific or not.

    It’s been a month since the initial debacle with Framework. Regardless of what the GNOME Foundation plans to do, the GNOME OS team certainly does not feel comfortable with further collaboration, given how they have handled the situation so far. It’s sad, because the people working there understand the issue, but this does not seem to be a trait shared by the management.

    A software midlife crisis

    During all this, DHH decided that his attention must be devoted to getting into a mouth-off with a Greek kid who called him a Nazi. Since this is not violence (see “ Words are not violence ” essay), he decided to respond in kind, by calling for violence against me (see “ Words are violence ” essay).

    To anyone who knows a nerd or two over the age of 35, all of the above is unsurprising. This is not some grand heel turn, or some brainwashing that DHH suffered. This is straight up a midlife crisis turned fash speedrun.

    Here’s a dude who barely had any time to confront the world before falling into an infinite money glitch in the form of Ruby on Rails, Jeff Bezos throwing him crazy money, Apple bundling his software as a highlighted feature, and becoming a “new work” celebrity and Silicon Valley “Guru”. Is it any surprise that such a person would later find the most minuscule kind of opposition to be an all-out attack on his self-image?

    DHH has never had the “best” opinions on a range of things, and they have been dutifully documented by others, but neither have many other developers that are also ignorant of topics outside of software. Being insecure about your hairline and masculine aesthetic to the point of adopting the Charles Manson haircut to cover your balding is one thing. However, it is entirely different to become a drop-shipped version of Elon, tweeting all day and stopping only to write opinion pieces that come off as proving others wrong rather than original thoughts.

    Case in point: DHH recently wrote about “men who’d prefer to feel useful over being listened to”. The piece is unironically titled “ Building competency is better than therapy ”. It is an insane read, and I’ll speculate that it reads as if someone DHH can’t outright dismiss suggested he go to therapy. It’s a very “I’ll show you off in front of my audience” kind of text.

    Add to that a three-year speedrun decrying the “theocracy of DEI” and the seemingly authoritarian powers of “the wokes”, all coincidentally starting after he could not get over his employees disagreeing with him on racial sensitivities.

    How can someone suggest his workers read Ta-Nehisi Coates’s “Between the World and Me” and Michelle Alexander’s “The New Jim Crow” in the aftermath of George Floyd’s killing and the BLM protests, while a couple of months later writing salivating blogposts after the EDL eugenics rally in England and giving the highest possible praise to Tommy Robinson ?

    Can these people be redeemed?

    It is certainly not going to help that niche celebrities, like DHH, still hold clout and financial power and are able to spout the worst possible takes without any backlash because of their position.

    A bunch of Ruby developers recently started a petition to get DHH distanced from the community, and it didn’t go far before getting brigaded by the worst people you didn’t need to know existed. This of course was amplified to oblivion by DHH and a bunch of sycophants chasing the clout provided by being retweeted by DHH. It would shortly be followed by yet another “ I’m never wrong ” piece.

    Is there any chance for these people, who are shielded by their well-paying jobs, an exclusively occupational media diet, and stimuli that all happen to reinforce their default world view?

    I think there is hope, but it demands more voices in tech spaces speaking up about how having empathy for others or valuing diversity is not some grand conspiracy, but rather an enrichment of our lives and spaces. This comes hand in hand with firmly shutting down concern trolling and ridiculous “extreme centrist” takes where someone is expected to find common ground with others advocating for their extermination.

    One could argue that the true spirit of FLOSS, which attracted many of the current midlife-crisis developers in the first place, is about diversity and empathy for the varied circumstances and opinions that enrich our space.

    Conclusion

    I do not know if his heart is filled with hate or if he is incredibly lost, but it makes little difference since this is his output in the world.

    David, when you read this I hope it will be a wake-up call. It’s not too late, you only need to go offline and let people help you. Stop the pathetic TemuElon speedrun and go take care of your kids. Drop the anti-woke culture wars and pick up a Ta-Nehisi Coates book again.

    To everyone else: Push back against their vile and misanthropic rhetoric at every turn. Don’t let their poisonous roots fester into the ground. There is no place for their hate here. Don’t let them find comfort and spew their vomit in any public space.

    Crush Fascism . Free Palestine ✊ .

      Sebastian Wick: Flatpak Happenings

      news.movim.eu / PlanetGnome • 4 November 2025 • 2 minutes

    Yesterday I released Flatpak 1.17.0 . It is the first version of the unstable 1.17 series and the first release in 6 months. A few things didn’t make it into this release, which is why I’m planning to do another unstable release rather soon, and then a stable release before the end of the year.

    Back at LAS this year I talked about the Future of Flatpak , and I started with the grim situation the project found itself in: Flatpak was stagnant, the maintainers had left the project, and PRs didn’t get reviewed.

    Some good news: things are a bit better now. I have taken over maintenance, Alex Larsson and Owen Taylor managed to set aside enough time to make this happen, and Boudhayan Bhattacharya (bbhtt) and Adrian Vovk also got more involved. The backlog has been reduced considerably and new PRs get reviewed in a reasonable time frame.

    I also listed a number of improvements that we had planned, and we made progress on most of them:

    • It is now possible to define which Flatpak apps shall be pre-installed on a system, and Flatpak will automatically install and uninstall things accordingly. Our friends at Aurora and Bluefin already use this to ship core apps from Flathub on their bootc based systems (shout-out to Jorge Castro).
    • The OCI support in Flatpak has been enhanced to support pre-installing from OCI images and remotes, which will be used in RHEL 10.
    • We merged the backwards-compatible permission system. This allows apps to use new, more restrictive permissions while not breaking compatibility when the app runs on older systems. Specifically, access to input devices such as gamepads, and access to the USB portal, can now be granted in this way. It will also help us to transition to PipeWire.
    • We have up-to-date docs for libflatpak again
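As a concrete illustration of pre-installation, a distributor drops a keyfile into Flatpak's preinstall configuration directory. The path, group, and key below are my reading of the release notes, so treat them as assumptions and verify against the flatpak-preinstall documentation before relying on them:

```ini
# Assumed location: /usr/share/flatpak/preinstall.d/gnome-apps.conf
[Flatpak Preinstall org.gnome.Calculator]
Install=true
```

With such a file in place, Flatpak reconciles the installed set automatically, as described in the list above.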

    Besides the changes directly in Flatpak, there are a lot of other things happening around the wider ecosystem:

    • bbhtt released a new version of flatpak-builder
    • Enhanced License Compliance Tools for Flathub
    • Adrian and I have made plans for a service which allows querying running app instances (systemd-appd). This provides a new way of authenticating Flatpak instances and is a prerequisite for nested sandboxing, PipeWire support, and getting rid of the D-Bus proxy. My previous blog post went into a few more details.
    • Our friends at KDE have started looking into the XDG Intents spec, which will hopefully allow us to implement deep-linking, thumbnailing in Flatpak apps, and other interesting features
    • Adrian made progress on the session save/restore Portal
    • Some rather big refactoring work in the Portals frontend, and GDBus and libdex integration work which will reduce the complexity of asynchronous D-Bus

    What I have also talked about at my LAS talk is the idea of a Flatpak-Next project. People got excited about this, but I feel like I have to make something very clear:

    If we redid Flatpak now, it would not be significantly better than the current Flatpak! You could still not do nested sandboxing, you would still need a D-Bus proxy, you would still have a complex permission system, and so on.

    Those problems require work outside of Flatpak, but the solutions have to integrate with Flatpak today and with Flatpak-Next in the future. Some of the things we will be doing include:

    • Work on the systemd-appd concept
    • Make varlink a feasible alternative to D-Bus
    • D-Bus filtering in the D-Bus daemons
    • Network sandboxing via pasta
    • PipeWire policy for sandboxes
    • New Portals

    So if you’re excited about Flatpak-Next, help us to improve the Flatpak ecosystem and make Flatpak-Next more feasible!

      Rosanna Yuen: Farewell to these, but not adieu…

      news.movim.eu / PlanetGnome • 4 November 2025 • 6 minutes

    – from Farewell to Malta
    by Lord Byron

    Friday was my last day at the GNOME Foundation. I was informed by the Board a couple weeks ago that my position has been eliminated due to budgetary shortfalls. Obviously, I am sad that the Board felt this decision was necessary. That being said, I wanted to write a little note to say goodbye and share some good memories.

    It has been almost exactly twenty years since I started helping out at the GNOME Foundation. (My history with the GNOME Project is even older; I had code in GNOME 0.13 , released in March 1998.) Our first Executive Director had just left, and my husband was Board Treasurer at the time. He inherited a large pile of paperwork and an unhappy IRS. I volunteered to help him figure out how to put the pieces together and get our paperwork in order to get the Foundation back in good standing. After several months of this, the Board offered to pay me to keep it organized.

    Early on, I used to joke that my title should have been “General Dogsbody”, as I often needed to help cover all the little things that needed doing. Over time, my responsibilities within the Foundation grew, but the sentiment remained. I was often responsible for making sure everything that needed doing was done, while putting in place many of the processes and procedures the Foundation uses to keep running.

    People often underestimate how much hard work it is to keep an international non-profit like the GNOME Foundation going. There is a ton of minutiae to be dealt with, from ever-changing regulations, requirements, and community needs. Even simple-sounding things like paying people are surprisingly hard the moment they cross borders. It requires dealing with different payment systems, bank rules, currencies, export regulations, and tax regimes. However, it is a necessary quagmire we have to navigate, as it is a crucial tool to further the Foundation’s mission.

    Photo: Rosanna sitting behind a table at the GNOME booth, with flyers on a blue GNOME tablecloth and a stand-up banner showing GNOME’s mission. Caption: Working a GNOME booth.

    Over time, I have filled a multitude of different roles and positions (and had four different official titles doing so). I am proud of all the things I have done.

    • I have been the assistant to six different Executive Directors, helping them onboard as they started. I’ve been the bookkeeper, accounts receivable, and accounts payable — keeping our books in order, making sure people were paid, and tracking down funds. I’ve been Vice Treasurer, helping put together our budgets and creating the financial slides for the Treasurer, Board, and AGM. I spent countless nights over almost a decade keeping our accounts updated in GnuCash. And every year for the past nineteen years I was responsible for making sure our taxes were done and our 990 filed to keep our non-profit status secure.
      As someone who has always been deeply entrenched in GNOME’s finances, I have always been a responsible steward, looking for ways to spend money more prudently while enforcing budgets.
    • When the Foundation expanded after the Endless Grants, I had to help make the Foundation scale. I have done the jobs of Human Resources, Recruiter, Benefits coordinator, and managed the staff. I made sure the Board, Foundation, and staff are insured, and take their legally required training. I have also had to make sure people and contractors are paid and with all the legal formalities taken care of in all the different countries we operate in, so they only have to concern themselves with supporting GNOME’s mission.
    • I have had to be the travel coordinator buying tickets for people (and approving community travel). I have also done the jobs of Project Manager, Project Liaison to all our fiscally sponsored projects and subprojects, Shipping, and Receiving. I have been to countless conferences and tradeshows, giving talks and working booths. I have enjoyed meeting so many users and contributors at these events. I even spent many a weekend at the post office filling out customs forms and shipping out mouse pads, mugs, and t-shirts to donors (back when we tried to do that in-house). I tended the Foundation mailbox, logging all the checks we got from our donors and schlepping them to the bank.
    • I have served on five GNOME committees providing stability and continuity as volunteers came and went (Travel, Finance, Engagement, Executive, and Code of Conduct). I was on the team that created GNOME’s Code of Conduct, spending countless hours working with community members to help craft the final draft. I am particularly proud of this work, and I believe it has had a positive impact on our community.
    • Over the past year, I have also focused on providing what stability I could to the staff and Foundation, getting us through our second financial review, and started preparing for our first audit planned for next March.

    This was all while doing my best to hold to GNOME’s principles, vision, and commitment to free software.

    But it is the great people within this community that kept me loyally working with y’all year after year, and the appreciation of the amazing project y’all create, that matters. I am grateful to the many community members who have volunteered their time so selflessly through the years. Old-timers like Sri and Federico, who have been on this journey with me since the very beginning. Other folks I met through the years, like Matthias, Christian, Meg, PTomato, and German. And Marina, whom we all still miss. So many newcomers who add enthusiasm to the community, like Deepesha, Michael, and Aaditya. So many Board members. There are so many more names I could mention; I apologize if your name isn’t listed. Please know that I am grateful for what everyone has brought to the community. I have truly been blessed to know you all.

    I am also grateful for the folks on staff that have made GNOME such a wonderful place to work through the years. Our former Executive Directors Stormy, Karen, Neil, Holly, and Richard, all of whom have taught me so much. Other staff members that have come and gone through the years, such as Andrea (who is still volunteering), Molly, Caroline, Emmanuele, and Melissa. And, of course, the current staff of Anisa, Bart, and Kristi, in whose hands I know the Foundation will keep thriving.

    As I said, my job has always been to make sure things go as smoothly as possible. In my mind, what I do should quiet any waves so that the waves the Foundation makes go into providing the best programming we can — which is why a moment from GUADEC 2015 still pops up in my head.

    Picture this: we are all in Gothenburg, Sweden, standing in line to register for GUADEC. We start chatting, as the line is long. I introduce myself to the person behind me and he sputters, “Oh! You’re important!” That threw me for a loop. I had never seen myself that way. My intention has always been to make things work seamlessly for our community members behind the scenes, but it was always extremely gratifying to hear from folks who have been touched by my efforts.

    Photo: a dining room table covered in GNOME folders, letters, booth materials, and t-shirts, with a large suitcase in front filled with more items for the GNOME booths. Caption: GNOME things still to be transferred to the Board; the suitcase is full of items for staffing a GNOME booth.

    What’s next for me? I have not had the time to figure this out yet as I have been spending my time transferring what I can to the Board. First things first; I need to figure out how to write a resumé again. I would love to continue working in the nonprofit space, and obviously have a love of free software. But I am open to exploring new ideas. If anyone has any thoughts or opportunities, I would love to hear them!

    This is not adieu; my heart will always be with GNOME. I still have my seat on the Code of Conduct committee and, while I plan on taking a month or so away to figure things out, do plan on returning to do my bit in keeping GNOME a safe place.

    If you’d like to drop me a line, I’d love to hear from you. Unfortunately the Board has to keep my current GNOME email address for a few months for the transfer, but I can be reached at <rosanna at gnome> for my personal mail. (Thanks, Bart!)

    Best of luck to the Foundation.