
      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 1 month ago • 3 minutes

    Open Forms is now 0.4.0 - and the GUI Builder is here

    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, the chip failed, Wi-Fi gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

    Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." And you'd be completely wrong. To me it felt normal, until my sis put it bluntly: "who even thought JSON for such a basic thing is a good idea, who'd even write one" - and she was right. I knew it too, so fixing this was always on the roadmap, and 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.
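    To make that concrete, here is a rough sketch of what a saved form config could look like. This is purely illustrative; the post doesn't show the actual Open Forms schema, so every key name below is an assumption:

```json
{
  "title": "Booth feedback",
  "fields": [
    { "type": "text",   "label": "Name",     "required": false },
    { "type": "choice", "label": "How did you hear about us?",
      "options": ["GUADEC", "Online", "A friend"] },
    { "type": "text",   "label": "Comments", "required": false }
  ]
}
```

    Whether written by hand or by the builder, the file on disk is the same kind of JSON, which is why existing configs keep working.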

    Open Forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

    If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you like editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.

    Also thanks to Felipe and all others who gave great ideas about increasing maintainability. JSON might become technical debt in the future, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

    See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
    • Maybe and just maybe an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub


      This Week in GNOME: #249 Quality Over Quantity

      news.movim.eu / PlanetGnome • 18 hours ago • 5 minutes

    Update on what happened across the GNOME project in the week from May 8 to May 15.

    GNOME Circle Apps and Libraries

    Graphs

    Plot and manipulate data

    Sjoerd Stendahl says

    This week we released Graphs 2.0.

    It’s been about two years since the last major feature update, and this is by far our biggest update yet. On a technical level, the code base has undergone a major overhaul. Importing logic is written in a more modular way, making it possible to add parsers for new data types, and we rewrote a large part of the code base from Python to Vala, which now makes up the majority of the code.

    For the people who follow TWIG, some of this might sound familiar from the announcement of the beta, but we finally added support for some major long-requested changes. Most significantly, we finally have proper symbolic equation support, meaning equations now span an infinite range and can be manipulated analytically (e.g. taking the derivative of 6x² will change the equation to 12x, and the line will be re-rendered accordingly). Item and figure settings, such as when changing the scaling or limits, no longer block the view of the main canvas. The style editor has been redesigned with a live preview of the changes, we revamped the import dialog, and imported data now supports error bars. Equations with infinite values in them, such as y=tan(x), now also render properly, with values drawn all the way to infinity and without a line going from plus to minus infinity. We’ve also added support for spreadsheet and SQLite database files, drag-and-drop importing, improved curve fitting with residuals and better confidence bands, and now have proper mobile support. Since the beta release we managed to squeeze in some improvements in the code base, and labels are now concatenated in a smart way based on the screen size, making Graphs more usable on mobile interfaces.

    This release took a long time to get right, but we’re happy to get the new features to the public. Graphs is handcrafted by human hands, which takes more time than LLM-based slop. But the longer manual process does allow us to think through changes and make intentional decisions with human care. I am very proud to say we are able to deliver something intentional, with the polish that both Graphs and its users deserve. As always, thanks to everyone involved, including everyone who has been providing feedback, reporting issues, contributing code, or helping in any other possible way. And of course especially to Christoph Matthias Kohnen, who has been maintaining Graphs with me and is responsible for a large part of the architectural changes that made this release possible.

    See a more complete list of changes at https://blogs.gnome.org/sstendahl/2026/05/15/graphs-2-0-is-out/ and get the latest release on Flathub!

    Graphs 2.0 screenshot

    Third Party Projects

    Anil reports

    Codd is now available on Flathub!

    Codd is a lightweight PostgreSQL client for GNOME, built with Rust, GTK4, libadwaita, Relm4, GtkSourceView, and sqlx. It focuses on a clean, native, and lightweight interface for working with PostgreSQL databases.

    The initial release includes saved connections, SQL execution with syntax highlighting, result tables, query history, table browsing with pagination, filters and editable table cells.

    More features are planned, and feedback from real-world PostgreSQL workflows would be very welcome.

    Flathub: https://flathub.org/apps/io.github.anil_e.Codd Source: https://github.com/anil-e/codd

    Codd query view screenshot

    Codd table browser screenshot

    Haydn Trowell says

    The latest version of Typesetter, the minimalist Typst editor, brings a bunch of improvements for package and template usage, most notably:

    • a GUI package manager for installing and removing custom Typst packages and templates;
    • a template selection popover in the header bar for creating new documents from built-in and user-installed templates;
    • and an initial set of built-in templates.

    Flathub: https://flathub.org/apps/net.trowell.typesetter Source: https://codeberg.org/haydn/typesetter/

    Typesetter template popover screenshot

    Alain says

    Planify 4.19.2 is out! 🎉

    This release brings several new features and fixes across the board.

    On the backup side, Planify now supports automatic daily backups — backups trigger at midnight and can be copied to additional folders of your choice, including cloud-mounted directories.

    CalDAV keeps getting better: sections are now fully supported via VTODO List prefix, syncing bidirectionally with Nextcloud Tasks, Thunderbird, and other clients. Horde server compatibility is also improved. Past dates can now be selected in the date picker, with dimmed styling and a handy Today pill button to jump back to the current month.

    On the UI side, Planify now follows your GNOME system accent color, the Board view inbox section auto-hides when empty, and the multi-select label picker now correctly tracks only the changes you explicitly make.

    Several bug fixes land too: calendar events now update correctly when kept open past midnight, task counts in Board view are accurate after drag and drop, and moving tasks between different sources (e.g. CalDAV → Local) now works correctly.

    Get it on Flathub! 🚀

    New to Planify? It’s a beautiful, open source task manager for Linux with Todoist and Nextcloud sync, built with GTK4 and libadwaita. Never worry about forgetting things again — give it a try!

    Planify backup settings screenshot

    Planify CalDAV board screenshot

    Christian reports

    🎉 Gitte 0.3.0 released!

    Gitte 0.3.0 has been released, bringing full merge support, accessibility improvements, a new compact UI mode, and official macOS support.

    The new merge workflow allows initiating, resolving, and completing merges directly from within the application. This release also adds a new compact UI mode, multi-selection support in the changed files list, and a release notes dialog with update notifications.

    The diff viewer received major improvements as well: diffs in the log viewer can now be selected and copied and large diffs are handled more gracefully.

    On the platform side, Gitte now supports macOS thanks to work by René de Hesselle. The app also received new GNOME-style icons by Jakub Steiner, expanded test coverage, CI integration, translation updates, and many internal refactorings and bug fixes.

    Get it on Flathub, for macOS, or check the source code.

    Gitte what's-new dialog screenshot

    Gitte merge screenshot

    Gitte compact mode screenshot

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Sjoerd Stendahl: Graphs 2.0 is out!

      news.movim.eu / PlanetGnome • 20 hours ago • 6 minutes

    After two years of development, Graphs 2.0 is finally out!

    This will be a shorter blog, as the changelog of new features was discussed in more detail in the previous post; you can check that out if you’re interested: https://blogs.gnome.org/sstendahl/2026/04/14/announcing-the-upcoming-graphs-2-0/

    For a quick overview, here is a reprise of the most interesting features in bullet-point format:

      • New Data Types with proper equations: In this new release, we finally have proper support for equations, where equations are manipulated analytically (e.g. a derivative on y = 12x² will result in y = 24x, and be rendered accordingly), limits are rendered infinitely, and equations can be changed after importing them. To accommodate this change, we now have three different data types: Equations, Imported Data, and Generated Data. Imported Data is regular data that you import from a file. Generated Data behaves the same as Imported Data, but you generate the dataset using an equation. For Generated Data you can also change the equation after the fact and re-render, or change the limits or the number of generated data points.
      • New Style Editor: We revamped the style editor. One major change is that you can now easily import new styles based on matplotlib-style themes, and you can also export your style and share it with others. If you have a nice style you want to share with us, open an issue on the GitLab page and let us know. We’re looking to expand the default choice of styles :). Furthermore, when editing a style, you finally see a preview of how it affects the canvas. This way you don’t need to guess, or go back and forth when fine-tuning a style.
      • UX changes: Whilst the general look-and-feel of Graphs is still mostly the same, we did make some UX changes to how we handle settings. Mainly, instead of showing a modal popup dialog, settings that affect the canvas itself (i.e. item and figure settings) are now shown in the sidebar. The reason for this is that the popup dialog hid the figure you’re editing, making it difficult to see how your changes actually affect your canvas.
      • Improved data import: We revamped the data import completely. First of all, we have made the code base here much more modular, making it easier to write new parsers for other file types. We added support for SQLite database files, Microsoft Excel sheets and .ods files from LibreOffice Calc. It’s also now possible to import multiple files at once and fine-tune the settings for each file individually. Another nice feature is that you can now import multiple datasets from the same file without having to reimport them. Finally, we also added proper support for single-column imports, where x-data can be generated using your own equation.
      • Error bar support: Graphs now has proper support for error bars. The error bar style can be set globally in the new style editor, or individually for each item.
      • Reworked Curve Fitting: The curve fitting logic has been almost completely rewritten. Whilst it mostly still works the same, the confidence band that is shown is now calculated properly using the delta method, instead of a naive approach based on the limits of the standard deviations. We also added support for showing the residuals to verify your fits, and added more useful error messages when things go wrong. The results in the curve fitting dialog now also show the root mean squared error as a second goodness-of-fit figure. The parameter values in the curve fitting dialog are no longer rounded (e.g. 421302 used to be rounded to 421000), and finally custom equations in the curve fitting dialog now have an apply button, greatly improving the smoothness when entering new equations.
      • Proper Mobile Support: With this release, we now officially feel confident in stating proper mobile support. The entire UI is tested on real mobile phones running postmarketOS, and everything works properly. Labels are now also ellipsized earlier on narrow displays, so that the UI remains usable.
      • Reworked Figure Exports: Figure exports are reworked. Instead of simply taking a snapshot of the current canvas, you can set the actual size in pixels when exporting a figure. This is vital when trying to create reproducible figures for e.g. publications. Of course, you can still easily use the same canvas that you see in the application.
      • Quality-of-life changes: We haven’t even gone through each individual change in this blog, but here’s a quick-fire round of more quality-of-life changes:
        • It is now possible to have multiple instances of Graphs open at the same time
        • The style editor now also has the option to draw tick labels (so the numeric values) on all axes containing ticks; this is not supported by default in Matplotlib, so we had to add our own parameter here
        • Graphs now inhibits the session when unsaved data is still open, so you get a warning if you close your computer with unsaved data.
        • Added support for base-2 logarithmic scaling
        • Graphical fixes for the drag-drop animations, which used to look somewhat glitchy
        • Panning and zooming are now done consistently on all axes when using multiple axes
        • Data can now be imported by drag-and-drop into Graphs
        • The subtitle now also shows the full file path for Flatpaks
        • Limits can now easily be changed by clicking on the numbers near the axes
        • The custom transformation has gained the following extra variables: x_mean, y_mean, x_median, y_median, x_std, y_std and counts
        • Warnings are now displayed when trying to open a project from a beta version
        • The many code refactors and reimplementations from Python to Vala make the application more robust and significantly more performant, especially when working with larger datasets

    This release took a long time to get right, but we’re happy to get the new features to the public. Graphs is handcrafted by human hands, which takes more time than LLM-based slop. But the longer manual process does allow us to think through changes and make intentional decisions with human care. I am very proud to say we are able to deliver something intentional, with the polish that both Graphs and its users deserve. As always, thanks to everyone involved, including everyone who has been providing feedback, reporting issues, contributing code, or helping in any other possible way. And of course especially to Christoph, who has been maintaining Graphs with me and is responsible for a large part of the architectural changes that made this release possible.

    Go get the new release from Flathub here !

    p.s. On a more personal note with a shameless plug, I will be speaking at GUADEC 2026 about my journey into app development, and how to get into this world as an outsider without a CS degree. Be sure to check that out if you are interested in starting with applications and want to know what it is like to join a project in the GNOME ecosystem; it’s a lot less scary than it may sound 🙂

    I’ll be joining on-site, so say hi to me there if you have any questions or are up for a chat :). Otherwise the whole event will be livestreamed as well, and you can always reach me at sstendahl@gnome.org.


      Allan Day: GNOME Foundation Update, 2026-05-15

      news.movim.eu / PlanetGnome • 22 hours ago • 2 minutes

    Welcome to another GNOME Foundation update post! Today’s installment covers highlights from what’s happened over the past two weeks.

    LAS 2026

    Linux Apps Summit 2026 starts tomorrow! The organizing team, which includes members from both GNOME and KDE, has been hard at work and is on the ground in Berlin making final preparations. The schedule looks great, and it promises to be a well-attended event.

    The talks are being streamed this year, so make sure to watch our social media for details, and tune in live to hear the talks.

    GUADEC 2026

    Preparations are continuing for July’s GUADEC. The call for Birds of a Feather sessions is currently open. If you want to hold an informal discussion or working session, please fill out the form before 5th June.

    Applications are still open for travel funding for GUADEC. The deadline for submissions is 24th May – that’s just over one week away.

    Board meeting

    This week the Board of Directors had its regular meeting for May. A summary:

    • The Board authorized the closure of a bank account which we are no longer using.
    • I gave an update on operations over the past month, and got feedback from the Board.
    • Felipe Borges from the Internship Committee joined to give a report. The Board discussed how we can best support the committee.
    • Deepa gave a finance report, which included numbers from January and February. The main news here was that our finances are running close to what was projected for this year’s budget.
    • The Board discussed the draft of the report from the audit we recently underwent, as well as the draft of our latest annual tax filing. This is a routine part of the Board’s work, as it is required to perform a review before these documents are finalized.

    Office transitions

    Our long-running effort to enhance our internal accounting processes has continued over the past two weeks. A notable development has been the retirement of several finance platforms, which have been effectively replaced by the new payments platform that we adopted in January. This platform reduction will reduce operational complexity as well as workloads. The effort is still ongoing – two more platforms are currently in the process of being retired.

    Another highlight has been the launch of a search for a new member to join our finance and operations team. This is a part-time, contract-based role, which has been shaped in close consultation with Dawn Matlak, who is supporting our finance and accounting operations on a temporary basis, and has already been factored into our budget projections.

    We are looking for someone at director level who brings substantial nonprofit finance experience — including audit preparation and compliance experience — which reflects how much the Foundation’s operational and regulatory requirements have grown, particularly in the run up to and following our audit last March, and will provide in-house expertise which will reduce our reliance on external consultants. You can read the full posting here.

    Thanks for reading, and see you in two weeks’ time!


      Bart Piotrowski: How does Flathub even work? The CDN and caching layer

      news.movim.eu / PlanetGnome • 1 day ago • 4 minutes

    There is one specific way in which non-corporate open source projects typically document how their infrastructure works: not at all, and Flathub is no different. The full picture likely lives only in my brain, and while it could be pieced together by anyone (especially in this LLM age, yay or nay), why should it only be me thinking at night about all the single points of failure?

    Like any system that evolved naturally, it's all over the place. It's tempting to tell its history chronologically, but even then, it's difficult to find a good entry point. Instead, this post focuses on what happens when users call flatpak install; later entries will cover the website and, finally, the build infrastructure. Buckle up!

    CDN, caching proxies, the master server

    The secret of making computers work well is to have them not do anything at all, and that's the story behind serving Flathub's OSTree repository. Content-addressed objects are extremely cacheable as they are immutable, offloading the effort to the CDN provider.

    When a client connects to dl.flathub.org, you can be certain it hits some layer of cache. Almost all the heavy lifting is done by Fastly. At peak, when both EMEA and North America are awake and at their computers, 50 million requests per hour are cache hits served by Fastly's infrastructure, with a modest 20 million being misses passed down to our servers. There would be no Flathub without Fastly; Fastly does it completely for free, not even for fake Internet points, as we are incredibly bad at highlighting what our sponsors do for us.


    You can never have enough caching, and so the various Fastly servers talk to a Fastly-managed shield server, which caches the most requested objects to avoid spilling over too much to us. Legitimate cache misses are served by one of the 8 caching proxies we run at different VPS providers. We use a chash director at Fastly, which picks the backend based on the path being requested. In the past we used dumb round-robin, but as a result each caching proxy had its own independent copy of the working set, wasting disk space and producing a higher miss rate against the master server. Hashing by URL behaves like one big cache instead of N copies.

    These days, the caching proxy fleet consists of 3 servers at Mythic Beasts, 2 servers at AWS, another 2 at NetCup and a single server at DigitalOcean. We don't collect overly detailed metrics, but on average, each proxy serves around 1 TB/month back to Fastly and pulls roughly 5 TB/month from origin. With only 100 GB of disk space per proxy against a multi-TB working set, we're not so much caching the long tail as smoothing it. In the ideal world, we would be retaining much more data at this layer, but it's not the world we live in.

    Each of these servers is running the latest stable Debian release. The requests are served by the usual nginx setup with proxy_cache enabled. There is some custom Lua code for invalidating certain paths after publishing new builds finishes (spoilers!). Vanilla nginx doesn't support the PURGE method, and third-party modules like ngx_cache_purge have not seen any maintenance for over 10 years. In the end, it was more maintainable to write Lua code to calculate the caching key of a URL and then run os.remove to "purge" it from the cache.
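    For reference, the standard nginx proxy_cache shape looks roughly like this. This is a minimal hypothetical sketch, not Flathub's actual configuration; all names and sizes are invented:

```nginx
# Hypothetical caching-proxy sketch -- not Flathub's real config.
proxy_cache_path /var/cache/nginx/ostree
                 levels=1:2              # two levels of hashed subdirs
                 keys_zone=ostree:100m   # shared memory for cache keys
                 max_size=90g            # stay under the available disk
                 inactive=30d;

server {
    listen 443 ssl;

    location / {
        proxy_cache       ostree;
        proxy_cache_key   $scheme$host$request_uri;
        proxy_cache_valid 200 30d;       # immutable objects cache long
        proxy_pass        https://origin.example.org;  # the master server
    }
}
```

    nginx stores each cached response in a file whose path is derived from the MD5 of proxy_cache_key (split across the levels subdirectories), which is exactly the key such custom Lua purge code has to recompute before it can os.remove the file.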

    There's also a systemd timer for refreshing the Fastly IP allowlist. We used to expose these servers publicly, but a vision of everything crumbling down due to a DDoS attack kept me awake at night so this had to change.

    On the far end of this setup sits a lonely physical server living in one of Mythic Beasts' datacenters. This is The Server, holding the entire Flathub repo on the ZFS equivalent of RAID10: two 2-disk mirror vdevs across which ZFS stripes data. There is more nuance to this setup, but the ultimate advantage is that we can tolerate a disk failure in each of the mirrors, while resilvering after a swap is less taxing. The entire reachable data set is around 4 TB, with the remaining 6 TB unused. There will be more about repository maintenance later on!

    Ironically, it's the only server running Ubuntu. At the time, it was the easiest way to have support for ZFS readily available. We could re-provision it to Debian, but on the other hand, what for? It works fine that way. It has survived at least 2 major upgrades between LTS-es; if it ain't broke, don't fix it.

    The master server itself has to be partially public, as it's where new builds are uploaded. It no longer exposes the raw Flathub repository, for the same reason the caching proxies don't. This is accomplished with Tailscale and a lightweight ACL config ensuring caching proxies can talk only to the HTTP server running on the main repo server and vice versa (for issuing PURGE requests). Yes, all involved parties have public IP addresses assigned, so this could technically be a pure WireGuard setup, but I prefer to make this someone else's concern, especially given how generous Tailscale's free plan is.
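    A Tailscale ACL policy for that shape would look something like the following. The tags and ports here are assumptions for illustration, not Flathub's actual policy:

```json
{
  "tagOwners": {
    "tag:proxy":  ["autogroup:admin"],
    "tag:master": ["autogroup:admin"]
  },
  "acls": [
    { "action": "accept", "src": ["tag:proxy"],  "dst": ["tag:master:443"] },
    { "action": "accept", "src": ["tag:master"], "dst": ["tag:proxy:443"] }
  ]
}
```

    Tailscale ACLs are default-deny, so anything not matched by these two rules is dropped, which is what keeps the raw repository unreachable from the open Internet.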

    Flathub CDN topology

    It's not much, but it's honest work. For how little we have, the file-serving half of Flathub's infrastructure works unreasonably well. Stay tuned for part 2!


      Christian Hergert: Translating French

      news.movim.eu / PlanetGnome • 1 day ago • 1 minute

    I have been spending more time learning French lately, and as often happens, that turned into a small side project. liblingua is not intended to be a big mainstream translation platform. It is a fun GLib/GObject library for experimenting with local machine translation from applications.

    The library uses Bergamot from Mozilla for the translation backend. Instead of sending text to a web service, liblingua resolves the language pair you ask for, downloads the required language model into the local user cache, and then performs translation locally.

    The high-level API is built around a few small objects: LinguaRegistry discovers available translation profiles, LinguaProfile represents a model that can be loaded or downloaded, LinguaProgress reports download progress, and LinguaTranslator performs the translation.

    All potentially blocking work is exposed as DexFuture, so it fits naturally into libdex-based applications. If you are already using fibers, the code can stay linear and easy to read with dex_await_object().

    A Small Example

    Here is the basic shape of translating French into English:

    #include <liblingua.h>
    
    static void
    translate_example (void)
    {
      g_autoptr(LinguaProgress) progress = lingua_progress_new ();
      g_autoptr(LinguaRegistry) registry = NULL;
      g_autoptr(LinguaProfile) profile = NULL;
      g_autoptr(LinguaTranslator) translator = NULL;
      g_autoptr(LinguaTranslation) result = NULL;
      g_autoptr(GListModel) profiles = NULL;
      g_autoptr(GError) error = NULL;
    
      if ((registry = dex_await_object (lingua_registry_new (), &error)) &&
          (profiles = lingua_registry_resolve (registry, "fr", "en")) &&
          (profile = g_list_model_get_item (profiles, 0)) &&
          (translator = dex_await_object (lingua_profile_load (profile, progress), &error)) &&
          (result = dex_await_object (lingua_translator_translate (translator, "Bonjour"), &error)))
        g_print ("%s\n", lingua_translation_get_translation (result));
      else
        g_printerr ("Error: %s\n", error->message);
    }
    

    The first time a model is needed, loading the profile may download it. After that, the model is reused from the local cache. That makes liblingua useful for little tools, demos, and desktop experiments where local translation is preferable to wiring everything through a remote service.

    In the future this is probably the type of thing we would want as a desktop service to avoid duplicating caches amongst Flatpak applications. It would also be extremely useful to do live translation in Camera and Image Preview apps. I played a bit with that using Tesseract for OCR and it worked better than expected.


      Nirbheek Chauhan: An Esoteric Type of Memory "Leak"

      news.movim.eu / PlanetGnome • 1 day ago • 2 minutes

    A little while ago, my colleague Sebastian started complaining about OOMs caused by Evolution taking up tens of gigabytes of memory. We discussed using sysprof to debug it, but it was too busy a time for Sebastian to set aside a few hours to do that.

    Funnily enough, the most efficient fix at the time was to buy more RAM, since rust-analyzer was also causing OOM issues.

    A few weeks went by. Restarting Evolution had become a daily ritual for Sebastian.

    Then, on a whim, I decided investigating this might be a good test for an LLM.

    I updated my Evolution git repo, built it, and started up Claude Code in the source root. This was the only prompt I supplied:

    Find memory leaks in Evolution, current sourcedir. Particularly leaks that could accumulate over several hours. A colleague has a leak that slowly accumulates memory usage to several GB over the course of a day, requiring a restart of Evolution. That is the main focus, but we can fix other leaks in the process.

    I wish I was lying, but that was all Claude Code needed to find the problem: Evolution just needed to call malloc_trim(0) from time to time.

    I refused to believe it at first. I was only convinced when we saw the memory drop after running gdb -p $(pidof evolution) -batch -ex "call malloc_trim(0)" -ex detach

    This seems absurd! Doesn't glibc reclaim freed memory from time to time?

    Yes, it does. It calls sbrk() to do that. However, sbrk() can only reclaim free memory at the top of the heap, since it simply moves the program break downward to do so. malloc_trim(0) calls sbrk() and then also calls madvise(..., MADV_DONTNEED) on the free pages, which allows the kernel to reclaim them.

    So if you have 10GB of unused memory followed by 4 bytes allocated at the top of the heap, your RSS is >10GB, even though you're only using a few hundred megs. Till you call malloc_trim(0).

    Note that you can only get into this situation if you have hundreds of thousands of small allocs/deallocs happening repeatedly. If your allocation is larger than 128KB (glibc's default M_MMAP_THRESHOLD), mmap() is used for the allocation, and none of this applies.

    Coincidentally, GLib's use of GSlice for GObject allocations was masking this issue in the past, but GSlice has been a no-op for some time now (for good reasons). Ideally, Evolution should not be using GObject for such ephemeral objects.

    Lesson learned: if you have memory usage issues and you suspect fragmentation, try malloc_trim(0) before you go thinking about fancy allocators.


      Christian Hergert: Limiters in libdex

      news.movim.eu / PlanetGnome • 2 days ago • 2 minutes

    Libdex now has DexLimiter, a small utility for bounding how much asynchronous work runs at once.

    This is useful when a workload can produce more parallelism than the underlying machine, subsystem, or service should actually handle. Common examples include indexing files, downloading URLs, generating thumbnails, parsing documents, or querying a service with a fixed concurrency budget.

    The usual API is dex_limiter_run(). It acquires a permit, starts a fiber, and releases the permit when that fiber finishes.

    static DexFuture *
    load_one_file (gpointer user_data)
    {
      GFile *file = user_data;
    
      return dex_file_load_contents_bytes (file);
    }
    
    DexLimiter *limiter = dex_limiter_new (8);
    DexFuture *future = dex_limiter_run (limiter,
                                         NULL,
                                         0,
                                         load_one_file,
                                         g_object_ref (file),
                                         g_object_unref);
    

    In this example, no more than eight file loads will run at the same time, regardless of how many files are queued. The returned DexFuture resolves or rejects with the result of the spawned fiber.

    One important detail is that dropping the returned future does not cancel a fiber that has already started. Once work has acquired a permit, it is allowed to complete so that the limiter can release the permit cleanly.

    For more specialized cases, DexLimiter also supports manual acquire and release:

    g_autoptr(GError) error = NULL;
    
    if (dex_await (dex_limiter_acquire (limiter), &error))
      {
        do_limited_work ();
        dex_limiter_release (limiter);
      }
    

    This is useful when the limited section is not naturally represented by a single fiber. However, callers must release exactly once for every successful acquire. In most cases, dex_limiter_run() is preferable because it handles release on both success and failure paths.

    The limit should describe the constrained resource, not the number of items being processed. Remote APIs and databases may need a small limit. CPU-heavy work should usually be near the amount of useful worker parallelism. Local I/O can often tolerate a larger value, depending on the storage system. Separate resources should usually have separate limiters, so one workload does not consume another workload’s concurrency budget.

    Finally, dex_limiter_close() can be used during shutdown. Once closed, pending and future acquisitions reject with DEX_ERROR_SEMAPHORE_CLOSED. Work that already holds a permit may continue, but releasing after close does not make new permits available.

    The goal is to make bounded parallelism simple: queue as much asynchronous work as you need, but only run as much of it as the system should handle.


      Christian Hergert: A Small Update from France

      news.movim.eu / PlanetGnome • 2 days ago • 2 minutes

    For about the past month, I have no longer been with Red Hat.

    That is a strange sentence to write after so many years, but life has a way of changing the scenery whether or not one has finished packing. My family and I have made it safely to France, and we are quite happy here. The light is different, the pace is different, and there is a great deal to learn. For now, that is exactly where our attention needs to be.

    I also think there is a broader lesson here for people whose safety, immigration status, or family stability may depend on employer flexibility. Do not assume that long tenure, remote work history, or prior verbal guidance will be enough. My own experience left me with the uncomfortable conclusion that these processes can become very narrow exactly when the human stakes are highest. Get things in writing, understand the policy surface area, and protect your family first.

    This also means that some of the things I wrote about in my earlier mid-life update remain unresolved. I am currently in France on a visitor visa, which does not authorize work here. Our focus is on integration, language learning, and getting ourselves properly settled for the long term. That takes time, patience, paperwork, and a certain tolerance for being a beginner again.

    As a result, I will not be taking on ongoing software maintenance responsibilities for the foreseeable future. I may still scratch the occasional itch where it directly improves my own computing life, but I am not currently able to provide the kind of broad, sustained stewardship that many projects deserve.

    That is not an easy thing to say. Open source is entering an especially difficult period. We are now seeing AI systems used not only to generate code, but also to probe, disrupt, and attack critical software infrastructure. I suspect this will have a negative effect on the maintenance burden for a lot of projects, particularly the foundational pieces that distributions, companies, and users all rely on without always seeing the people behind them.

    But there is a limit to what can reasonably be carried as unpaid labor, especially when the primary financial beneficiaries are downstream organizations with considerably more resources than the individual maintainers doing the work. At the moment, I need to prioritize my family, our stability, and the next chapter of our life here.

    That said, I am still reachable for appropriate professional inquiries, advisory conversations, or consulting opportunities where the structure and location make sense. The best address for that now is christian at sourceandstack dot com.

    For now, we are safe, settling in, and doing our best to build something durable out of a rather odd moment in the world. There are worse places to begin again than France.

    A calm Mediterranean shoreline at sunset, with gentle waves rolling onto a pebbled beach beneath a wide blue sky streaked with soft clouds and pastel pink light on the horizon.