
      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 1 month ago • 3 minutes

    Open Forms is now 0.4.0 - and the GUI Builder is here

    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, the chip failed, Wi-Fi gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

    Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." To me it genuinely was normal, until my sister put it bluntly: "who even thought JSON for such a basic thing is a good idea, who'd even write one?" She was right. I knew it, so fixing this has been on the roadmap from the start, and 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.
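    For illustration only, a config the builder writes out might look something like this. The field names and structure here are hypothetical, invented for this sketch; the app's real schema may differ, so check the project documentation before writing one by hand:

```json
{
  "title": "GUADEC booth feedback",
  "fields": [
    { "type": "text", "label": "Name", "required": false },
    {
      "type": "choice",
      "label": "How did you hear about us?",
      "options": ["Talk", "Booth", "Friend"]
    },
    { "type": "text", "label": "Comments", "required": false }
  ]
}
```

    The point of the builder is that you never have to see this file at all; it is just what lands on disk when you hit Save.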

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.

    Open forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

    If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you like editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.

    Also, thanks to Felipe and everyone else who shared great ideas about improving maintainability. JSON might become technical debt in the future, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

    See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
    • Maybe and just maybe an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub


      Nirbheek Chauhan: An Esoteric Type of Memory "Leak"

      news.movim.eu / PlanetGnome • 9 hours ago • 2 minutes

    A little while ago, my colleague Sebastian started complaining about OOMs caused by Evolution taking up tens of gigabytes of memory. We discussed using sysprof to debug it, but it was too busy a time for Sebastian to set aside a few hours to do that.

    Funnily enough, the most efficient fix at the time was to buy more RAM, since rust-analyzer was also causing OOM issues.

    A few weeks went by. Restarting Evolution had become a daily ritual for Sebastian.

    Then, on a whim, I decided investigating this might be a good test for an LLM.

    I updated my Evolution git repo, built it, and started up Claude Code in the source root. This was the only prompt I supplied:

    Find memory leaks in Evolution, current sourcedir. Particularly leaks that could accumulate over several hours. A colleague has a leak that slowly accumulates memory usage to several GB over the course of a day, requiring a restart of Evolution. That is the main focus, but we can fix other leaks in the process.

    I wish I was lying, but that was all Claude Code needed to find the problem: Evolution just needed to call malloc_trim(0) from time to time.

    I refused to believe it at first. I was only convinced when we saw the memory drop after running gdb -p $(pidof evolution) -batch -ex "call malloc_trim(0)" -ex detach.

    This seems absurd! Doesn't glibc reclaim freed memory from time to time?

    Yes, it does. It calls sbrk() to do that. However, sbrk() can only reclaim free memory at the top of the heap, since it simply moves the program break downward to do so. malloc_trim(0) calls sbrk() and then also calls madvise(..., MADV_DONTNEED) on the free pages, which allows the kernel to reclaim them.

    So if you have 10 GB of unused memory followed by 4 bytes allocated at the top of the heap, your RSS is >10 GB even though you're only using a few hundred megs, until you call malloc_trim(0).

    Note that you can only get into this situation if you have hundreds of thousands of small allocs/deallocs happening repeatedly. If your alloc is >128KB, mmap() is used for the allocation, and none of this applies.

    Coincidentally, GLib's use of GSlice for GObject allocations was masking this issue in the past, but GSlice has been a no-op for some time now (for good reasons). Ideally, Evolution should not be using GObject for such ephemeral objects.

    Lesson learned: if you have memory usage issues and you suspect fragmentation, try malloc_trim(0) before you go thinking about fancy allocators.


      Christian Hergert: Limiters in libdex

      news.movim.eu / PlanetGnome • 20 hours ago • 2 minutes

    Libdex now has DexLimiter, a small utility for bounding how much asynchronous work runs at once.

    This is useful when a workload can produce more parallelism than the underlying machine, subsystem, or service should actually handle. Common examples include indexing files, downloading URLs, generating thumbnails, parsing documents, or querying a service with a fixed concurrency budget.

    The usual API is dex_limiter_run(). It acquires a permit, starts a fiber, and releases the permit when that fiber finishes.

    static DexFuture *
    load_one_file (gpointer user_data)
    {
      GFile *file = user_data;
    
      return dex_file_load_contents_bytes (file);
    }
    
    DexLimiter *limiter = dex_limiter_new (8);
    DexFuture *future = dex_limiter_run (limiter,
                                         NULL,
                                         0,
                                         load_one_file,
                                         g_object_ref (file),
                                         g_object_unref);
    

    In this example, no more than eight file loads will run at the same time, regardless of how many files are queued. The returned DexFuture resolves or rejects with the result of the spawned fiber.

    One important detail is that dropping the returned future does not cancel a fiber that has already started. Once work has acquired a permit, it is allowed to complete so that the limiter can release the permit cleanly.

    For more specialized cases, DexLimiter also supports manual acquire and release:

    g_autoptr(GError) error = NULL;
    
    if (dex_await (dex_limiter_acquire (limiter), &error))
      {
        do_limited_work ();
        dex_limiter_release (limiter);
      }
    

    This is useful when the limited section is not naturally represented by a single fiber. However, callers must release exactly once for every successful acquire. In most cases, dex_limiter_run() is preferable because it handles release on both success and failure paths.

    The limit should describe the constrained resource, not the number of items being processed. Remote APIs and databases may need a small limit. CPU-heavy work should usually be near the amount of useful worker parallelism. Local I/O can often tolerate a larger value, depending on the storage system. Separate resources should usually have separate limiters, so one workload does not consume another workload’s concurrency budget.

    Finally, dex_limiter_close() can be used during shutdown. Once closed, pending and future acquisitions reject with DEX_ERROR_SEMAPHORE_CLOSED. Work that already holds a permit may continue, but releasing after close does not make new permits available.

    The goal is to make bounded parallelism simple: queue as much asynchronous work as you need, but only run as much of it as the system should handle.


      Christian Hergert: A Small Update from France

      news.movim.eu / PlanetGnome • 21 hours ago • 2 minutes

    For about the past month, I have no longer been with Red Hat.

    That is a strange sentence to write after so many years, but life has a way of changing the scenery whether or not one has finished packing. My family and I have made it safely to France, and we are quite happy here. The light is different, the pace is different, and there is a great deal to learn. For now, that is exactly where our attention needs to be.

    I also think there is a broader lesson here for people whose safety, immigration status, or family stability may depend on employer flexibility. Do not assume that long tenure, remote work history, or prior verbal guidance will be enough. My own experience left me with the uncomfortable conclusion that these processes can become very narrow exactly when the human stakes are highest. Get things in writing, understand the policy surface area, and protect your family first.

    This also means that some of the things I wrote about in my earlier mid-life update remain unresolved. I am currently in France on a visitor visa, which does not authorize work here. Our focus is on integration, language learning, and getting ourselves properly settled for the long term. That takes time, patience, paperwork, and a certain tolerance for being a beginner again.

    As a result, I will not be taking on ongoing software maintenance responsibilities for the foreseeable future. I may still scratch the occasional itch where it directly improves my own computing life, but I am not currently able to provide the kind of broad, sustained stewardship that many projects deserve.

    That is not an easy thing to say. Open source is entering an especially difficult period. We are now seeing AI systems used not only to generate code, but also to probe, disrupt, and attack critical software infrastructure. I suspect this will have a negative effect on the maintenance burden for a lot of projects, particularly the foundational pieces that distributions, companies, and users all rely on without always seeing the people behind them.

    But there is a limit to what can reasonably be carried as unpaid labor, especially when the primary financial beneficiaries are downstream organizations with considerably more resources than the individual maintainers doing the work. At the moment, I need to prioritize my family, our stability, and the next chapter of our life here.

    That said, I am still reachable for appropriate professional inquiries, advisory conversations, or consulting opportunities where the structure and location make sense. The best address for that now is christian at sourceandstack dot com.

    For now, we are safe, settling in, and doing our best to build something durable out of a rather odd moment in the world. There are worse places to begin again than France.

    A calm Mediterranean shoreline at sunset, with gentle waves rolling onto a pebbled beach beneath a wide blue sky streaked with soft clouds and pastel pink light on the horizon.


      Toluwaleke Ogundipe: Hello GNOME and GSoC, Again!

      news.movim.eu / PlanetGnome • 1 day ago • 1 minute

    I am delighted to announce that I am returning for Google Summer of Code 2026 to contribute to GNOME once again. Following my work on Crosswords last year, I will be shifting focus to the core of the desktop: Mutter. For what it’s worth, I never left; I’ve been working with Jonathan to improve things and add shiny new features in Crosswords.

    Mutter serves as the Wayland display server and compositor library for GNOME Shell . Currently, a GPU reset invalidates the EGL context and causes the loss of all allocated GPU memory, resulting in the entire desktop crashing or freezing.

    My project aims to implement a robust recovery mechanism for GPU resets to prevent these session-ending freezes, under the mentorship of Jonas Ådahl, Robert Mader, and Carlos Garnacho. Leveraging the GL_EXT_robustness extension, I will implement reset detection, context re-creation, and re-upload of essential GPU resources, such as client textures, glyph caches, and background images. This will allow the compositor to resume rendering seamlessly after hardware-level failures.

    Over the course of the project, I will share updates on the progress of these recovery mechanisms and the challenges of managing state restoration within the compositor.

    I am very grateful to my mentors, Jonas, Robert, and Carlos, for the opportunity to work on this critical part of the GNOME ecosystem. Also, a big shout-out to Federico, Hans Petter, and Jonathan for their continuous support. I look forward to another productive summer with the community. 🦾 ❤


      Nick Richards: Agile Rates After Launch

      news.movim.eu / PlanetGnome • 3 days ago • 3 minutes

    Last summer I wrote up Octopus Agile Prices For Linux, a small GTK app to show the current Octopus Agile electricity price and the next day of half-hourly rates. It did one thing, which is a good number of things for a desktop utility to do.

    Since then the app has become a bit less narrow. It now does enough more that the launch post undersells it, and in a couple of places sends people looking for the wrong name.

    The app is now called Agile Rates. The application ID is still com.nedrichards.octopusagile, because changing stable app IDs is not exciting for anyone, but the name changed because Agile is no longer the whole story. Thanks to code from Andy Piper, it can also work with Octopus Go and Intelligent Go tariffs. Intelligent Go needs an API key because those prices are account-specific, but plain Agile and Go can still be set up manually.

    That was the first larger change: setup had to become a thing.

    The original app assumed you knew your tariff and region, or at least were willing to rummage in preferences until the graph stopped being wrong. That is fine for a scratch-your-own-itch project and a bit rude for an app on Flathub. The current version opens with a setup assistant. You can connect an Octopus account with an API key and account number, in which case the app tries to detect the active electricity tariff. Or you can keep it simple and choose the tariff and region manually.

    The second change is the one I actually use most: finding the cheapest slot.

    The launch version showed a graph and left the planning to the human. That works for quick glances, but most of my real questions are more specific:

    • When should the dishwasher run?
    • When should the washing machine run?
    • Is there a cheap three-hour block before tomorrow afternoon?

    So there is now a “find cheapest time” tool. Pick a duration and it searches the available forecast window for the cheapest continuous block. Later I added the average price for that block as well, because total cost is useful but average unit price is how my brain checks whether the answer feels plausible. The chart now scrolls to the chosen time instead of making you squint along the bars like you are reading a very dull railway timetable.

    The graph itself has had a lot of quiet work. It has grid lines, clearer day boundaries, better current-price highlighting, less terrible dark-mode contrast, and layout rules that behave on narrower screens. The preferences window and main window are adaptive now too. Handy if you split your screen or have a Linux phone.

    The biggest recent addition is usage history. If you connect an account, the app can fetch recent smart meter consumption data, cache it locally, and show a Usage view. That includes kWh history, a seven-day trend, an estimated monthly usage figure, and charts. It also tries to estimate spend by matching historical usage to tariff rates and standing charges.

    The app can only calculate what the API and the cached data allow it to calculate. It fetches consumption pages, works out the relevant tariff periods from the account data, then looks up historical unit rates and standing charges. When it has complete data it says so. When it has to fall back to averages it says that too. I would rather the app be slightly fussy about confidence than present a made-up precision to two decimal places and hope nobody notices.

    Underneath that, the project has become more like a real small application. There are unit tests for pricing, tariff selection, adaptive layout, usage insights, and historical cost calculation. The development Flatpak manifest runs the Meson tests inside the GNOME SDK, which catches the class of bugs where the host Python environment was accidentally being too kind. Ruff is in the loop for linting. The app moved to the GNOME 50 runtime. Screenshots, AppStream metadata, branding colours, and icons have all been tidied up.

    So the current honest description is: Agile Rates is a small GNOME app for UK Octopus Energy customers who want current and upcoming smart tariff rates, a cheap-time finder, and, if they connect their account, recent usage and estimated spend history. It is independent and is not affiliated with, endorsed by, or sponsored by Octopus Energy.

    The old launch post can stay as the origin story, but it is no longer the whole story. What started as “show me the next 24 hours of Agile prices” has become “help me make a small domestic electricity decision without opening a browser”. That is still a narrow job, which is why I like it. It just has a few more of the useful bits attached now.


      Michael Catanzaro: Flatpak Sandbox Escape via Yelp

      news.movim.eu / PlanetGnome • 3 days ago • 1 minute

    Yelp 49.1 fixes a significant Flatpak sandbox escape related to last year’s CVE-2025-3155. CVE assignment for this new issue is currently pending.

    This is not a bug in Flatpak. Flatpak allows sandboxed applications to open URIs or files, meaning the sandboxed application may use a URI or file path to launch another application to open the URI or file. This is brokered via the OpenURI portal. The portal or the app may decide to require user interaction to decide which app to launch, but user interaction is generally not required. This is necessary: you would get pretty frustrated if you were prompted to select which app to use every time you click on a link or try to open something! Accordingly, unsandboxed applications that are installed on the host system are somewhat risky: any malicious sandboxed app may launch an unsandboxed app using a malicious file, generally with no user interaction required. Unsandboxed applications installed on the host OS are inherently part of the attack surface of the Flatpak sandbox.

    In this case, a sandboxed application may launch Yelp to open a malicious help file. The help file can then exfiltrate arbitrary files from your host OS to a web server by using a CSS stylesheet embedded in an SVG. Suffice to say the attack is pretty clever, and certainly more impactful than the typical boring memory safety bugs I more commonly see.

    This bug was discovered by Codean Labs, which performed a security audit of Flatpak and several GNOME projects thanks to generous sponsorship by the Sovereign Tech Resilience program of Germany’s Sovereign Tech Agency.


      Laura Kramolis: Computers are Terrible

      news.movim.eu / PlanetGnome • 5 days ago • 1 minute

    A slightly more collected version of what was originally 18 Signal messages. This is a simplification. I am evidently no expert in Unicode specifically or text encoding in general.

    I, for a long time, believed that while many modern standards are a mess of legacy compatibility built on legacy compatibility, Unicode was an exception. That the only compromise it made was ASCII-compatibility, but even that wasn’t such a big one given that its character set is the most common one in computing even to this day. I was wrong.

    I got a US keyboard, so now I have two different ways of typing accented characters. I can either hold the A key until I get a choice of à, á, â, ä, ǎ, etc., or I can press E and then A to get á, combining ´ with a regular a. I started wondering… when typing it one way or the other, the results must be different, right? I looked for a website that showed me what code points I was typing, and… they were the same?

    Most systems (the OS/browser in this case) normalize all text either one way or the other. In this case, to a single code point. Unicode does have deprecation, so you would think that when they introduced combining characters, they would have deprecated the precomposed versions of characters that can be written using them, right? Nope!

    It’s arbitrary which way each system normalizes text. Some do it composed (á) and some decomposed (a + ◌́). Both are part of the standard. And of course, you need to treat them as equivalent when not normalized, so you might as well do it when you can anyway.

    Precomposed characters are the legacy solution for representing many special letters in various character sets. In Unicode, they were included for compatibility with early encoding systems […].

    From Precomposed character - Wikipedia

    Oh well, my day is ruined. My new life goal is advocacy for the deprecation of all precomposed characters… or maybe I should just accept that all computing will be plagued by backwards compatibility headaches ’til the end of time.


      Jakub Steiner: USS/FMS Carrier

      news.movim.eu / PlanetGnome • 6 days ago • 2 minutes

    I'm a sucker for pixel art and very constrained music grooveboxes. While I'm not into chiptunes, they sure are a cultural phenomenon.

    You heard me boast about the Dirtywave M8 numerous times, even in person, because it's my tool of choice for producing and performing music. Its genius lies in high sound quality and a workflow that grew out of the tiny screen and button constraints on the Nintendo Gameboy, the platform of choice for an app called LSDJ, which the M8 is modelled after. That, and the sheer amount of sound engines living in your pocket. Building on the shoulders of giants and all.

    The small M8 community has a few 'celebrities', such as Ess Mattisson. I first heard of Ess when I ran into an amazing single channel track called Wertstoffe. Ess has a great pedigree as the creator of the original Digitone FM synthesizer while working at Elektron. FM remains his forte, and after creating numerous plugins through Fors, he has now released a little 2-operator FM synth and sequencer for the platform of the future, the Nintendo Gameboy Advance.

    Lo-bit Club logo animation FMS synth running on Gameboy Advance

    What makes FMS a bit crazy is what it's doing under the hood. The Gameboy Advance has no FM synthesis hardware at all. Its audio gives you two Direct Sound DMA channels of 8-bit signed PCM — that's 256 amplitude levels, roughly 48 dB of dynamic range. For comparison, a CD has 96 dB, in much finer fidelity. The CPU is an ARM7TDMI running at 16.78 MHz with 256 KB of RAM, and that's where all the FM math happens. Sine waves, modulation, mixing four channels, all in real time, in software, on a chip from 2001 that was designed to shuffle sprites around. The hiss you hear is just part of the deal: quantization noise from that 8-bit DAC. So few amplitude steps means everything that comes out has this fuzzy, slightly crushed quality. You can't get rid of it. It is the sound. And somehow there are four channels of 2-operator FM synthesis in there, each with envelopes and ratio control. On a Gameboy Advance.

    Picking GBA as a platform of choice in 2026 may be strange. Surprisingly, it can be used on a very large array of hardware. Not only can you plug a memory card into the original hardware or new fancy clones like the Analogue Pocket, you have an exponentially larger choice of dozens if not hundreds of Chinese emulator handhelds from Anbernic, Powkiddy, Miyoo or Retroid. You can also use the Steam Deck or any PC running one of the many emulators, RetroArch being the most popular one.

    FMS really touched me. Partly because I have a soft spot for the Nordic demo scene, but mainly for its novel approach to composition. Just like with the M8, creating basic building blocks and then applying transposition to break the looping monotony is my favorite workflow. This little thing has that in the form of pattern and trig transposition but also a novel take on "effects". Yes, you heard me right. There's a sorta-kinda-delay. Even does ping-pong stereo field.

    I will keep on trying to create something that … sounds good. The process has been amazing. I truly love some of the sequencing tricks and workflows. The sequencer is, however, so good it would be worth seeing it run on top of a higher quality sound engine too.