
      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 3 months ago • 3 minutes

    Open Forms is now 0.4.0 - and the GUI Builder is here

    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, the chip failed, Wi-Fi gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

    Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." And you'd be completely wrong. To me it seemed normal, until my sister put it this way: "who even thought JSON for such a basic thing is a good idea? Who'd even write one?" She was right, I knew it, and fixing it was always on the roadmap - which 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.
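
    The post doesn't reproduce the schema itself, so purely as an illustration, here's the rough shape such a config could take - the field names below are hypothetical, not Open Forms' actual schema:

    ```json
    {
      "title": "Booth feedback",
      "fields": [
        { "type": "text", "label": "Your name" },
        {
          "type": "choice",
          "label": "How was the session?",
          "options": ["Great", "Okay", "Meh"]
        }
      ]
    }
    ```

    The point of the builder is exactly that you never have to type something like this by hand again - it reads and writes the file for you.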

    Open forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

    If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you like editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.

    Also, thanks to Felipe and everyone else who offered great ideas about improving maintainability. JSON might become technical debt in the future, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

    See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
    • Maybe and just maybe an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub

      Matthew Garrett: SSH certificates and git signing

      news.movim.eu / PlanetGnome • 21 hours ago • 7 minutes

    When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface level indication, but it turns out you can just lie and if someone isn’t paying attention when merging stuff there’s certainly a risk that a commit could be merged with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR being opened by a compromised user is going to be hard to distinguish from the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.

    git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account[1], but that means that a compromised GitHub account is now also a way to alter the set of trusted keys - and also, when was the last time you audited your keys, and how certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.

    SSH Certificates

    And, thankfully, there is. OpenSSH supports certificates: an SSH public key that’s been signed by some trusted party, letting you assert that it’s trustworthy in some form. SSH certificates also contain metadata in the form of principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There’s also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.
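
    As a concrete sketch of how a certificate gets issued - all file names and the "alice" principal here are illustrative, not anything from the tooling described below:

    ```shell
    # Work in a scratch directory; every path here is made up for the example
    cd "$(mktemp -d)"

    # The trusted party (the "CA") has its own keypair
    ssh-keygen -t ed25519 -f ca_key -N '' -C 'example-ca'

    # A user has an ordinary SSH keypair
    ssh-keygen -t ed25519 -f user_key -N '' -C 'alice@example.com'

    # The CA signs the user's public key, embedding the principal "alice";
    # this writes the certificate to user_key-cert.pub
    ssh-keygen -s ca_key -I alice-key-id -n alice user_key.pub

    # Inspect the certificate, including its principals and validity
    ssh-keygen -L -f user_key-cert.pub
    ```

    The resulting `user_key-cert.pub` is what gets distributed and forwarded; the private key never changes.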

    And, wonderfully, you can use them in git! Let’s find out how.

    Local config

    There are two main parameters you need to set. First,

    git config set gpg.format ssh
    

    because unfortunately, for historical reasons, all the git signing config is under the gpg namespace even if you’re not using OpenPGP. Yes, this makes me sad. But you’re also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it’s stored on a smartcard or something rather than on disk). Thankfully for you, I’ve written one. It will talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK environment variable, or the one given with the -agent argument), find a certificate signed with the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you’ll have a signature.
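
    Putting those settings together, a minimal local setup (with an illustrative certificate path, and a throwaway repo so the sketch is self-contained) looks something like:

    ```shell
    # A scratch repository to configure
    cd "$(mktemp -d)" && git init -q demo && cd demo

    # Route git's signing machinery to SSH rather than OpenPGP
    git config gpg.format ssh

    # Point git at the certificate to sign with - this path is illustrative
    git config user.signingkey ~/.ssh/id_ed25519-cert.pub

    # From now on, `git commit -S` in this repo signs with that certificate
    ```

    Setting `commit.gpgsign` to `true` makes the `-S` implicit if you'd rather sign everything.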

    Validating signatures

    This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen[2], which validates signatures against a file in a format that looks somewhat like authorized_keys. This lets you add something like:

    * cert-authority ssh-rsa AAAA…
    

    which will match all principals (the wildcard) and succeed if the signature is made with a certificate that’s signed by the key following cert-authority. I recommend you don’t read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn’t provide a lot of granularity around things like “Does the certificate need to be valid at this specific time” and “Should the user only be able to modify specific files” and that kind of thing, but also if you’re using GitHub or GitLab you wouldn’t need to do this at all because they’ll just do this magically and put a “verified” tag against anything with a valid signature, right?

    Haha. No.

    Unfortunately, while both GitHub and GitLab support using SSH certificates for authentication (so a user can’t push to a repo unless they have a certificate signed by the configured CA), there’s currently no way to say “trust all commits with an SSH certificate signed by this CA”. I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want those commits to validate you’ll need to handle that.

    In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.
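
    This isn't the author's tool, but the general shape of such a CI check - walk a commit range and reject anything git can't verify against your allowed-signers file - can be sketched in plain git:

    ```shell
    # Assumes gpg.ssh.allowedSignersFile already points at a file containing
    # a `* cert-authority ...` line for your CA, as described above.
    verify_range() {
        # $1 = base ref, $2 = head ref
        local base="$1" head="$2" c
        for c in $(git rev-list "$base".."$head"); do
            git verify-commit "$c" 2>/dev/null || {
                echo "unverified commit: $c"
                return 1
            }
        done
        echo "all commits verified"
    }
    ```

    A pipeline would then call something like `verify_range origin/main HEAD` and fail the job on a non-zero exit - without any of the extra policy (expiry, group membership) the author's tool aims to add.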

    Doing it in hardware

    Of course, certificates don’t buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you’re on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there are various things you can do with PKCS#11, but you’ll hate yourself even more than you’ll hate me for suggesting it in the first place, and there’s ssh-tpm-agent, except it’s quite tied to Linux.

    So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows[3], but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.

    And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.

    Wait, attestation?

    Ah yes you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but if implemented this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.

    Conclusion

    Using SSH certificates for git commit signing is great - the tooling is a bit rough, but otherwise they’re basically better than every other alternative, and if you already have infrastructure for issuing SSH certificates then you can just reuse it[4] and everyone wins.


    1. Did you know you can just download people’s SSH pubkeys from GitHub at https://github.com/<username>.keys ? Now you do ↩︎

    2. Yes it is somewhat confusing that the keygen command does things other than generate keys ↩︎

    3. This is more difficult than it sounds ↩︎

    4. And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well. ↩︎

      Sam Thursfield: Status update, 21st March 2026

      news.movim.eu / PlanetGnome • 22 hours ago • 6 minutes

    Hello there,

    If you’re an avid reader of blogs, you’ll know this medium is basically dead now. Everyone switched to making YouTube videos, complete with cuts and costume changes every few seconds because, I guess, our brains work much faster now.

    The YouTube recommendation algorithm, problematic as it is, does turn up some interesting stuff, such as this video entitled “Why Work is Starting to Look Medieval”:

    It is 15 minutes long, but it does include lots of short snippets and some snipping scissors, so maybe you’ll find it a fun 15 minutes. The key point, I guess, is that before we were wage slaves we used to be craftspeople, more deeply connected to our work and with a sense of purpose. The industrial revolution marked a shift from cottage industry, where craftspeople worked with their own tools in their own house or workshop, to modern capitalism where the owners of the tools are the 1%, and the rest of us are reduced to selling our labour at whatever is the going rate.

    Then she posits that, since the invention of the personal computer, influencers and independent content creators have begun to transcend the structures of 20th century capitalism, and are returning to a more traditional relationship with work. Hence, perhaps, why nearly everyone under 18 wants to be a YouTuber. Maybe that’s a stretch.

    This message resonated with me after 20 years in the open source software world, and hopefully you can see the link. Software development is a craft. And the Free Software movement has always been in tacit opposition to capitalism, with its implied message that anyone working on a computer should have some ownership of the software tools we use: let me use it, let me improve it, and let me share it.

    I’ve read many, many takes on AI-generated code this year, and it’s really only March. I’m guilty of one of these myself: AI Predictions for 2026, in which I made a link between endless immersion in LLM-driven coding and more traditional drug addictions - a link that has now been corroborated by Steve Yegge himself. See his update “The AI Vampire” (which is also something of a critique of capitalism).

    I’ve read several takes that the Free Software movement has won now, because it is much easier to understand, share and modify programs than ever before. See, for example, this one from Bruce Perens on LinkedIn: “The advent of AI and its capability to create software quickly, with human guidance, means that we can probably have almost anything we want as Free Software.”

    I’ve also seen takes that, in fact, capitalism has won. Such as the (fictional) MALUSCorp: “Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems.”

    One take I haven’t seen is what this means for people who love the craft of building software. Software is a craft, and our tools are the operating system and the compiler. Programmers working on open source, where code serves as reference material and can live in the open for decades, will show much more pride in their code than programmers in academia and industry, whose prototypes or products just need to get the job done. The programmer is a craftsperson, just like the seamstress, the luthier and the blacksmith. But unlike clothes, guitars and horseshoes, the stuff we build is intangible. Perhaps as a result, society sees us less like craftspeople and more like weird, unpopular wizards.

    I’ve spent a lot of my career building and testing open source operating systems, as you can see from these 30 different blog posts, which include the blockbuster “Some CMake Tips”, the satisfying “Tracker 💙 Meson”, and the largely obsolete “How BuildStream uses OSTree”.

    It’s really not that I have some deep-seated desire to rewrite all of the world’s Makefiles. My interest in operating systems and build tools has always come from a desire to democratize these here computers. To free us from being locked into fixed ways of working designed by Apple, Google, Microsoft. Open source tools are great, yes, but I’m more interested in whether someone can access the full power of their computer without needing a university education. This is why I’ve found GNOME interesting over the years: it’s accessible to non-wizards, and the code is right there in the open, for anyone to change. That said, I’ve always wished GNOME would focus more on customizability, and I don’t mean adding more preferences. Look, here’s me in 2009, discovering Nix for the first time and jumping straight to this: “So Nix could give us beautiful support for testing and hacking on bits of GNOME”.

    So what happened? Plenty has changed, but I feel that hacking on bits of GNOME hasn’t become meaningfully easier in the intervening 17 years. And perhaps we can put that largely down to the tech industry’s relentless drive to sell us new computers, and our own hunger to do everything faster and better. In the 1980s, an operating system could reasonably get away with running only one program at a time. In the 1990s, you had multitasking but there was still just the one CPU, at least in my PC. I don’t think there was any point in the 2000s when I owned a GPU. In the 2010s, my monitor was small enough that I never worried about fractional scaling. And so on. For every one person working to simplify, there are a hundred more paid to innovate. Nobody gets promoted for simplicity .

    I can see a steadily growing interest in tech from people who aren’t necessarily interested in programming. If you’re not tired of videos yet, here’s a harp player discussing the firmware of a digital guitar pedal (cleverly titled “What pedal makers don’t want you to see” ). Here’s another musician discussing STM32 chips and mesh networks under the title “Gadgets For People Who Don’t Trust The Government” . This one does not have costume changes every few seconds.

    So we’re at an inflection point.

    The billions pumped into the AI bubble come from a desire by rich men to take back control of computing. It’s a feature, not a bug, that you can’t run ChatGPT on a consumer GPU, and that AI companies need absolutely all of the DRAM. They could spend that money on a programme like Outreachy, supporting people to learn and understand today’s software tools… but you don’t consolidate power through education. (The book Careless People, which I recommended last year, will show you how much tech CEOs crave raw power.)

    In another sense, AI models are a new kind of operating system, exposing the capabilities of a GPU in a radical new interface. The computer now contains a facility that can translate instructions in your native language into any well-known programming language. (Just don’t ask it to generate Whitespace ). By now you must know someone non-technical who has nevertheless automated parts of their job away by prompting ChatGPT to generate Excel macros. This is the future we were aiming for, guys!

    I’m no longer sure if the craft I care about is writing software, or getting computers to do things, or both. And I’m really not sure what this craft is going to look like in 10 or 20 years. What topics will be universally understood, what work will be open to individual craftspeople, and what tools will be available only to states and mega-corporations? Will basic computer tools be universally available and understood, like knives and saucepans in a kitchen? Will they require small-scale investment and training, like a microbrewery? Or will the whole world come to depend on a few enormous facilities in China?

    And most importantly, will I be able to share my passion for software without feeling like a weird, unpopular wizard any time soon?

      Allan Day: GNOME Foundation Update, 2026-03-20

      news.movim.eu / PlanetGnome • 2 days ago • 4 minutes

    Hello and welcome to another update on what’s been happening at the GNOME Foundation. It’s been two weeks since my last update, and there’s been plenty going on, so let’s dive straight in.

    GNOME 50!

    My update wouldn’t be complete without mentioning this week’s GNOME 50 release . It looks like an amazing release with lots of great improvements! Many thanks to everyone who contributed and made it such a success.

    The Foundation plays a critical role in these releases, whether it’s providing development infrastructure, organising events where planning takes place, or providing development funding. If you are reading this and have the means, please consider signing up as a Friend of GNOME . Even small regular donations make a huge difference.

    Board Meeting

    The Board of Directors had its regular monthly meeting on March 9th, and we had a full agenda. Highlights from the meeting included:

    • The Board agreed to sign the Keep Android Open letter, as well as endorsing the United Nations Open Source Principles .
    • We heard reports from a number of committees, including the Executive Committee, Finance Committee, Travel Committee, and Code of Conduct Committee. Committee presentations are a new addition to the Board meeting format, with the goal of pushing more activity out to committees, with the Board providing high-level oversight and coordination.
    • Creation of a new bank account was authorized, which is needed as part of our ongoing finance and accounting development effort.
    • The main discussion topic was Flathub and what the organizational arrangements could be for it in the future. There weren’t any concrete decisions made here, but the Board indicated that it’s open to different options and sees Flathub’s success as the main priority rather than being attached to any particular organisation type or location.
    • The next regular Board meeting will be on April 13th.

    Travel

    The Travel Committee met both this week and last week, as it processed the initial batch of GUADEC sponsorship applications. As a result of this work the first set of approvals have been sent out. Documentation has also been provided for those who are applying for visas for their travel.

    The membership of the current committee is quite new, and it is having to figure out processes and decision-making principles as it goes, which is making its work more intensive than might normally be the case. We are starting to write up guidelines for future funding rounds, to help smooth the process.

    Huge thanks to our committee members Asmit, Anisa, Julian, Maria, and Nirbeek, for taking on this important work.

    Conferences

    Planning and preparation for the 2026 editions of LAS and GUADEC have continued over the past fortnight. The call for papers for both events is a particular focus right now, and there are a couple of important deadlines to be aware of:

    • If you want to speak at LAS 2026 , the deadline for proposals is 23 March – that’s in just three days.
    • The GUADEC 2026 call for abstracts has been extended to 27 March , so there is one more week to submit a talk .

    There are teams behind each of these calls, reviewing and selecting proposals. Many thanks to the volunteers doing this work!

    We are also excited to have sponsors come forward to support GUADEC.

    Accounting

    The Foundation has been undertaking a program of improvements to our accounting and finance systems in recent months. Those were put on hold for the audit fieldwork that took place at the beginning of March, but now that’s done, attention has turned to the remaining work items there.

    We’ve been migrating to a new payments processing platform since the beginning of the year, and setup work has continued, including configuration to make it integrate correctly with our accounting software, migrating credit cards over from our previous solution, and creating new web forms which are going to be used for reimbursement requests in future.

    There are a number of significant advantages to the new system, like the accounting integration, which are already helping to reduce workloads, and I’m looking forward to having the final pieces of the new system in place.

    Another major change currently ongoing is that we are moving from a quarterly to a monthly cadence for our accounting. This is the cycle on which we “complete” the accounts, with all data inputted and reconciled by the end of each cycle. The move to a monthly cycle means that we will generate finance reports more frequently, allowing the Board to keep a closer view on the organisation’s finances.

    Finally, this week we also had our regular monthly “books” call with our accountant and finance advisor. This was our usual opportunity to resolve any questions that have come up in relation to the accounts, but we also discussed progress on the improvements that we’ve been making.

    Infrastructure

    On the infrastructure side, the main highlight in recent weeks has been the migration from Anubis to Fastly’s Next-Gen Web Application Firewall (WAF) for protecting our infrastructure. The result of this migration is an increased level of protection from bots, while simultaneously not getting in people’s way when they’re using our infra. The Fastly product provides sophisticated threat detection plus the ability for us to write our own fine-grained detection rules, so we can adjust firewall behaviour as we go.

    Huge thanks to Fastly for providing us with sponsorship for this service – it is a major improvement for our community and would not have been possible without their help.

    That’s it for this update. Thanks for reading and be on the lookout for the next update, probably in two weeks!

      This Week in GNOME: #241 Fifty!

      news.movim.eu / PlanetGnome • 2 days ago • 3 minutes

    Update on what happened across the GNOME project in the week from March 13 to March 20.

    This week we released GNOME 50!


    This new major release of GNOME is full of exciting changes, including improved parental controls, many accessibility enhancements, expanded document annotation capabilities, calendar updates, and much more! See the GNOME 50 release notes and developer notes for more information.

    Readers who have been following this site will already be aware of some of the new features. If you’d like to follow the development of GNOME 51 (Fall 2026), keep an eye on this page - we’ll be posting exciting news every week!

    GNOME Circle Apps and Libraries

    gtk-rs

    Safe bindings to the Rust language for fundamental libraries from the GNOME stack.

    Julian 🍃 reports

    I’ve added a chapter about accessibility to the gtk4-rs book. While I researched the topic beforehand and tested all examples with a screenreader, I would still appreciate additional feedback from people experienced with accessibility.

    Eyedropper

    Pick and format colors.

    FineFindus reports

    Eyedropper 2.2.0 is out now, bringing support for color picking without having the application open. It also now supports RGB in decimal notation and improves support for systems without a proper portal setup.

    As always, you can download the latest release from Flathub .


    Third Party Projects

    JumpLink announces

    The TypeScript type definitions generator ts-for-gir v4.0.0-beta.41 is out, and the big news is that we now have browsable API documentation for GJS TypeScript bindings, live at https://gjsify.github.io/docs/. As a bonus, the same work also greatly improved the inline TypeScript documentation: hover docs in your editor are now much richer and more complete.

    Anton Isaiev reports

    RustConn is a GTK4/libadwaita connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, and Zero Trust protocols. Core protocols use embedded Rust implementations - no external dependencies required.

    The 0.10.x series brings 8 new features and a major platform upgrade:

    New features:

    • MOSH protocol support with predict mode, UDP port range, and server binary path
    • Session recording in scriptreplay-compatible format with per-connection toggle and sensitive output sanitization
    • Text highlighting rules - regex-based pattern matching with customizable colors, per-connection and global
    • Ad-hoc broadcast - send keystrokes to multiple terminals simultaneously
    • Smart Folders - dynamic connection grouping by protocol, tags, or host glob pattern
    • Script credentials - resolve passwords from external commands with a Test button
    • Per-connection terminal theming - background, foreground, and cursor color overrides
    • CSV import/export with auto column mapping and configurable delimiter

    Platform changes:

    • GTK-rs bindings upgraded to gtk4 0.11, libadwaita 0.9, vte4 0.10
    • Flatpak runtime bumped to GNOME 50 with VTE 0.80
    • Migrated to AdwSpinner, AdwShortcutsDialog, AdwSwitchRow, and AdwWrapBox (cfg-gated)
    • FreeRDP 3.24.0 bundled in Flatpak - external RDP works out of the box on Wayland
    • rdp file association - double-click to open and connect
    • Split view now works with all VTE-based protocols

    0.10.2 is a follow-up with 11 bug fixes for session recording, MOSH dispatch, highlight rules wiring, picocom detection in Flatpak, sidebar overflow, and RDP error messages.

    https://github.com/totoshko88/RustConn https://flathub.org/en/apps/io.github.totoshko88.RustConn


    Quadrapassel

    Fit falling blocks together.

    Will Warner reports

    Quadrapassel 50.0 has been released! This release has a lot of improvements for controls and polishes the app. Here is what’s new:

    • Made game view and preview exactly fit the blocks
    • Improved game controller support
    • Stopped duplicate keyboard events
    • Replaced the libcanberra sound engine
    • Fixed many small bugs and stylistic issues

    You can get Quadrapassel on Flathub .


    Documentation

    Jan-Willem says

    This week Java was added to the programming languages section on developer.gnome.org and many of the code examples were translated to Java.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

      Jussi Pakkanen: Simple sort implementations vs production quality ones

      news.movim.eu / PlanetGnome • 3 days ago • 2 minutes

    One of the most optimized algorithms in any standard library is sorting. It is used everywhere, so it must be fast. Thousands upon thousands of developer hours have been sunk into inventing new algorithms and making sort implementations faster. Pystd has a different design philosophy, where fast compilation times and readability of the implementation have higher priority than absolute performance. Performance still very much matters - it has to be fast - but not at the cost of 10x compilation time.

    This leads to the natural question of how much slower such an implementation would be compared to a production-quality one. Could it even be faster? (Spoiler: no.) The only way to find out is to run performance benchmarks on actual code.

    To keep things simple there is only one test set: sorting 10'000'000 consecutive 64-bit integers shuffled into a random order that is the same for all algorithms. This is not an exhaustive test by any means, but you have to start somewhere. All tests used GCC 15.2 with -O2 optimization. The Pystd code was not thoroughly hand-optimized; I only fixed (some of) the obvious hotspots.
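    The setup above can be sketched as follows. This is a Rust analogue of the benchmark, not the author's actual C++ harness: generate consecutive integers, shuffle them with a fixed deterministic seed so every algorithm sees the same permutation, then time the sort.

    ```rust
    use std::time::Instant;

    // Deterministic Fisher-Yates shuffle driven by a simple LCG
    // (constants from Knuth's MMIX), so every run and every
    // algorithm sees the exact same permutation.
    fn shuffled(n: u64) -> Vec<u64> {
        let mut v: Vec<u64> = (0..n).collect();
        let mut state: u64 = 0x9E37_79B9_7F4A_7C15;
        for i in (1..v.len()).rev() {
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            let j = (state % (i as u64 + 1)) as usize;
            v.swap(i, j);
        }
        v
    }

    fn main() {
        let mut data = shuffled(10_000_000);
        let start = Instant::now();
        data.sort(); // stable sort; data.sort_unstable() would time the introsort path
        println!("sorted 10M u64 in {:?}", start.elapsed());
    }
    ```

    Using a seeded pseudo-random shuffle rather than a true random one is what makes the comparison fair across implementations.
    
    
    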

    Stable sort

    Pystd uses merge sort for stable sorting. The way the C++ standard specifies stable sort means that most implementations probably use it as well; I did not dive into the code to find out. Pystd's merge sort implementation consists of ~220 lines of code. It can be read on this page.
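    For reference, the core idea fits in a few dozen lines. This is a minimal top-down merge sort sketch, not a copy of Pystd's implementation (which adds the optimizations that get it within 5% of stdlibc++):

    ```rust
    // Minimal recursive merge sort. Stability comes from the `<=`
    // in the merge step: on ties, the element from the left half
    // (i.e. the earlier one) wins.
    fn merge_sort<T: Ord + Clone>(v: &mut [T]) {
        if v.len() <= 1 {
            return;
        }
        let mid = v.len() / 2;
        merge_sort(&mut v[..mid]);
        merge_sort(&mut v[mid..]);

        // Merge the two sorted halves through a scratch buffer.
        let mut merged = Vec::with_capacity(v.len());
        let (mut i, mut j) = (0, mid);
        while i < mid && j < v.len() {
            if v[i] <= v[j] {
                merged.push(v[i].clone());
                i += 1;
            } else {
                merged.push(v[j].clone());
                j += 1;
            }
        }
        merged.extend_from_slice(&v[i..mid]);
        merged.extend_from_slice(&v[j..]);
        v.clone_from_slice(&merged);
    }
    ```

    A production version would allocate the scratch buffer once up front instead of on every merge, which is one obvious reason real implementations are longer.
    
    
    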

    Stdlibc++ can do the sort in 0.90 seconds whereas Pystd takes 0.94 seconds. Getting within 5% with such a simple implementation is actually quite astonishing, even considering the usual caveats: it might completely fall over on a different input data distribution, and so on.

    Regular sort

    Both stdlibc++ and Pystd use introsort. Pystd's implementation has ~150 lines of code, but it also uses heapsort, which adds a further ~100 lines. Code for introsort is here, and heapsort is here.
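    The structure of introsort is easy to sketch, and it explains why heapsort comes along for the ride: quicksort does the bulk of the work, small runs fall back to insertion sort, and a recursion-depth limit diverts degenerate cases into heapsort so the worst case stays O(n log n). This is a compact illustration of the idea, not Pystd's actual code:

    ```rust
    fn introsort<T: Ord>(v: &mut [T]) {
        // Depth budget of roughly 2 * log2(n), as in typical introsorts.
        let depth = 2 * (usize::BITS - v.len().leading_zeros()) as usize;
        intro_rec(v, depth);
    }

    fn intro_rec<T: Ord>(v: &mut [T], depth: usize) {
        if v.len() <= 16 {
            insertion_sort(v); // small runs: insertion sort wins
            return;
        }
        if depth == 0 {
            heapsort(v); // quicksort degenerated; guarantee O(n log n)
            return;
        }
        let p = partition(v);
        let (lo, hi) = v.split_at_mut(p);
        intro_rec(lo, depth - 1);
        intro_rec(&mut hi[1..], depth - 1); // skip the pivot itself
    }

    // Lomuto partition around the last element; returns the pivot's
    // final index.
    fn partition<T: Ord>(v: &mut [T]) -> usize {
        let pivot = v.len() - 1;
        let mut store = 0;
        for i in 0..pivot {
            if v[i] <= v[pivot] {
                v.swap(i, store);
                store += 1;
            }
        }
        v.swap(store, pivot);
        store
    }

    fn insertion_sort<T: Ord>(v: &mut [T]) {
        for i in 1..v.len() {
            let mut j = i;
            while j > 0 && v[j - 1] > v[j] {
                v.swap(j - 1, j);
                j -= 1;
            }
        }
    }

    fn heapsort<T: Ord>(v: &mut [T]) {
        // Build a max-heap, then repeatedly move the max to the end.
        for i in (0..v.len() / 2).rev() {
            sift_down(v, i);
        }
        for end in (1..v.len()).rev() {
            v.swap(0, end);
            sift_down(&mut v[..end], 0);
        }
    }

    fn sift_down<T: Ord>(v: &mut [T], mut root: usize) {
        loop {
            let mut child = 2 * root + 1;
            if child >= v.len() {
                return;
            }
            if child + 1 < v.len() && v[child + 1] > v[child] {
                child += 1;
            }
            if v[root] >= v[child] {
                return;
            }
            v.swap(root, child);
            root = child;
        }
    }
    ```

    Real implementations differ mainly in pivot selection and the partitioning scheme, which is where most of the remaining performance gap lives.
    
    
    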

    Stdlibc++ gets the sort done in 0.76 seconds whereas Pystd takes 0.82 seconds, making it approximately 8% slower. It's not great, but getting within 10% with a few evenings' work is still a pretty good result. Especially since, and I'm speculating here, std::sort has seen a lot more optimization work than std::stable_sort because it is used more.

    For heavy duty number crunching this would be way too slow. But for moderate data set sizes the performance difference might be insignificant for many use cases.

    Note that all of these are presumably faster (I did not measure) than libc's qsort, because qsort requires an indirect function call on every comparison, i.e. the comparison method cannot be inlined.

    Where does the time go?

    Valgrind will tell you that quite easily.

    This picture shows quite clearly why big O notation can be misleading. Quicksort (the inner loop of introsort) and heapsort have "the same" average time complexity, but each call into heapsort takes approximately 4.5 times as long.


      Jakub Steiner: Friday Sketches (part 2)

      news.movim.eu / PlanetGnome • 3 days ago

    Two years have passed since I last shared my Friday app icon sketches, but the sketching itself hasn't stopped.

    For me, it's the best way to figure out the right metaphors before we move to final pixels. These sketches are just one part of the GNOME Design Team's wider effort to keep our icons consistent and meaningful—it is an endeavor that’s been going on for years.

    If you design a GNOME app following the GNOME Design Guidelines , feel free to request an icon to be made for you. If you are serious and apply for inclusion in GNOME Circle , you are way more likely to get a designer's attention.

    (A large gallery of icon sketches follows, covering apps from Articulate, Bazaar, and Carburetor through Memories, Mixtape, and Scrummy to Wardrobe, Web Apps, and eSIM.)

    Previously


      Colin Walters: LLMs and core software: human driven

      news.movim.eu / PlanetGnome • 3 days ago • 5 minutes

    It’s clear LLMs are one of the biggest changes in technology ever. The rate of progress is astounding: recently, due to a configuration mistake, I accidentally used Claude Sonnet 3.5 (released ~2 years ago) instead of Opus 4.6 for a task, looked at the output, and thought “what is this garbage?”

    But daily now, Opus 4.6 is able to generate reasonable PoC-level Rust code for complex tasks for me. It’s not perfect: it’s a combination of exhausting and exhilarating to find the 10% of absolutely bonkers/broken code that still makes it past subagents.

    So yes I use LLMs every day, but I will be clear: if I could push a button to “un-invent” them I absolutely would because I think the long term issues in larger society (not being able to trust any media, and many of the things from Dario’s recent blog etc.) will outweigh the benefits.

    But since we can’t un-invent them: here’s my opinion on how they should be used. As a baseline, I agree with a lot from this doc from Oxide about LLMs . What I want to talk about is especially around some of the norms/tools that I see as important for LLM use, following principles similar to those.

    On framing: there’s “core” software vs “bespoke”. An entirely new capability, of course, is for e.g. a nontechnical restaurant owner to use an LLM to generate (“vibe code”) a website (hopefully excepting online ordering and payments!). I’m not overly concerned about this.

    Whereas “core” software is what organizations/businesses provide and maintain for others. I work for a company (Red Hat) that produces a lot of this. I am sure no one would want to run, for real, an operating system, cluster filesystem, web browser, or monitoring system that was primarily “vibe coded”.

    And while I respect people and groups that are trying to entirely ban LLM use, I don’t think that’s viable for at least my space.

    Hence the subject of this blog is my perspective on how LLMs should be used for “core” software: not vibe coding, but using LLMs responsibly and intelligently – and always under human control and review.

    Agents should amplify and be controlled by humans

    I think most of the industry would agree we can’t give responsibility to LLMs. That means they must be overseen by humans. If they’re overseen by a human, then I think they should be amplifying what that human thinks/does as a baseline – intersected with the constraints of the task of course.

    On “amplification”: Everyone using a LLM to generate content should inject their own system prompt (e.g. AGENTS.md ) or equivalent. Here’s mine – notice I turn off all the emoji etc. and try hard to tune down bulleted lists because that’s not my style. This is a truly baseline thing to do.

    Now most LLM generated content targeted for core software is still going to need review, but just ensuring that the baseline matches what the human does helps ensure alignment.

    Pull request reviews

    Let’s focus on a very classic problem: pull request reviews. Many projects have wired up a flow such that when a PR comes in, it gets reviewed by a model automatically. Many projects and tools pitch this. We use one on some of my projects.

    But I want to get away from this because in my experience these reviews are a combination of:

    • Extremely insightful and correct things (there’s some amazing fine-tuning and tool use that must have happened to find some issues pointed out by some of these)
    • Annoying nitpicks that no one cares about (not handling spaces in a filename in a shell script used for tests)
    • Broken stuff like getting confused by things that happened after its training cutoff (e.g. Gemini especially seems to get confused by referencing the current date, and also is unaware of newer Rust features, etc)

    In practice, we just want the first of course.

    How I think it should work:

    • A pull request comes in
    • It gets auto-assigned to a human on the team for review
    • A human contributing to that project is running their own agents (wherever: could be local or in the cloud) using their own configuration (but of course still honoring the project’s default development setup and the project’s AGENTS.md etc)
    • A new containerized/sandboxed agent may be spawned automatically, or perhaps the human needs to click a button to do so – or perhaps the human sees the PR come in and thinks “this one needs a deeper review, didn’t we hit a perf issue with the database before?” and adds that to a prompt for the agent.
    • The agent prepares a draft review that only the human can see.
    • The human reviews/edits the draft PR review, and has the opportunity to remove confabulations, add their own content etc. And to send the agent back to look more closely at some code (i.e. this part can be a loop)
    • When the human is happy they click the “submit review” button.
    • Goal: it is 100% clear what parts are LLM generated vs human generated for the reader.

    I wrote this agent skill to try to make this work well, and if you search you can see it in action in a few places, though I haven’t truly tried to scale this up.

    I think the above matches the vision of LLMs amplifying humans.

    Code Generation

    There’s no doubt that LLMs can be amazing code generators, and I use them every day for that. But for any “core” software I work on, I absolutely review all of the output – not just superficially, and changes to core algorithms very closely.

    At least in my experience the reality is still there’s that percentage of the time when the agent decided to reimplement base64 encoding for no reason, or disable the tests claiming “the environment didn’t support it” etc.

    And to me it’s still a baseline for “core” software to require another human review to merge (per above!) with their own customized LLM assisting them (ideally a different model, etc).

    FOSS vs closed

    Of course, my position here is biased a bit by working on FOSS – I still very much believe in that, and working in a FOSS context can be quite different than working in a “closed environment” where a company/organization may reasonably want to (and be able to) apply uniform rules across a codebase.

    While for sure LLMs allow organizations to create their own Linux kernel filesystems or bespoke Kubernetes forks or virtual machine runtime or whatever – it’s not clear to me that it is a good idea for most to do so. I think shared (FOSS) infrastructure that is productized by various companies, provided as a service and maintained by human experts in that problem domain still makes sense. And how we develop that matters a lot.


      Alberto Ruiz: Booting with Rust: Chapter 3

      news.movim.eu / PlanetGnome • 4 days ago • 5 minutes

    In Chapter 1 I gave the context for this project and in Chapter 2 I showed the bare minimum: an ELF that Open Firmware loads, a firmware service call, and an infinite loop.

    That was July 2024. Since then, the project has gone from that infinite loop to a bootloader that actually boots Linux kernels. This post covers the journey.

    The filesystem problem

    The Boot Loader Specification expects BLS snippets in a FAT filesystem under loader/entries/ . So the bootloader needs to parse partition tables, mount FAT, traverse directories, and read files. All #![no_std], all big-endian PowerPC.

    I tried writing my own minimal FAT32 implementation, then integrating simple-fatfs and fatfs . None worked well in a freestanding big-endian environment.

    Hadris

    The breakthrough was hadris , a no_std Rust crate supporting FAT12/16/32 and ISO9660. It needed some work to get going on PowerPC though. I submitted fixes upstream for:

    • thiserror pulling in std : default features were not disabled, preventing no_std builds.
    • Endianness bug : the FAT table code read cluster entries as native-endian u32 . On x86 that’s invisible; on big-endian PowerPC it produced garbage cluster chains.
    • Performance : every cluster lookup hit the firmware’s block I/O separately. I implemented a 4MiB readahead cache for the FAT table, made the window size parametric at build time, and improved read_to_vec() to coalesce contiguous fragments into a single I/O. This made kernel loading practical.

    All patches were merged upstream.

    Disk I/O

    Hadris expects Read + Seek traits. I wrote a PROMDisk adapter that forwards to OF’s read and seek client calls, and a Partition wrapper that restricts I/O to a byte range. The filesystem code has no idea it’s talking to Open Firmware.
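    The Partition wrapper idea can be sketched like this. This is a hypothetical illustration built on std::io rather than the real no_std Open Firmware traits, and the field names are mine, but the shape is the same: all reads and seeks are clamped to a byte window on the underlying device, so the filesystem layer never escapes its partition.

    ```rust
    use std::io::{Error, ErrorKind, Read, Result, Seek, SeekFrom};

    // A window of [start, start + len) on an underlying device.
    // The real bootloader's adapter forwards to OF client calls
    // instead of std::io, but presents the same two traits.
    struct Partition<D> {
        device: D,
        start: u64, // partition offset on the device, in bytes
        len: u64,   // partition size in bytes
        pos: u64,   // current position, relative to `start`
    }

    impl<D: Read + Seek> Read for Partition<D> {
        fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
            let remaining = self.len.saturating_sub(self.pos);
            if remaining == 0 {
                return Ok(0); // EOF at the partition boundary
            }
            let want = buf.len().min(remaining as usize);
            self.device.seek(SeekFrom::Start(self.start + self.pos))?;
            let n = self.device.read(&mut buf[..want])?;
            self.pos += n as u64;
            Ok(n)
        }
    }

    impl<D: Read + Seek> Seek for Partition<D> {
        fn seek(&mut self, from: SeekFrom) -> Result<u64> {
            let new = match from {
                SeekFrom::Start(o) => o as i64,
                SeekFrom::Current(o) => self.pos as i64 + o,
                SeekFrom::End(o) => self.len as i64 + o,
            };
            if new < 0 || new as u64 > self.len {
                return Err(Error::new(ErrorKind::InvalidInput, "seek outside partition"));
            }
            self.pos = new as u64;
            Ok(self.pos)
        }
    }
    ```

    With this shape, the same filesystem code can be pointed at a whole disk, a single partition, or (in tests) an in-memory Cursor.
    
    
    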

    Partition tables: GPT, MBR, and CHRP

    PowerVM with modern disks uses GPT (via the gpt-parser crate): a PReP partition for the bootloader and an ESP for kernels and BLS entries.

    Installation media uses MBR. I wrote a small mbr-parser subcrate using explicit-endian types so little-endian LBA fields decode correctly on big-endian hosts. It recognizes FAT32, FAT16, EFI ESP, and CHRP (type 0x96 ) partitions.
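    The explicit-endian decoding boils down to this kind of thing. A sketch, not the mbr-parser code itself (names are mine): each 16-byte MBR partition entry stores its LBA start and sector count little-endian on disk, and decoding via from_le_bytes is correct on any host, whereas a native u32 read would silently byte-swap on big-endian PowerPC.

    ```rust
    // One 16-byte MBR partition table entry. Offsets per the classic
    // MBR layout: 0 = boot flag, 4 = partition type, 8..12 = LBA start
    // (little-endian), 12..16 = sector count (little-endian).
    #[derive(Debug, PartialEq)]
    struct MbrEntry {
        boot_flag: u8,
        part_type: u8,
        lba_start: u32,
        sectors: u32,
    }

    fn parse_mbr_entry(raw: &[u8; 16]) -> MbrEntry {
        MbrEntry {
            boot_flag: raw[0],
            part_type: raw[4],
            // from_le_bytes makes the on-disk byte order explicit,
            // so this decodes identically on x86 and big-endian PPC.
            lba_start: u32::from_le_bytes([raw[8], raw[9], raw[10], raw[11]]),
            sectors: u32::from_le_bytes([raw[12], raw[13], raw[14], raw[15]]),
        }
    }
    ```

    The same discipline applied to the FAT cluster table is exactly what the upstream endianness fix was about.
    
    
    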

    The CHRP type is what CD/DVD boot uses on PowerPC. For ISO9660 I integrated hadris-iso with the same Read + Seek pattern.

    The boot strategy: try GPT first, fall back to MBR, then try raw ISO9660 on the whole device (CD-ROM). This covers disk, USB, and optical media.

    The firmware allocator wall

    This cost me a lot of time.

    Open Firmware provides claim and release for memory allocation. My initial approach was to implement Rust’s GlobalAlloc by calling claim for every allocation. This worked fine until I started doing real work: parsing partitions, mounting filesystems, building vectors, sorting strings. The allocation count went through the roof and the firmware started crashing.

    It turns out SLOF has a limited number of tracked allocations. Once you exhaust that internal table, claim either fails or silently corrupts state. There is no documented limit; you discover it when things break.

    The fix was to claim a single large region at startup (1/4 of physical RAM, clamped to 16-512 MB) and implement a free-list allocator on top of it with block splitting and coalescing. Getting this right was painful: the allocator handles arbitrary alignment, coalesces adjacent free blocks, and does all this without itself allocating. Early versions had coalescing bugs that caused crashes which were extremely hard to debug – no debugger, no backtrace, just writing strings to the OF console on a 32-bit big-endian target.
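    The splitting and coalescing logic described above can be illustrated with a toy version. This is only the bookkeeping, sketched with a Vec for clarity; the real allocator cannot do that, since it must store its metadata inside the managed region, handle arbitrary alignment, and implement GlobalAlloc.

    ```rust
    // Toy free-list over one claimed region: a list of (offset, len)
    // free ranges kept sorted by offset.
    struct FreeList {
        free: Vec<(usize, usize)>,
    }

    impl FreeList {
        fn new(total: usize) -> Self {
            FreeList { free: vec![(0, total)] }
        }

        // First-fit allocation: take `size` bytes from the first block
        // that is large enough, splitting off the remainder.
        fn alloc(&mut self, size: usize) -> Option<usize> {
            let i = self.free.iter().position(|&(_, len)| len >= size)?;
            let (off, len) = self.free[i];
            if len == size {
                self.free.remove(i);
            } else {
                self.free[i] = (off + size, len - size);
            }
            Some(off)
        }

        // Re-insert a freed block, then merge with any neighbour it
        // touches -- the coalescing step that was so painful to debug.
        fn dealloc(&mut self, off: usize, size: usize) {
            let i = self.free.partition_point(|&(o, _)| o < off);
            self.free.insert(i, (off, size));
            if i + 1 < self.free.len()
                && self.free[i].0 + self.free[i].1 == self.free[i + 1].0
            {
                self.free[i].1 += self.free[i + 1].1;
                self.free.remove(i + 1);
            }
            if i > 0 && self.free[i - 1].0 + self.free[i - 1].1 == self.free[i].0 {
                self.free[i - 1].1 += self.free[i].1;
                self.free.remove(i);
            }
        }
    }
    ```

    The key property is that freeing everything collapses the list back to one block, so the firmware only ever sees the single startup claim regardless of how many Rust-level allocations happen.
    
    
    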

    And the kernel boots!

    March 7, 2026. The commit message says it all: “And the kernel boots!”

    The sequence:

    1. BLS discovery : walk loader/entries/*.conf , parse into BLSEntry structs, filter by architecture ( ppc64le ), sort by version using rpmvercmp .

    2. ELF loading : parse the kernel ELF, iterate PT_LOAD segments, claim a contiguous region, copy segments to their virtual address offsets, zero BSS.

    3. Initrd : claim memory, load the initramfs.

    4. Bootargs : set /chosen/bootargs via setprop .

    5. Jump : inline assembly trampoline – r3=initrd address, r4=initrd size, r5=OF client interface, branch to kernel:

    core::arch::asm!(
        "mr 7, 3",   // save of_client
        "mr 0, 4",   // r0 = kernel_entry
        "mr 3, 5",   // r3 = initrd_addr
        "mr 4, 6",   // r4 = initrd_size
        "mr 5, 7",   // r5 = of_client
        "mtctr 0",
        "bctr",
        in("r3") of_client,
        in("r4") kernel_entry,
        in("r5") initrd_addr as usize,
        in("r6") initrd_size as usize,
        options(nostack, noreturn)
    )
    

    One gotcha: do NOT close stdout/stdin before jumping. On some firmware, closing them corrupts /chosen and the kernel hits a machine check. We also skip calling exit or release – the kernel gets its memory map from the device tree and avoids claimed regions naturally.

    The boot menu

    I implemented a GRUB-style interactive menu:

    • Countdown : boots the default after 5 seconds unless interrupted.
    • Arrow/PgUp/PgDn/Home/End navigation .
    • ESC : type an entry number directly.
    • e : edit the kernel command line with cursor navigation and word jumping (Ctrl+arrows).

    This runs on the OF console with ANSI escape sequences. Terminal size comes from OF’s Forth interpret service ( #columns / #lines ), with serial forced to 80×24 because SLOF reports nonsensical values.

    Secure boot (initial, untested)

    IBM POWER has its own secure boot: the ibm,secure-boot device tree property (0=disabled, 1=audit, 2=enforce, 3=enforce+OS). The Linux kernel uses an appended signature format – PKCS#7 signed data appended to the kernel file, same format GRUB2 uses on IEEE 1275.

    I wrote an appended-sig crate that parses the appended signature layout, extracts an RSA key from a DER X.509 certificate (compiled in via include_bytes! ), and verifies the signature (SHA-256/SHA-512) using the RustCrypto crates, all no_std .
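    Locating the signature might look roughly like this. This sketch is an assumption based on the Linux kernel's module-signature trailer layout (fixed magic string at the end, preceded by a 12-byte descriptor whose last four bytes are the big-endian PKCS#7 blob length); it is not taken from the appended-sig crate, and the function name is mine.

    ```rust
    // Assumed trailer layout (Linux module_signature convention):
    //   [ data | PKCS#7 blob | 12-byte descriptor | magic string ]
    // where descriptor bytes 8..12 hold sig_len as big-endian u32.
    const MAGIC: &[u8] = b"~Module signature appended~\n";

    /// Split a signed image into (unsigned data, PKCS#7 blob).
    fn split_appended_sig(file: &[u8]) -> Option<(&[u8], &[u8])> {
        let body = file.strip_suffix(MAGIC)?;
        if body.len() < 12 {
            return None;
        }
        let (rest, desc) = body.split_at(body.len() - 12);
        let sig_len =
            u32::from_be_bytes([desc[8], desc[9], desc[10], desc[11]]) as usize;
        if sig_len > rest.len() {
            return None;
        }
        let (data, sig) = rest.split_at(rest.len() - sig_len);
        Some((data, sig))
    }
    ```

    The returned data slice is what gets hashed for verification, and the sig slice is fed to the PKCS#7 parser.
    
    
    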

    The unit tests pass, including an end-to-end sign-and-verify test. But I have not tested this on real firmware yet. It needs a PowerVM LPAR with secure boot enforced and properly signed kernels, which QEMU/SLOF cannot emulate. High on my list.

    The ieee1275-rs crate

    The crate has grown well beyond Chapter 2. It now provides: claim / release , the custom heap allocator, device tree access ( finddevice , getprop , instance-to-package ), block I/O, console I/O with read_stdin , a Forth interpret interface, milliseconds for timing, and a GlobalAlloc implementation so Vec and String just work.

    Published on crates.io at github.com/rust-osdev/ieee1275-rs .

    What’s next

    I would like to test the Secure Boot feature in an end-to-end setup, but I have not gotten around to requesting access to a PowerVM LPAR. Beyond that I want to refine the menu. Another idea would be to support the equivalent of a Unified Kernel Image using ELF. Who knows; if anybody finds this interesting, let me know!

    The source is at the powerpc-bootloader repository . Contributions welcome, especially from anyone with POWER hardware access.