

      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 3 months ago • 3 minutes

    Open Forms is now 0.4.0 - and the GUI Builder is here

    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, the chip failed, Wi-Fi gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

    Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." And you'd be completely wrong. To me it was normal, until my sister put it this way: "who even thought JSON for such a basic thing was a good idea, who'd even write one?" She was right. I knew it, which is why fixing this was always on the roadmap, and 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.

    Open forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

    If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you like editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.

    Also, thanks to Felipe and everyone else who shared great ideas about improving maintainability. JSON might become technical debt in the future, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

    See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
    • Maybe and just maybe an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub


      Christian Schaller: Using AI to create some hardware tools and bring back the past

      news.movim.eu / PlanetGnome • 16:07 • 18 minutes

    As I have talked about in a couple of blog posts now, I have been working a lot with AI recently as part of my day-to-day job at Red Hat, but I have also been spending a lot of evening and weekend time on it (sorry kids, pappa has switched to 1950s mode for now). One of the things I have spent time on is trying to figure out what the limitations of AI models are and what kind of use they can have for open source developers.

    One thing to mention before I start talking about my concrete efforts is that I have more and more come to the conclusion that AI is an incredible tool for hypercharging someone in their work, but I feel it tends to fall short for fully autonomous systems. In my experiments AI can do things many, many times faster than you ordinarily could. I am speaking specifically about coding here, which is what is most relevant for those of us in the open source community.

    One annoyance I have had for years as a Linux user is getting new hardware with features that are not easily available to me on Linux. So I have tried using AI to create such applications for some of my hardware, including an Elgato light and a Dell UltraSharp webcam.

    What I found with AI, based on using Google Gemini, Claude Sonnet and Opus, and OpenAI Codex, is that they all required me to direct and steer them continuously. If I let the AI just work on its own, more often than not it would end up going in circles, diverging from the route it was supposed to take, or taking shortcuts that made the output useless. On the other hand, if I kept on top of the AI, intervened, and pointed it in the right direction, it could put things together for me in very short time spans.
    My projects are also mostly what I would describe as leaf nodes, the kind of projects that are already one-person efforts in the community for the most part. There are extra considerations when contributing to bigger efforts, and a point I have seen made by others in the community too is that you need to own the patches you submit: even if an AI helped you write a patch, you still need to ensure that what you submit is in a state where it is helpful and mergeable. I know some people feel that means you need to be capable of reviewing the proposed patch and ensuring it is clean and nice before submitting it, and I agree that if you expect your patch to get merged, that has to be the case. On the other hand, I don't think AI patches are useless even if you are not able to validate them beyond 'does it fix my issue'.

    My friend and PipeWire maintainer Wim Taymans and I were talking a few years ago about what I described at the time as the problem of 'bad quality patches', long before AI-generated code was a thing. Wim's response, which I have often thought about since, was “a bad patch is often a great bug report”. That holds true for AI-generated patches too. If someone makes a patch using AI, a patch they don't have the ability to review themselves, but they test it and it fixes their problem, it can function as a clearer bug report than a written description from the user alone. Of course they should be clear in their bug report that they don't have the skills to review the patch themselves, but that they hope it can be useful as a tool for pinpointing what isn't working in the current codebase.

    Anyway, let me talk about the projects I made. They are all found on my personal website, Linuxrising.org, a site that I also used AI to update after not having touched it in years.

    Elgato Light GNOME Shell extension

    Elgato Light GNOME Shell extension

    The first project I worked on is a GNOME Shell extension for controlling my Elgato Key Wifi Lamp. The Elgato lamp is basically meant for podcasters and people doing a lot of video calls, letting them easily configure the light in their room to make a good recording. The lamp announces itself over mDNS, so it can be discovered via Avahi. For Windows and Mac the vendor provides software to control the lamp, but unfortunately not for Linux.
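    For the curious, the lamp can also be poked at by hand over HTTP. The service type, port and JSON payload below come from community documentation and my own memory rather than from anything in this extension, so treat them as assumptions:

    # Discover the lamp (assumes Elgato's _elg._tcp mDNS service type)
    avahi-browse -r _elg._tcp

    # Read the current state (port 9123 and /elgato/lights are the commonly
    # documented endpoint; not verified against this extension)
    curl http://<lamp-ip>:9123/elgato/lights

    # Turn the lamp on at 50% brightness
    curl -X PUT http://<lamp-ip>:9123/elgato/lights \
      -H 'Content-Type: application/json' \
      -d '{"numberOfLights":1,"lights":[{"on":1,"brightness":50}]}'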

    There had been GNOME Shell extensions for controlling the lamp in the past, but they had not been kept up to date and their feature set was quite limited. Anyway, I grabbed one of these old extensions and told Claude to update it for the latest version of GNOME. It took a few iterations of testing, but we eventually got there and I had a simple GNOME Shell extension that could turn the lamp off and on and adjust hue and brightness. This was a quite straightforward process because I had code that had been working at some point; it just needed some adjustments to work with the current generation of GNOME Shell.

    Once I had the basic version done I decided to take it a bit further and try to recreate the configuration dialog that the Windows application offers for the full feature set, which took quite a bit of back and forth with Claude. I found that if I ask Claude to re-implement from a screenshot, it recreates the layout of the user interface first, meaning it makes sure that if the screenshot has 10 buttons, you get a GUI with 10 buttons. You then have to iterate on the UI design, for example telling Claude you want a dark UI style to match GNOME Shell, and also on each bit of functionality in the UI. Most of the buttons in the UI didn't really do anything from the start, but when you go back and ask Claude to add specific functionality per button, it is usually able to do so.

    Elgato Light Settings Application

    So this was probably a fairly easy thing for the AI, because all the functionality of the lamp could be queried over Avahi; there were no 'secret' USB registers to be set or things like that.
    Since the application was meant to be part of the GNOME Shell extension I didn't want it to have any dependency requirements that the Shell extension itself didn't have, so I asked Claude to write this application in JavaScript, and I have to say that so far I haven't seen any major differences in the AI's ability to generate different languages. The application now reproduces most of the functionality of the Windows application. Looking back, I think it probably took me a couple of days in total to put this tool together.

    Dell Ultrasharp Webcam 4K

    Dell UltraSharp 4K settings application for Linux

    The second application on the list is a controller application for my Dell UltraSharp Webcam 4K UHD (WB7022). This is a high-end webcam that I have been using for a while, comparable to something like the Logitech BRIO 4K. It has mostly worked with the generic UVC driver since I got it, and I have been using it for my Google Meet calls and similar, but since there was no native Linux control application I could not easily access a lot of the camera's features. To address this I downloaded the Windows application installer, installed it under Windows, and took a bunch of screenshots showcasing all the features of the application. I then fed the screenshots into Claude and told it I wanted a GTK+ version of this application for Linux. I originally wanted to have Claude write it in Rust, but after hitting some issues in the PipeWire Rust bindings I decided to just use C instead.

    It took me probably 3-4 days of intermittent work to get this application working, and Claude turned out to be really good at digging into Windows binaries and finding things like USB property values. Claude was also able to analyze the screenshots and figure out the features the application needed to have. There was a lot of trial and error writing the application, but one way I was able to automate it was by building a screenshot option into the application, allowing it to programmatically take screenshots of itself. That let me tell Claude to try fixing something and then check the screenshot to see if it worked, without me having to interact with the prompt. Also, to get the user interface looking nicer, once I had all the functionality in I asked Claude to tweak the user interface to follow the GNOME Human Interface Guidelines, which greatly improved the quality of the UI.

    At this point my application should have almost all the features of the Windows application. Since it uses PipeWire underneath, it is also tightly integrated with the PipeWire media graph, allowing you to see it connect and work with your applications in PipeWire patchbay tools like Helvum. The remaining features are software features of Dell's application, like background removal, but I think that if I decided to implement those it should be as a standalone PipeWire tool that can be used with any camera, not something tied to this specific one.

    Red Hat Planet

    Red Hat Vulkan Globe

    The application shows the world's Red Hat offices and includes links to the latest Red Hat news.


    The next application on my list is called Red Hat Planet. It is mostly a fun toy, but I made it partly to revisit the Xtraceroute modernisation I blogged about earlier. As I mentioned in that post, Xtraceroute, while cute, isn't really very useful IMHO, since with the way the modern internet works your packets rarely jump around the world. Anyway, as people pointed out after I posted about the port, it wasn't an actual Vulkan application; it was a GTK+ application using the GTK+ Vulkan backend, and the globe animation itself was all software rendered.

    I decided that if I was going to revisit the Vulkan problem I wanted a different application idea than traceroute. The idea was once again a 3D rendered globe, but this one reading the coordinates of Red Hat's global offices from a file and rendering them on the globe, and alongside that providing clickable links to recent Red Hat news items. So once again maybe not the world's most useful application, but I thought it was a cute idea, and hopefully it would let me use actual Vulkan rendering this time.

    Creating this turned out to be quite the challenge (although it seems to have gotten easier since I started the effort), with Claude Opus 4.6 being more capable at writing Vulkan code than Claude Sonnet, Google Gemini or OpenAI Codex were when I started trying to create this application.
    When I started this project I had to keep extremely close tabs on the AI and what it was doing in order to force it to keep working on this as a Vulkan application, as it kept wanting to simplify with software rendering or OpenGL and sometimes would start down that route without even asking me. That hasn't happened more recently, so maybe that was a problem of the models of five months ago.

    I also discovered as part of this that rendering Vulkan inside a GTK4 application is far from trivial, and would ideally need the GTK4 developers to provide such a widget to get rendering timings and similar correct. It is one of the few times I have had Claude outright say that writing a widget like that was beyond its capabilities (I haven't tried again, so I don't know if I would get the same response today). So I moved the application to SDL3 first, which worked in the sense that I got a spinning globe with red dots on it, but that came with its own issues, since SDL is not a UI toolkit as such. So while I got the globe rendered and working, the AI struggled badly with the news area when using SDL.

    So I ended up porting the application to Qt, which again turned out to be non-trivial in terms of the trial and error it took to get right. In my mind I had a working globe using Vulkan, so how hard could it be to move it from SDL3 to Qt? But there were a million rendering issues. In the end I used the Qt Vulkan rendering example as a starting point and 'ported' the globe over bit by bit, testing at each step, to finally get a working version. The current version is a Vulkan+Qt app and it basically works, although the planet does not seem to spin correctly on AMD systems at the moment, while it works well on Intel and NVIDIA systems.

    WMDock

    WmDock fullscreen with config application.


    This project came out of a chat with Matthias Clasen over lunch where I mused about whether Claude would be able to bring the old Window Maker dockapps to GNOME and Wayland. Turns out the answer is yes, although the method of doing so changed as I worked on it.

    My initial thought was for Claude to create a shim that the old dockapps could be compiled against without any changes. That worked, but then I had a ton of dockapps showing up in things like the alt+tab menu. It also required me to restart my GNOME Shell session all the time as I was testing the extension housing the dockapps. In the end I decided that since a lot of the old dockapps don't work with modern Linux versions anyway, and would thus need to be actively ported, I should accept shipping the dockapps with the tool and port them to modern Linux technologies. This worked well and is what I currently have in the repo. I think the wildest port was taking the old webcam dockapp from V4L1 to PipeWire, although updating the sound controller from ESD to PulseAudio was also a generational jump.

    XMMS resuscitated

    XMMS brought back to life


    The last effort was reviving the old XMMS media player. I had tried asking Claude to do this for months and it kept failing, but with Opus 4.6 it plowed through and had something working in a couple of hours, with no input from me beyond kicking it off. This was a big lift, moving it from GTK2 and Esound to GTK4, GStreamer and PipeWire. One challenge I realized with bringing an old app back is that, since the themeable UI is a big part of this specific application, adding new features is a little kludgy. Anyway, I did set it up to be able to use network speakers through PipeWire, and you can also import your Spotify playlists and play those, although you need to run the Spotify application in the background to be able to play sound on your local device.

    Monkey Bubble
    Monkey Bubble game
    Monkey Bubble was a game created in the heyday of GNOME 2, and while I always thought it was a well made little game, it had never been updated to newer technologies. So I asked Claude to port it to GTK4 and use GStreamer for audio. This port was fairly straightforward, with Claude having few problems with it. I also asked Claude to add highscores using the libmanette library and network game discovery with Avahi. So some nice little improvements.

    All the applications are available either as Flatpaks or Fedora RPMs through the GitLab project page, so I hope people enjoy these applications and tools, and enjoy the blasts from the past as much as I did.

    Worries about Artificial Intelligence

    When I speak to people both inside Red Hat and outside in the community I often come across negativity or even sometimes anger towards Artificial Intelligence in the coding space. And to be clear, I too worry about where things could be heading and how it will affect my livelihood, so I am not unsympathetic to those worries at all. I probably worry about these things at least a few times a day. At the same time I don't think we can hide from or avoid this change; it is happening with or without us. We have to adapt to a world where this tool exists, just like our ancestors adapted to jobs changing due to industrialization and science before us. So do I worry about the future? Yes, I do. Do I worry about how I might personally be affected by this? Yes, I do. Do I worry about how society might change for the worse due to this? Yes, I do. But I also remind myself that I don't know the future, that people have found ways to move forward before, and that society has survived and thrived. So what I can control is staying on top of these changes myself and taking advantage of them where I can, and that is my recommendation to the wider open source community too: leverage these tools to move open source forward, while putting our weight on the scale towards good practices and policies around Artificial Intelligence.

    The Next Test and where AI might have hit a limit for me.

    All these previous efforts taught me a lot of tricks and helped me understand how I can work with an AI agent like Claude, but especially after the success with the webcam I decided to up the stakes and see if I could use Claude to help me create a driver for my Plustek OpticFilm 8200i scanner. I have zero background in any kind of driver development, and probably less than zero in the field of scanner drivers specifically. So I went down a long series of dead ends on this journey, and to this day I have not been able to get a single scan out of the scanner that even remotely resembles the images I am trying to scan.

    My idea was to have Claude analyse the Windows and Mac drivers and build me a SANE driver based on that, which turned out to be horribly naive and led nowhere. One thing I realized is that I would need to capture USB traffic to help Claude contextualize some of the findings it had from looking at the Windows and Mac drivers. I started out with Wireshark and feeding Claude the Wireshark capture logs. Claude quite soon concluded that the Wireshark logs weren't good enough and that I needed lower-level traffic capture. Buying a USB packet analyzer isn't cheap, so I had the idea that I could use one of the ARM development boards floating around the house as a USB relay, allowing me to perfectly capture the USB traffic. With some work I did manage to get my LibreComputer Solitude AML-S905D3-CC ARM board going and set it in device mode, and I had a usb-relay daemon running on the board. After a lot of back and forth, and even at one point trying to ask Claude to implement a missing feature in the USB kernel stack, I realized this would never work, and I ended up ordering a Beagle USB 480 hardware analyzer.
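    For reference, the Wireshark route goes through the usbmon kernel module; the capture step looked roughly like this (the bus number is just an example, you find the real one with lsusb):

    # Load the USB monitoring module and capture traffic on bus 1 with tshark
    sudo modprobe usbmon
    lsusb                       # find which bus the scanner sits on
    sudo tshark -i usbmon1 -w scanner-capture.pcapng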

    At about the same time I came across the chipset documentation for the Genesys Logic GL845 chip in the scanner. I assumed that between my new USB analyzer and the chipset docs it would be easy going from here on, but so far, no. I even had Claude decompile the Windows driver using Ghidra and then try to extract the needed information from the decompiled code.
    I also bought a network-controlled electrical outlet so that Claude can cycle the scanner's power on its own.

    So the problem here is that with zero scanner driver knowledge I don't even know what I should be looking for, or where I should point Claude, so I kept trying to brute force it by trial and error. I managed to make SANE detect the scanner and I got motor and lamp control going, but that is about it. I can hear the scanner motor running when I ask for a scan, but I don't know if it moves correctly. I can see light turning on and off inside the scanner, but once again I don't know if it is happening at the correct times and for the correct durations. And Claude of course has no way of knowing either, relying on me to tell it if something seems to have improved compared to how it was.

    I have now used Claude to create two tools for Claude itself to use: one uses a camera to detect what is happening with the light inside the scanner, and the other records audio so we can compare the sounds this driver produces with the sounds of a working scan done with the macOS application. I don't know if this will take me to the promised land eventually, but so far I consider my scanner driver attempt a giant failure. At the same time I do believe that if someone actually skilled in scanner driver development was doing this, they could have guided Claude to do the right things and probably would have had a working driver by now.

    So I don’t know if I hit the kind of thing that will always be hard for an AI to do, as it has to interact with things existing in the real world, or if newer versions of Claude, Gemini or Codex will suddenly get past a threshold and make this seem easy, but this is where things are at for me at the moment.


      Jussi Pakkanen: Everything old is new again: memory optimization

      news.movim.eu / PlanetGnome • 14:06 • 2 minutes

    At this point in history, AI sociopaths have purchased all the world's RAM in order to run their copyright infringement factories at full blast. Thus the amount of memory in consumer computers and phones seems to be going down. After decades of not having to care about memory usage, reducing it has very much become a thing.

    Relevant questions to this state of things include a) is it really worth it and b) what sort of improvements are even possible. The answers to these depend on the task and data set at hand. Let's examine one such case. It might be a bit contrived, unrepresentative and unfair, but on the other hand it's the one I already had available.

    Suppose you have to write a script that opens a text file, parses it as UTF-8, splits it into words according to whitespace, counts the number of times each word appears, and prints the words and counts in decreasing order (most common first).
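    (As an aside: this is the classic word-frequency exercise, and a rough shell pipeline equivalent, ignoring UTF-8 validation, fits on one line.)

    # Split on whitespace, count occurrences, sort by count in decreasing order
    tr -s '[:space:]' '\n' < input.txt | sort | uniq -c | sort -rn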

    The Python baseline

    This sounds like a job for Python. Indeed, an implementation takes fewer than 30 lines of code. Its memory consumption on a small text file looks like this.

    Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.
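    If you want to capture this kind of heap profile yourself, one way is Valgrind's massif tool; a rough sketch, where the script name is just a placeholder:

    # Record heap usage over time, then print the resulting profile
    valgrind --tool=massif python3 wordcount.py input.txt
    ms_print massif.out.<pid>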

    The native version

    A fully native C++ version using Pystd requires 60 lines of code to implement the same thing. If you ignore the boilerplate, the core functionality fits in 20 lines. The steps needed are straightforward:

    1. Mmap the input file to memory.
    2. Validate that it is UTF-8.
    3. Convert the raw data into a UTF-8 view.
    4. Split the view into words lazily.
    5. Compute the result into a hash table whose keys are string views, not strings.

    The main advantage of this is that there are no string objects. The only dynamic memory allocations are for the hash table and the final vector used for sorting and printing. All text operations use string views, which are basically just a pointer + size.

    In code this looks like the following:

    Its memory usage looks like this.

    Peak consumption is ~100 kB in this implementation. It uses only 7.7% of the amount of memory required by the Python version.

    Isn't this a bit unfair towards Python?

    In a way it is. The Python runtime has a hefty startup cost but in return you get a lot of functionality for free. But if you don't need said functionality, things start looking very different.

    But we can make this comparison even more unfair towards Python. If you look at the memory consumption graph you'll quite easily see that 70 kB is used by the C++ runtime. It reserves a bunch of memory up front so that it can do stack unwinding and exception handling even when the process is out of memory. It should be possible to build this code without exception support, in which case the total memory usage would be a mere 21 kB. Such a version would yield a 98.4% reduction in memory usage.
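    As a sketch, that experiment would start with compiler flags along these lines (GCC/Clang; the file name is a placeholder, and whether the runtime's up-front reservation actually disappears depends on how the runtime library itself was built):

    # Disable exceptions and RTTI for this translation unit
    g++ -Os -fno-exceptions -fno-rtti wordcount.cpp -o wordcount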


      Colin Walters: Agent security is just security

      news.movim.eu / PlanetGnome • 13:51 • 5 minutes

    Suddenly I have been hearing the term Landlock more in (agent) security circles. To me this is a bit weird, because while Landlock is absolutely a useful Linux security tool, it’s been a bit obscure, and that’s for good reason. It feels to me a lot like how the weird prevalence of the word delve became a clear tipoff that LLMs were the ones writing, not a human.

    Here’s my opinion: Agentic LLM AI security is just security.

    We do not need to reinvent any fundamental technologies for this. Most uses of agents one hears about provide the ability to execute arbitrary code as a feature. It’s how OpenCode, Claude Code, Cursor, OpenClaw and many more work.

    Especially let me emphasize since OpenClaw is popular for some reason right now: You should absolutely not give any LLM tool blanket read and write access to your full user account on your computer. There are many issues with that, but everyone using an LLM needs to understand just how dangerous prompt injection can be. This post is just one of many examples. Even global read access is dangerous because an attacker could exfiltrate your browser cookies or other files.

    Let’s go back to Landlock – one prominent place I’ve seen it mentioned is this project, nono.sh, which pitches itself as a new sandbox for agents. It’s not the only one, but it does indeed lean heavily on Landlock on Linux. Let’s dig into this blog post from the author. First of all, I’m glad they are working on agentic security. We both agree: unsandboxed OpenClaw (and other tools!) is a bad idea.

    Here’s where we disagree:

    With AI agents, the core issue is access without boundaries. We give agents our full filesystem permissions because that’s how Unix works. We give them network access because they need to call APIs. We give them access to our SSH keys, our cloud credentials, our shell history, our browser cookies – not because they need any of that, but because we haven’t built the tooling to say “you can have this, but not that.”

    No. We have had usable tooling for “you can have this, but not that” for well over a decade. Docker kicked off a revolution for a reason: docker run <app> is “reasonably completely isolated” from the host system. Since then of course, there’s many OCI runtime implementations, from podman to apple/container on MacOS and more.

    If you want to provide the app some credentials, you can just use bind mounts to provide them, like docker|podman|ctr -v ~/.config/somecred.json:/etc/cred.json:ro. Notice the ro there, which makes it read-only. Yes, it’s that straightforward to have “this but not that”.

    Other tools like Flatpak on Linux have leveraged Linux kernel namespacing similar to this to streamline running GUI apps in an isolated way from the host. For a decade.

    There’s far more sophisticated tooling built on top of similar container runtimes since then, from having them transparently backed by virtual machines, Kubernetes and similar projects are all about running containers at scale with lots of built up security knowledge.

    That doesn’t need reinventing. It’s generic workload technology, and agentic AI is just another workload from the perspective of kernel/host level isolation. There absolutely are some new, novel risks and issues of course: but again the core principle here is we don’t need to reinvent anything from the kernel level up.

    Security here really needs to start from defaulting to fully isolating (from the host and other apps), and then only allow-listing in what is needed. That’s again how docker run worked from the start. Also on this topic, Flatpak portals are a cool technology for dynamic resource access on a single host system.
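    To make that concrete, a default-deny invocation that allow-lists a single project directory and one read-only credential might look like this (the image and paths are just examples, not a recommendation for any particular tool):

    # Start fully isolated, then allow-list only what the agent needs.
    # Drop --network=none if the agent genuinely needs to call APIs.
    podman run --rm -it \
      --network=none \
      -v "$PWD/project:/work:rw" \
      -v "$HOME/.config/somecred.json:/etc/cred.json:ro" \
      -w /work \
      fedora:latest bash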

    So why do I think Landlock is obscure? Basically because most workloads should already be isolated per the above, and Landlock has heavy overlap with the wide variety of Linux kernel security mechanisms already in use in containers.

    The primary pitch of Landlock is more for an application to further isolate itself – it’s at its best as a complement to coarse-grained isolation techniques like virtualization or containers. One way to think of it is that container runtimes often don’t grant the privileges needed for an application to spawn its own sub-containers (for kernel attack surface reasons), but Landlock is absolutely a reasonable thing for an app to use to e.g. disable networking in a sub-process that doesn’t need it, etc.

    Of course the challenge is that not every app is easy to run in a container or virtual machine. Some workloads are most convenient with that “ambient access” to all of your data (like an IDE or just a file browser).

    But giving that ambient access by default to agentic AI is a terrible idea. So don’t do it: use (OCI) containers and allowlist in what you need.

    (There are other things nono is doing here that I find dubious or duplicative; for example I don’t see the need for a new filesystem snapshotting system when we have both git and OCI.)

    But I’m not specifically trying to pick on nono – just in the last two weeks I had to point out similar problems in two different projects I saw go by that were also pitched for AI security. One used bubblewrap, but with insufficient sandboxing, and the other was also trying to use Landlock.

    On the other hand, I do think the credential problem (that nono and others are trying to address in different ways) is somewhat specific to agentic AI, and likely does need new tooling. When deploying a typical containerized app, usually one just provisions a few relatively static credentials. In contrast, developer/user agentic AI is often a lot more freeform and dynamic, and while it’s hard to get most apps to leak credentials without completely compromising them, it’s much easier with agentic AI and prompt injection. I have thoughts on credentials, and absolutely more work here is needed.

    It’s great that people want to work on FOSS security, and AI could certainly use more people thinking about security. But I don’t think we need “next generation” security here: we should build on top of the “previous generation”. I actually use plain separate Unix users for isolation for some things, which works quite well! Running OpenShell in a secondary user account where one only logs into a select few things (i.e. not your email and online banking) is much more reasonable, although clearly a lot of care is still needed. Landlock is a fine technology but is just not there as a replacement for other sandboxing techniques. So just use containers and virtual machines because these are proven technologies. And if you take one message away from this: absolutely don’t wire up an LLM via OpenShell or a similar tool to your complete digital life with no sandboxing.


      Matthew Garrett: SSH certificates and git signing

      news.movim.eu / PlanetGnome • 1 day ago • 7 minutes

    When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface level indication, but it turns out you can just lie and if someone isn’t paying attention when merging stuff there’s certainly a risk that a commit could be merged with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR being opened by a compromised user is going to be hard to distinguish from the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.

    git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account 1, but that means that a compromised GitHub account is now also a way to alter the set of trusted keys, and also when was the last time you audited your keys and how certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.

    SSH Certificates

    And, thankfully, there is. OpenSSH supports certificates: an SSH public key that’s been signed by some trusted party, so you can now assert that it’s trustworthy in some form. SSH certificates also contain metadata in the form of principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There’s also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.
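    As a quick sketch of what issuing one looks like (the key file names and principal below are just examples), the CA signs an existing public key with ssh-keygen:

    # Sign user_key.pub with the CA key, embedding an identity, a principal
    # and a one-year validity window; this produces user_key-cert.pub
    ssh-keygen -s ca_key -I "alice@example.com" -n alice -V +52w user_key.pub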

    And, wonderfully, you can use them in git! Let’s find out how.

    Local config

    There’s two main parameters you need to set. First,

    git config set gpg.format ssh
    

    because unfortunately, for historical reasons, all the git signing config is under the gpg namespace even if you’re not using OpenPGP. Yes, this makes me sad. But you’re also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it’s stored on a smartcard or something rather than on disk). Thankfully for you, I’ve written one. It will talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK environment variable or with the -agent argument), find a certificate signed with the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you’ll have a signature.
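    Pulling that together, a minimal local setup looks something like this (the certificate path is just an example):

    git config set gpg.format ssh
    git config set user.signingkey ~/.ssh/id_ed25519-cert.pub
    # Optional: sign every commit without passing -S each time
    git config set commit.gpgsign true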

    Validating signatures

    This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen 2, which validates signatures against a file in a format that looks somewhat like authorized_keys. This lets you add something like:

    * cert-authority ssh-rsa AAAA…
    

    which will match all principals (the wildcard) and succeed if the signature is made with a certificate that’s signed by the key following cert-authority. I recommend you don’t read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn’t provide a lot of granularity around things like “Does the certificate need to be valid at this specific time” and “Should the user only be able to modify specific files” and that kind of thing, but also if you’re using GitHub or GitLab you wouldn’t need to do this at all because they’ll just do this magically and put a “verified” tag against anything with a valid signature, right?

    Haha. No.

    Unfortunately, while both GitHub and GitLab support using SSH certificates for authentication (so a user can’t push to a repo unless they have a certificate signed by the configured CA), there’s currently no way to say “Trust all commits with an SSH certificate signed by this CA”. I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want those commits to validate you’ll need to handle that.

    In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.
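    For completeness, the native verification path described above is wired up locally by telling git where the allowed signers file lives and then asking it to check signatures (the file path is an example):

    git config set gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
    # Check an individual commit, or show signature status in the log
    git verify-commit HEAD
    git log --show-signature -1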

    Doing it in hardware

    Of course, certificates don’t buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you’re on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there’s various things you can do with PKCS#11 but you’ll hate yourself even more than you’ll hate me for suggesting it in the first place, and there’s ssh-tpm-agent except it’s Linux only and quite tied to Linux.

    So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows 3, but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.

    And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.

    Wait, attestation?

    Ah yes you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but if implemented this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.

    Conclusion

    Using SSH certificates for git commit signing is great - the tooling is a bit rough but otherwise they’re basically better than every other alternative, and also if you already have infrastructure for issuing SSH certificates then you can just reuse it 4 and everyone wins.


    1. Did you know you can just download people’s SSH pubkeys from github from https://github.com/<username>.keys ? Now you do ↩︎

    2. Yes it is somewhat confusing that the keygen command does things other than generate keys ↩︎

    3. This is more difficult than it sounds ↩︎

    4. And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well. ↩︎


      Sam Thursfield: Status update, 21st March 2026

      news.movim.eu / PlanetGnome • 2 days ago • 6 minutes

    Hello there,

    If you’re an avid reader of blogs, you’ll know this medium is basically dead now. Everyone switched to making YouTube videos, complete with cuts and costume changes every few seconds because, I guess, our brains work much faster now.

    The YouTube recommendation algorithm, problematic as it is, does turn up some interesting stuff, such as this video entitled “Why Work is Starting to Look Medieval”:

    It is 15 minutes long, but it does include lots of short snippets and some snipping scissors, so maybe you’ll find it a fun 15 minutes. The key point, I guess, is that before we were wage slaves we used to be craftspeople, more deeply connected to our work and with a sense of purpose. The industrial revolution marked a shift from cottage industry, where craftspeople worked with their own tools in their own house or workshop, to modern capitalism where the owners of the tools are the 1%, and the rest of us are reduced to selling our labour at whatever is the going rate.

    Then she posits that, since the invention of the personal computer, influencers and independent content creators have begun to transcend the structures of 20th century capitalism, and are returning to a more traditional relationship with work. Hence, perhaps, why nearly everyone under 18 wants to be a YouTuber. Maybe that’s a stretch.

    This message resonated with me after 20 years in the open source software world, and hopefully you can see the link. Software development is a craft. And the Free Software movement has always been in tacit opposition to capitalism, with its implied message that anyone working on a computer should have some ownership of the software tools we use: let me use it, let me improve it, and let me share it.

    I’ve read many, many takes on AI-generated code this year, and it’s really only March. I’m guilty of one of these myself: AI Predictions for 2026, in which I made a link between endless immersion in LLM-driven coding and more traditional drug addictions that has now been corroborated by Steve Yegge himself. See his update “The AI Vampire” (which is also something of a critique of capitalism).

    I’ve read several takes that the Free Software movement has won now, because it is much easier to understand, share and modify programs than ever before. See, for example, this one from Bruce Perens on LinkedIn: “The advent of AI and its capability to create software quickly, with human guidance, means that we can probably have almost anything we want as Free Software.”

    I’ve also seen takes that, in fact, capitalism has won. Such as the (fictional) MALUSCorp: “Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems.”

    One take I haven’t seen is what this means for people who love the craft of building software. Software is a craft, and our tools are the operating system and the compiler. Programmers working on open source, where code serves as reference material and can live in the open for decades, will show much more pride in their code than programmers in academia and industry, whose prototypes or products just need to get the job done. The programmer is a craftsperson, just like the seamstress, the luthier and the blacksmith. But unlike clothes, guitars and horseshoes, the stuff we build is intangible. Perhaps as a result, society sees us less like craftspeople and more like weird, unpopular wizards.

    I’ve spent a lot of my career building and testing open source operating systems, as you can see from these 30 different blog posts, which include the blockbuster “Some CMake Tips”, the satisfying “Tracker 💙 Meson”, and the largely obsolete “How BuildStream uses OSTree”.

    It’s really not that I have some deep-seated desire to rewrite all of the world’s Makefiles. My interest in operating systems and build tools has always come from a desire to democratize these here computers. To free us from being locked into fixed ways of working designed by Apple, Google and Microsoft. Open source tools are great, yes, but I’m more interested in whether someone can access the full power of their computer without needing a university education. This is why I’ve found GNOME interesting over the years: it’s accessible to non-wizards, and the code is right there in the open, for anyone to change. That said, I’ve always wished we in GNOME focused more on customizability, and I don’t mean adding more preferences. Look, here’s me in 2009 discovering Nix for the first time and jumping straight to this: “So Nix could give us beautiful support for testing and hacking on bits of GNOME”.

    So what happened? Plenty has changed, but I feel that hacking on bits of GNOME hasn’t become meaningfully easier in the intervening 17 years. And perhaps we can put that largely down to the tech industry’s relentless drive to sell us new computers, and our own hunger to do everything faster and better. In the 1980s, an operating system could reasonably get away with running only one program at a time. In the 1990s, you had multitasking but there was still just the one CPU, at least in my PC. I don’t think there was any point in the 2000s when I owned a GPU. In the 2010s, my monitor was small enough that I never worried about fractional scaling. And so on. For every one person working to simplify, there are a hundred more paid to innovate. Nobody gets promoted for simplicity .

    I can see a steadily growing interest in tech from people who aren’t necessarily interested in programming. If you’re not tired of videos yet, here’s a harp player discussing the firmware of a digital guitar pedal (cleverly titled “What pedal makers don’t want you to see” ). Here’s another musician discussing STM32 chips and mesh networks under the title “Gadgets For People Who Don’t Trust The Government” . This one does not have costume changes every few seconds.

    So we’re at an inflection point.

    The billions pumped into the AI bubble come from a desire by rich men to take back control of computing. It’s a feature, not a bug, that you can’t run ChatGPT on a consumer GPU, and that AI companies need absolutely all of the DRAM. They could spend that money on a programme like Outreachy, supporting people to learn and understand today’s software tools… but you don’t consolidate power through education. (The book Careless People, which I recommended last year, will show you how much tech CEOs crave raw power.)

    In another sense, AI models are a new kind of operating system, exposing the capabilities of a GPU in a radical new interface. The computer now contains a facility that can translate instructions in your native language into any well-known programming language. (Just don’t ask it to generate Whitespace ). By now you must know someone non-technical who has nevertheless automated parts of their job away by prompting ChatGPT to generate Excel macros. This is the future we were aiming for, guys!

    I’m no longer sure if the craft I care about is writing software, or getting computers to do things, or both. And I’m really not sure what this craft is going to look like in 10 or 20 years. What topics will be universally understood, what work will be open to individual craftspeople, and what tools will be available only to states and mega-corporations? Will basic computer tools be universally available and understood, like knives and saucepans in a kitchen? Will they require small-scale investment and training, like a microbrewery? Or will the whole world come to depend on a few enormous facilities in China?

    And most importantly, will I be able to share my passion for software without feeling like a weird, unpopular wizard any time soon?


      Allan Day: GNOME Foundation Update, 2026-03-20

      news.movim.eu / PlanetGnome • 3 days ago • 4 minutes

    Hello and welcome to another update on what’s been happening at the GNOME Foundation. It’s been two weeks since my last update, and there’s been plenty going on, so let’s dive straight in.

    GNOME 50!

    My update wouldn’t be complete without mentioning this week’s GNOME 50 release . It looks like an amazing release with lots of great improvements! Many thanks to everyone who contributed and made it such a success.

    The Foundation plays a critical role in these releases, whether it’s providing development infrastructure, organising events where planning takes place, or providing development funding. If you are reading this and have the means, please consider signing up as a Friend of GNOME . Even small regular donations make a huge difference.

    Board Meeting

    The Board of Directors had its regular monthly meeting on March 9th, and we had a full agenda. Highlights from the meeting included:

    • The Board agreed to sign the Keep Android Open letter, as well as endorsing the United Nations Open Source Principles .
    • We heard reports from a number of committees, including the Executive Committee, Finance Committee, Travel Committee, and Code of Conduct Committee. Committee presentations are a new addition to the Board meeting format, with the goal of pushing more activity out to committees, with the Board providing high-level oversight and coordination.
    • Creation of a new bank account was authorized, which is needed as part of our ongoing finance and accounting development effort.
    • The main discussion topic was Flathub and what the organizational arrangements could be for it in the future. There weren’t any concrete decisions made here, but the Board indicated that it’s open to different options and sees Flathub’s success as the main priority rather than being attached to any particular organisation type or location.
    • The next regular Board meeting will be on April 13th.

    Travel

    The Travel Committee met both this week and last week, as it processed the initial batch of GUADEC sponsorship applications. As a result of this work the first set of approvals have been sent out. Documentation has also been provided for those who are applying for visas for their travel.

    The membership of the current committee is quite new and it is having to figure out processes and decision-making principles as it goes, which is making its work more intensive than might normally be the case. We are starting to write up guidelines for future funding rounds, to help smooth the process.

    Huge thanks to our committee members Asmit, Anisa, Julian, Maria, and Nirbeek, for taking on this important work.

    Conferences

    Planning and preparation for the 2026 editions of LAS and GUADEC have continued over the past fortnight. The call for papers for both events is a particular focus right now, and there are a couple of important deadlines to be aware of:

    • If you want to speak at LAS 2026 , the deadline for proposals is 23 March – that’s in just three days.
    • The GUADEC 2026 call for abstracts has been extended to 27 March , so there is one more week to submit a talk .

    There are teams behind each of these calls, reviewing and selecting proposals. Many thanks to the volunteers doing this work!

    We are also excited to have sponsors come forward to support GUADEC.

    Accounting

    The Foundation has been undertaking a program of improvements to our accounting and finance systems in recent months. Those were put on hold for the audit fieldwork that took place at the beginning of March, but now that’s done, attention has turned to the remaining work items there.

    We’ve been migrating to a new payments processing platform since the beginning of the year, and setup work has continued, including configuration to make it integrate correctly with our accounting software, migrating credit cards over from our previous solution, and creating new web forms which are going to be used for reimbursement requests in future.

    There are a number of significant advantages to the new system, like the accounting integration, which are already helping to reduce workloads, and I’m looking forward to having the final pieces of the new system in place.

    Another major change currently under way is that we are moving from a quarterly to a monthly accounting cadence. This is the cycle on which we “complete” the accounts, with all data entered and reconciled by the end of each cycle. The move to a monthly cycle means that we will generate finance reports more frequently, which will allow the Board to keep a closer view on the organisation’s finances.

    Finally, this week we also had our regular monthly “books” call with our accountant and finance advisor. This was our usual opportunity to resolve any questions that have come up in relation to the accounts, but we also discussed progress on the improvements that we’ve been making.

    Infrastructure

    On the infrastructure side, the main highlight in recent weeks has been the migration from Anubis to Fastly’s Next-Gen Web Application Firewall (WAF) for protecting our infrastructure. The result of this migration will be an increased level of protection from bots, while not getting in people’s way when they’re using our infra. The Fastly product provides sophisticated threat detection plus the ability for us to write our own fine-grained detection rules, so we can adjust firewall behaviour as we go.

    Huge thanks to Fastly for providing us with sponsorship for this service – it is a major improvement for our community and would not have been possible without their help.

    That’s it for this update. Thanks for reading and be on the lookout for the next update, probably in two weeks!

    • Pl chevron_right

      This Week in GNOME: #241 Fifty!

      news.movim.eu / PlanetGnome • 3 days ago • 3 minutes

    Update on what happened across the GNOME project in the week from March 13 to March 20.

    This week we released GNOME 50!

    GNOME 50 release banner

    This new major release of GNOME is full of exciting changes, including improved parental controls, many accessibility enhancements, expanded document annotation capabilities, calendar updates, and much more! See the GNOME 50 release notes and developer notes for more information.

    Readers who have been following this site will already be aware of some of the new features. If you’d like to follow the development of GNOME 51 (Fall 2026), keep an eye on this page - we’ll be posting exciting news every week!

    GNOME Circle Apps and Libraries

    gtk-rs

    Safe bindings to the Rust language for fundamental libraries from the GNOME stack.

    Julian 🍃 reports

    I’ve added a chapter about accessibility to the gtk4-rs book. While I researched the topic beforehand and tested all examples with a screen reader, I would still appreciate additional feedback from people experienced with accessibility.

    Eyedropper

    Pick and format colors.

    FineFindus reports

    Eyedropper 2.2.0 is out now, bringing support for color picking without having the application open. It also now supports RGB in decimal notation and improves support for systems without a proper portal setup.

    As always, you can download the latest release from Flathub.

    Eyedropper screenshot

    Third Party Projects

    JumpLink announces

    The TypeScript type definitions generator ts-for-gir v4.0.0-beta.41 is out, and the big news is that we now have browsable API documentation for GJS TypeScript bindings, live at https://gjsify.github.io/docs/. As a bonus, the same work also greatly improved the inline TypeScript documentation; hover docs in your editor are now much richer and more complete.

    Anton Isaiev reports

    RustConn is a GTK4/libadwaita connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, and Zero Trust protocols. Core protocols use embedded Rust implementations - no external dependencies required.

    The 0.10.x series brings 8 new features and a major platform upgrade:

    New features:

    • MOSH protocol support with predict mode, UDP port range, and server binary path
    • Session recording in scriptreplay-compatible format with per-connection toggle and sensitive output sanitization
    • Text highlighting rules - regex-based pattern matching with customizable colors, per-connection and global
    • Ad-hoc broadcast - send keystrokes to multiple terminals simultaneously
    • Smart Folders - dynamic connection grouping by protocol, tags, or host glob pattern
    • Script credentials - resolve passwords from external commands with a Test button
    • Per-connection terminal theming - background, foreground, and cursor color overrides
    • CSV import/export with auto column mapping and configurable delimiter

    Platform changes:

    • GTK-rs bindings upgraded to gtk4 0.11, libadwaita 0.9, vte4 0.10
    • Flatpak runtime bumped to GNOME 50 with VTE 0.80
    • Migrated to AdwSpinner, AdwShortcutsDialog, AdwSwitchRow, and AdwWrapBox (cfg-gated)
    • FreeRDP 3.24.0 bundled in Flatpak - external RDP works out of the box on Wayland
    • .rdp file association - double-click to open and connect
    • Split view now works with all VTE-based protocols

    0.10.2 is a follow-up with 11 bug fixes for session recording, MOSH dispatch, highlight rules wiring, picocom detection in Flatpak, sidebar overflow, and RDP error messages.

    https://github.com/totoshko88/RustConn

    https://flathub.org/en/apps/io.github.totoshko88.RustConn

    RustConn screenshots

    Quadrapassel

    Fit falling blocks together.

    Will Warner reports

    Quadrapassel 50.0 has been released! This release brings a number of control improvements and general polish. Here is what’s new:

    • Made game view and preview exactly fit the blocks
    • Improved game controller support
    • Stopped duplicate keyboard events
    • Replaced the libcanberra sound engine
    • Fixed many small bugs and stylistic issues

    You can get Quadrapassel on Flathub.

    Quadrapassel screenshot

    Documentation

    Jan-Willem says

    This week Java was added to the programming languages section on developer.gnome.org and many of the code examples were translated to Java.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

    • Pl chevron_right

      Jussi Pakkanen: Simple sort implementations vs production quality ones

      news.movim.eu / PlanetGnome • 4 days ago • 2 minutes

    One of the most optimized algorithms in any standard library is sorting. It is used everywhere, so it must be fast. Thousands upon thousands of developer hours have been sunk into inventing new algorithms and making sort implementations faster. Pystd has a different design philosophy, where fast compilation times and readability of the implementation have higher priority than absolute performance. Perf still very much matters; it has to be fast, but not at the cost of 10x compilation times.

    This leads to the natural question of how much slower such an implementation would be compared to a production quality one. Could it even be faster? (Spoilers: no) The only way to find out is to run performance benchmarks on actual code.

    To keep things simple there is only one test set: sorting 10'000'000 consecutive 64-bit integers that have been shuffled into a random order, the same order for all algorithms. This is not an exhaustive test by any means, but you have to start somewhere. All tests used GCC 15.2 with -O2 optimization. The Pystd code was not thoroughly hand-optimized; I only fixed (some of the) obvious hotspots.
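
    For illustration, one plausible way to build that kind of test set looks like the sketch below. This is not the author's actual benchmark harness; the function name and the fixed seed are made up for the example.

    // Hedged sketch: 10'000'000 consecutive 64-bit integers, shuffled once with
    // a fixed seed so every algorithm is handed the same random order.
    #include <algorithm>
    #include <cstdint>
    #include <numeric>
    #include <random>
    #include <vector>

    std::vector<int64_t> make_test_data() {
        std::vector<int64_t> data(10'000'000);
        std::iota(data.begin(), data.end(), int64_t{0});  // 0, 1, 2, ...
        std::mt19937_64 rng(42);                          // fixed seed, same shuffle every run
        std::shuffle(data.begin(), data.end(), rng);
        return data;
    }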

    Stable sort

    Pystd uses mergesort for stable sorting. The way the C++ standard specifies stable sort means that most implementations probably use it as well, though I did not dive into the code to find out. Pystd's merge sort implementation consists of ~220 lines of code. It can be read on this page.
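
    To give a feel for the technique, here is a minimal top-down merge sort in generic C++. This is a hedged illustration of the algorithm, not Pystd's actual implementation, which handles buffer management and small ranges more carefully.

    // Minimal top-down merge sort sketch. The merge keeps equal elements in
    // their original order, which is what makes the sort stable.
    #include <cstddef>
    #include <vector>

    template <typename T>
    void merge_sort(std::vector<T>& v, std::size_t lo, std::size_t hi, std::vector<T>& tmp) {
        if (hi - lo < 2)
            return;
        const std::size_t mid = lo + (hi - lo) / 2;
        merge_sort(v, lo, mid, tmp);
        merge_sort(v, mid, hi, tmp);
        std::size_t i = lo, j = mid, k = lo;
        while (i < mid && j < hi)
            tmp[k++] = (v[j] < v[i]) ? v[j++] : v[i++];  // prefer the left half on ties
        while (i < mid) tmp[k++] = v[i++];
        while (j < hi)  tmp[k++] = v[j++];
        for (std::size_t n = lo; n < hi; ++n)
            v[n] = tmp[n];
    }

    template <typename T>
    void stable_merge_sort(std::vector<T>& v) {
        std::vector<T> tmp(v.size());
        merge_sort(v, 0, v.size(), tmp);
    }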

    Libstdc++ can do the sort in 0.9 seconds whereas Pystd takes 0.94 seconds. Getting to within 5% with such a simple implementation is actually quite astonishing, even allowing for the usual caveats, such as the possibility that it falls over completely with a different input data distribution.

    Regular sort

    Both libstdc++ and Pystd use introsort. Pystd's implementation has ~150 lines of code, but it also uses heapsort, which adds a further ~100 lines. The code for introsort is here, and heapsort is here.
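
    The core idea, sketched roughly below, is quicksort with a recursion-depth budget: when the budget runs out, the remaining range is handed to heapsort so the worst case stays O(n log n). This is a generic illustration of introsort rather than Pystd's code; the partition scheme and depth formula here are just one common choice.

    // Rough introsort sketch: quicksort until a depth budget is exhausted, then
    // fall back to heapsort (std::make_heap + std::sort_heap) for that range.
    #include <algorithm>
    #include <cmath>
    #include <utility>
    #include <vector>

    template <typename T>
    void introsort_impl(std::vector<T>& v, long lo, long hi, int depth_left) {
        if (hi - lo < 2)
            return;
        if (depth_left == 0) {
            std::make_heap(v.begin() + lo, v.begin() + hi);
            std::sort_heap(v.begin() + lo, v.begin() + hi);
            return;
        }
        // Simple Lomuto partition around the last element.
        T pivot = v[hi - 1];
        long store = lo;
        for (long i = lo; i < hi - 1; ++i)
            if (v[i] < pivot)
                std::swap(v[i], v[store++]);
        std::swap(v[store], v[hi - 1]);
        introsort_impl(v, lo, store, depth_left - 1);
        introsort_impl(v, store + 1, hi, depth_left - 1);
    }

    template <typename T>
    void introsort(std::vector<T>& v) {
        const int depth_budget = 2 * static_cast<int>(std::log2(v.size() + 1));
        introsort_impl(v, 0, static_cast<long>(v.size()), depth_budget);
    }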

    Libstdc++ gets the sort done in 0.76 seconds whereas Pystd takes 0.82 seconds, making it approximately 8% slower. That's not great, but getting within 10% with a few evenings' work is still a pretty good result, especially since (and I'm speculating here) std::sort has seen a lot more optimization work than std::stable_sort because it is used more.

    For heavy duty number crunching this would be way too slow. But for moderate data set sizes the performance difference might be insignificant for many use cases.

    Note that all of these are faster (though I did not measure it) than libc's qsort, because qsort requires an indirect function call on every comparison, i.e. the comparison function cannot be inlined.
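
    The difference is visible in the call shape: qsort's comparator is an opaque function pointer invoked once per comparison, while std::sort sees the comparison at compile time and can inline it. A small hedged sketch, with names invented for the example:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    // C-style comparator: reached through a function pointer on every comparison.
    static int cmp_i64(const void* a, const void* b) {
        const int64_t x = *static_cast<const int64_t*>(a);
        const int64_t y = *static_cast<const int64_t*>(b);
        return (x > y) - (x < y);
    }

    void sort_both_ways(int64_t* data, std::size_t n) {
        std::qsort(data, n, sizeof(int64_t), cmp_i64);  // indirect call per comparison
        std::sort(data, data + n);                      // operator< can be inlined into the sort loop
    }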

    Where does the time go?

    Valgrind will tell you that quite easily.

    The resulting profile shows quite clearly why big-O notation can be misleading: both quicksort (the inner loop of introsort) and heapsort have “the same” average time complexity, but every call to heapsort takes approximately 4.5 times as long.