

      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 3 months from now • 3 minutes


    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, the chip failed, Wi-Fi gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." And you'd be completely wrong. To me it was normal, until my sister put it this way: "who even thought JSON for such a basic thing is a good idea? Who'd even write one?" She was right, I knew it, and so this was always on the roadmap to fix, which 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.
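To give a flavour of what the builder saves, a config in this spirit might look like the following. This is an illustrative sketch only, not the actual Open Forms schema:

    {
      "title": "GUADEC booth feedback",
      "fields": [
        { "type": "text",   "label": "Name" },
        { "type": "choice", "label": "How did you hear about us?",
          "options": ["Talk", "Booth", "Friend"] },
        { "type": "text",   "label": "Comments" }
      ]
    }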

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.

Open Forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.
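As one hedged illustration (not Open Forms' actual UI definition): adaptive Libadwaita containers such as AdwClamp give you a single layout that works from phone to desktop, e.g. in GtkBuilder XML:

    <!-- Illustrative sketch: AdwClamp caps the content width on a desktop
         and lets it fill the screen on a phone, with no breakpoints. -->
    <object class="AdwClamp">
      <property name="maximum-size">600</property>
      <child>
        <object class="GtkListBox" id="field_list">
          <style>
            <class name="boxed-list"/>
          </style>
        </object>
      </child>
    </object>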

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you like editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.

Also, thanks to Felipe and all the others who gave great ideas about increasing maintainability. JSON might become technical debt in the future, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
• Maybe, and just maybe, an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub


      Sebastian Wick: Three Little Rust Crates

      news.movim.eu / PlanetGnome • 0:15 • 2 minutes

    I published three Rust crates:

• name-to-handle-at: Safe, low-level Rust bindings for the Linux name_to_handle_at and open_by_handle_at system calls
• pidfd-util: Safe Rust wrapper for Linux process file descriptors (pidfd)
• listen-fds: A Rust library for handling systemd socket activation

    They might seem like rather arbitrary, unconnected things – but there is a connection!

systemd socket activation passes file descriptors and a bit of metadata as environment variables to the activated process. If the activated process exec’s another program, the file descriptors get passed along because they are not CLOEXEC. If that process then picks them up, things could go very wrong. So the activated process is supposed to mark the file descriptors CLOEXEC and unset the socket activation environment variables. If a process doesn’t do this, for whatever reason, the same problems can arise. So there is another mechanism to help prevent it: another bit of metadata contains the PID of the target. Processes can check it against their own PID to figure out if they were the target of the activation, without having to depend on all other processes doing the right thing.
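As a sketch, this is roughly the dance a library like listen-fds has to do internally (simplified: no error handling, and the libc crate is assumed for the fcntl call):

    use std::os::fd::RawFd;

    /// systemd passes activated sockets starting at fd 3.
    const SD_LISTEN_FDS_START: RawFd = 3;

    /// Consume socket-activation fds, but only if we are the intended target.
    fn listen_fds() -> Option<Vec<RawFd>> {
        // Check the PID metadata against our own PID.
        let pid: u32 = std::env::var("LISTEN_PID").ok()?.parse().ok()?;
        if pid != std::process::id() {
            return None;
        }
        let n: RawFd = std::env::var("LISTEN_FDS").ok()?.parse().ok()?;
        // Unset the variables so an exec'd child doesn't also "consume" them.
        std::env::remove_var("LISTEN_PID");
        std::env::remove_var("LISTEN_FDS");
        let fds: Vec<RawFd> = (SD_LISTEN_FDS_START..SD_LISTEN_FDS_START + n).collect();
        // Mark each fd CLOEXEC so it doesn't leak across exec.
        for &fd in &fds {
            unsafe { libc::fcntl(fd, libc::F_SETFD, libc::FD_CLOEXEC) };
        }
        Some(fds)
    }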

PIDs however are racy because they wrap around pretty fast, and that’s why nowadays we have pidfds. They are file descriptors which act as a stable handle to a process and avoid the ID wrap-around issue. Socket activation with systemd nowadays also passes a pidfd ID. A pidfd ID however is not the same as a pidfd file descriptor! It is the 64-bit inode of the pidfd file descriptor on the pidfd filesystem. This has the advantage that systemd doesn’t have to install another file descriptor in the target process which might not get closed. It can just put the pidfd ID number into the $LISTEN_PIDFDID environment variable.

Getting the inode of a file descriptor doesn’t sound hard. fstat(2) fills out struct stat, which has the st_ino field. The problem is that it has a type of ino_t, which is 32 bits on some systems, so we might end up with a process identifier which wraps around pretty fast again.

We can however use the name_to_handle_at syscall on the pidfd to get a struct file_handle with an f_handle field. The man page helpfully says that “the caller should treat the file_handle structure as an opaque data type”. We’re going to ignore that, though, because at least on the pidfd filesystem, the first 64 bits are the 64-bit inode. With systemd already depending on this and the kernel rule of “don’t break user-space”, this is now API, no matter what the man page tells you.
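A sketch of that lookup with a raw syscall (struct layout per the man page; the real name-to-handle-at crate wraps this safely):

    use std::os::fd::{AsRawFd, BorrowedFd};

    /// Return the 64-bit inode of a pidfd on the pidfd filesystem.
    fn pidfd_id(pidfd: BorrowedFd) -> std::io::Result<u64> {
        const MAX_HANDLE_SZ: usize = 128;
        #[repr(C)]
        struct FileHandle {
            handle_bytes: u32,
            handle_type: i32,
            f_handle: [u8; MAX_HANDLE_SZ],
        }
        let mut fh = FileHandle {
            handle_bytes: MAX_HANDLE_SZ as u32,
            handle_type: 0,
            f_handle: [0; MAX_HANDLE_SZ],
        };
        let mut mount_id: libc::c_int = 0;
        // AT_EMPTY_PATH plus an empty path: operate on the fd itself.
        let rc = unsafe {
            libc::syscall(
                libc::SYS_name_to_handle_at,
                pidfd.as_raw_fd(),
                b"\0".as_ptr(),
                &mut fh as *mut FileHandle,
                &mut mount_id,
                libc::AT_EMPTY_PATH,
            )
        };
        if rc != 0 {
            return Err(std::io::Error::last_os_error());
        }
        // The first 64 bits of the "opaque" handle are the inode.
        Ok(u64::from_ne_bytes(fh.f_handle[..8].try_into().unwrap()))
    }

Compare the result against $LISTEN_PIDFDID and you have the wrap-around-free version of the PID check from above.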

    So there you have it. It’s all connected.

Obviously both pidfds and name_to_handle_at have more exciting uses, many of which serve my broader goal: making Varlink services a first-class citizen. More about that another time.


      Lennart Poettering: Mastodon Stories for systemd v260

      news.movim.eu / PlanetGnome • 18 hours ago • 1 minute

On March 17 we released systemd v260 into the wild.

In the weeks leading up to that release (and since then) I have posted a series of posts to Mastodon about key new features in this release, under the #systemd260 hashtag. In case you aren't using Mastodon but would like to read up, here's a list of all 21 posts:

I intend to do a similar series of posts for the next systemd release (v261), hence if you haven't left tech Twitter for Mastodon yet, now is your opportunity.

    My series for v261 will begin in a few weeks most likely, under the #systemd261 hash tag.

In case you are interested, here is the corresponding blog story for systemd v259, here for v258, here for v257, and here for v256.


      Andy Wingo: free trade and the left, quater: witches

      news.movim.eu / PlanetGnome • 19 hours ago • 9 minutes

Good evening. Tonight, we wrap up our series on free trade and the left. To recap where we were: I started by retelling the story that free trade improves overall productivity, but expressed reservations about the way in which it does so: plant closures and threats thereof, regulatory arbitrage, and so on. Then we went back in history, discussing the progressive roots of free trade as a cause of the peace-and-justice crowd in the 19th century. Then we looked at the leading exponents of free trade in the 20th century, the neoliberals, ending in an odd place: instead of free trade being a means to the ends of peace and prosperity, neoliberalism turns this on its head, holding instead that war, immiseration, apartheid, dictatorship, ecological disaster, all are justified if they serve the ends of the “free market”, of which free trade is a component.

When I make this list of evils I find myself back in 1999, certain that “we” were right then to shut down the WTO meetings in Seattle. With the distance of time, I start to wonder, not about then, but about now: for all the evil of our days, Trump at least has the virtue of making clear that trade barriers have a positive dot-product with acts of war. As someone who lives in the banlieue of Geneva, I am always amused when I find myself tut-tutting over the defunding of this or that institution of international collaboration.

I started this series by calling out four works. Pax Economica and Globalists have had adequate treatment. The third, Webs of Power, by Starhawk, is one that I have long seen as a bit of an oddball; forgive my normie white boy (derogatory) sensibilities, but I have often wondered how a book by a voice of “earth-based spirituality and Goddess religion” ended up on my shelf. I am an atheist. How much woo is allowed to me?

    choice of axiom

Conventional wisdom is to treat economists seriously, and Wiccans less so. In this instance, I have my doubts. The issue is that a neoliberal is at the same time a true believer in markets, and a skilled jurist. In service of the belief, any rhetorical device is permissible, if it works; if someone comes now and tries to tell me that the EU-Mercosur agreement is a good thing because of its effect on capybara populations, my first reaction is to doubt them, because maybe they are a neoliberal, and if so they would literally say anything.

    Whereas if Starhawk has this Earth-mother-spiritual vibe... who am I to say? Yes, I think religion on the whole is a predatory force on vulnerable people, but that doesn’t mean that her interpretation of the web of life as divine is any less legitimate than neoliberal awe of the market. Let’s hear her argument and get on with things.

    Starhawk’s book has three parts. The first is an as-I-lived-it chronicle, going from Seattle to Washington to Prague to Quebec City to Genoa, and thence to 9/11 and its aftermath, describing what it was like to be an activist seeking to disrupt the various WTO-adjacent meetings, seeking to build something else. She follows this up with 80 pages of contemporary-to-2002 topics such as hierarchy within the movement, nonviolence vs black blocs, ecological principles, cultural appropriation, and so on.

These first two sections inform the final 20 pages, in which Starhawk attempts to synthesize what it is that “we” wanted, as a kind of memento and hopefully a generator of actions to come. She comes up with a list of nine principles, which I’ll just quote here because I don’t have an editor (the joke’s on all of us!):

    1. We must protect the viability of the life-sustaining systems of the planet, which are everywhere under attack.
    2. A realm of the sacred exists, of things too precious to be commodified, and must be respected.
    3. Communities must control their own resources and destinies.
    4. The rights and heritages of indigenous communities must be acknowledged and respected.
    5. Enterprises must be rooted in communities and be responsible to communities and to future generations.
    6. Opportunity for human beings to meet their needs and fulfill their dreams and aspirations should be open to all.
    7. Labor deserves just compensation, security, and dignity.
    8. The human community has a collective responsibility to assure the basic means of life, growth, and development for all its members.
    9. Democracy means that all people have a voice in the decisions that affect them, including economic decisions.

Now friends, this is Starhawk’s list, not mine, and a quarter-century-old list at that. I’m not here to judge it, though I think it’s not bad; what I find interesting is its multifaceted nature: contrasted with the cybernetic awe of late neoliberalism, it’s actually the Witch who has the more down-to-earth concerns: a planet to live on, a Rawlsian concern with justice, and a control of the economic by the people.

    which leaves us

    Former European Central Bank president Mario Draghi published a report some 18 months ago diagnosing a European malaise and proposing a number of specific remedies. I find that we on my part of the left are oft ill-equipped to engage with the problem he identifies, not to mention the solutions. The whole question of productivity is very technical, to the extent that we might consider it owned by our enemies: our instinct is to deflect, “productivity for what”, that sort of thing. Worse, if we do concede the problem, we haven’t spent as much time sparring in the gyms of comparative advantage; we risk a first-round knockout. We come with Starhawk’s list in hand, and they smile at us condescendingly: “very nice but we need to focus on the economy, you know,” and we lose again.

    But Starhawk was not wrong. We do need a set of principles that we can use to analyze the present and plot a course to the future. I do not pretend to offer such a set today, but after having looked into the free trade question over the last couple months, I have reached two simple conclusions, which I will share with you now.

The first is that, from an intellectual point of view, we should just ignore the neoliberals; they are not serious people. That’s not a value judgment on the price mechanism, but rather one on those that value nothing else: that whereas classical liberalism was a means to an end, neoliberalism admits no other end than commerce, and admits any means that furthers its end. And so, we can just ignore them. If neoliberals were the only ones thinking about productivity, well, we might need new branches of economics. Fortunately that’s not the case. Productivity is but one dimension of the good, and it is our collective political task to choose a point from the space of the possible according to our collective desires.

    The second conclusion is that we should take back free trade from our enemies on the right. We are one people, but divided into states by historical accident. Although there is a productivity argument for trade, we don’t have to limit ourselves to it: the bond that one might feel between Colorado and Wyoming should be the same between Italy and Tunisia, between Canada and Mexico, indeed between France and Brasil. One people, differentiated but together, sharing ideas and, yes, things. Internationalism, not nationalism.

    There is no reason to treat free trade as the sole criterion against which to judge a policy. States are heterogeneous: what works for the US might not be right for Haiti; states differ in the degree that they internalize environmental impacts; and they differ as regards public services. We can take these into account via policy, but our goal should be progress for all.

    So while Thomas Piketty is right to decry a kind of absolutism among European decisionmakers regarding free trade , I can’t help but notice a chauvinist division being set up in the way we leftists are inclined to treat these questions: we in Europe are one bloc, despite e.g. very different carbon impacts of producing a dishwasher in Poland versus Spain, whereas a dishwasher from China belongs to a different, worse, more sinful category.

    and mercosur?

To paraphrase Marley’s ghost, mankind is my business. I want an ever closer union with my brothers and sisters in Uruguay and Zambia and Cambodia and Palestine. Trade is a part of it. All things being equal, we should want to trade with Chile. We on the left should not oppose free trade with Mercosur out of a principle that goods produced far away are necessarily a bad thing.

All this is not to say that we should just doux it (although, gosh, Karthik is such a worthy foe); we can still participate in collective carrot-and-stick exercises such as carbon taxes and the like, and this appreciation of free trade would not have trumped the campaign to boycott apartheid South Africa, nor would it for apartheid Israel. But our default position should be to support free trade with Mercosur, in such a way that improves the lot of all humanity.

    I don’t know what to think about the concrete elements of the EU-Mercosur deal. The neoliberal play is to design legal structures that encase commerce, and a free trade deal risks subordinating the political to the economic. But unlike some of my comrades on the left, I am starting to think that we should want free trade with Bolivia, and that’s already quite a change from where I was 25 years ago.

    fin

Emily Saliers famously went seeking clarity; I fear I have brought little. We are still firmly in the world of the political, and like Starhawk, still need a framework of pre-thunk thoughts to orient us when some Draghi comes with a new four-score-page manifesto. Good luck and godspeed.

    But it is easier to find a solution if we cull the dimensionality of the problem. The neoliberals had their day, but perhaps these staves may be of use to you in exorcising their discursive domination; it is time we cut them off. Internationalist trade was ours anyway, and it should resume its place as a means to our ends.

    And what ends? As with prices, we discover them on the margin, in each political choice we make. Some are easy; some less so. And while a list like Starhawk’s is fine enough, I keep coming back to a simpler question: which side are you on? The sheriff or the union? ICE or the immigrant? Which side are you on? The question cuts fine. For the WTO in Seattle, to me it said to shut it all down. For EU-Mercosur, to me it says, “let’s talk.”


      Thibault Martin: TIL that Proxmox can provision Kubernetes Persistent Volumes

      news.movim.eu / PlanetGnome • 2 days ago • 2 minutes

    I wanted to dip my toes into Kubernetes for my homelab, but I knew I would need some flexibility to experiment. So instead of deploying k3s directly on my server, I

1. Installed a base Debian on my server, encrypting the disk with LUKS and using LVM to partition it.
2. Installed the Proxmox hypervisor on that base Debian.
3. Spun up a Debian VM and installed k3s on it.

Proxmox supports several storage plugins. It allows me, for example, to create LVM Logical Volumes for the VM disks.

This setup allows me to spin up fresh VMs for my experiments, all while leaving my production k3s intact. This is great, but it came with two problems:

    1. When I provision the VM for k3s I need to allocate it a massive amount of disk space. This is because k3s uses a local path provisioner to provision new Persistent Volumes directly on the VM.
    2. I can't take snapshots of the Persistent Volumes when doing backups. There's a risk that the data will change while I perform the backup.

    The situation looks like the following.

    On the LVM disk of the host, I create a VM for k3s. This VM has a virtual disk that doesn't rely on LVM, so it can't create LVM Logical Volumes. The local provisioner can only create volumes on the virtual disk, because it can't escape the VM to create volumes on the Proxmox host.

    Because the volumes are created on the virtual disk that doesn't rely on LVM, I can't use LVM snapshots to take snapshots of my volumes.

Why not LVM Thin?

One solution to the massive disk requirement could be LVM Thin: it would allow me to allocate a lot of space in theory, while in practice the storage only fills up as the VM uses it.

I don't want to use LVM Thin because it puts me at risk of overprovisioning. I could allocate more storage than I actually have, and it would be difficult to realize that my disks are filling up before it's too late.
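For reference, this is the kind of overprovisioning LVM Thin makes possible (volume group and volume names are illustrative):

    # Create a 100G thin pool, then a 200G "virtual" volume on top of it.
    # The volume only consumes real extents as data gets written, which is
    # exactly the silent-overcommit risk described above.
    lvcreate --type thin-pool -L 100G -n thinpool vg0
    lvcreate --thin -V 200G -n k3s-data vg0/thinpool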

My colleague Quentin mentioned the Proxmox CSI Plugin. It is a plugin that replaces k3s' local path provisioner. Instead of creating the Kubernetes Persistent Volumes inside the VM, it calls the Proxmox host, asks it to create an LVM Logical Volume, and binds it to a Persistent Volume in Kubernetes.

Using the Proxmox CSI plugin, the situation would look like this.

    It solves the two problems for me:

1. I now only need to provision a small disk for the k3s VM, since the Persistent Volumes are created outside of the VM.
2. Since Proxmox creates LVM Logical Volumes to provision the Persistent Volumes, I can either do an LVM snapshot from Proxmox or use Kubernetes' Volume Snapshot feature, with some caveats.

Setting up the Proxmox CSI Plugin for k3s can be a bit involved, but I'm writing a longer blog post about it.
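To give a feel for the Kubernetes side in the meantime, a StorageClass pointing PVCs at the plugin might look roughly like this. The provisioner name and parameter keys below are assumptions from memory of the plugin's docs, so double-check them against the project before use:

    # Hypothetical sketch of a StorageClass for the Proxmox CSI plugin.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: proxmox-lvm
    provisioner: csi.proxmox.sinextra.dev   # assumed provisioner name
    parameters:
      storage: local-lvm                    # Proxmox storage ID backing the volumes
    reclaimPolicy: Delete
    allowVolumeExpansion: true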


      Thibault Martin: TIL that GNOME has launched a fellowship program

      news.movim.eu / PlanetGnome • 2 days ago • 2 minutes

When open source nonprofits ask for donations, one common answer is "I only want to fund code, I don't want to fund anything else." GNOME has created a Fellowship Program to fund direct work on GNOME, a program entirely funded by donations. This is a testament to the Foundation's maturity, as it becomes a direct contributor to the project it stewards.

    Let's take a step back to address the code-only argument. It is a misguided reaction, but I can see where its proponents are coming from. In the world of proprietary software, you pay to get your software. You don't realize that this bundles the marketing, accounting, legal, and even HR costs.

    In the open source world, everyone can see who contributes code and how that code is built and packaged to create a software solution. A lot of things are not shown in git commits though. A few of them are:

    • What did it take to create the Human Interface Guidelines to have a coherent suite of applications? How many designers had to meet, what research did they have to do, did they have to meet in person?
    • What did it take to create the Developer Documentation to onboard new developers, help them make their first steps, and turn them into bigger contributors over the years?
• What did it take to build a website to advertise all the cool apps that follow the GNOME HIG?
    • What did it take to set up the infrastructure the code lives on, and that builds the software we all love?

    GNOME, like many other open source projects, is first and foremost a community. This is a group of people with diverse backgrounds, diverse opinions, who try to find common ground to solve problems. They don't always agree on how to solve problems, nor necessarily on what even is a problem in the first place.

    The role of The GNOME Foundation is to provide a place to support its community. Its role is to help its contributors find common ground. Its role is to give them the tools and opportunities to do so.

Some people still don't value this, and want The GNOME Foundation to be a vendor for GNOME. They want to fund developers to produce code, because that's a very visible metric.

For them, and for everyone who's ever wanted to give back to GNOME without knowing how, The GNOME Foundation has created a Fellowship Program. It will directly fund a person to work on what few people want to do in their spare time: maintenance.

    Round one focuses on sustainability: improving tooling, build systems, test infrastructure, automation, documentation, developer productivity, and ongoing maintainability. We are not funding feature development: the goal is for each fellowship to leave the project in a more efficient and sustainable state.

This is fueled entirely by our donations. If you want a direct pipeline between your money and GNOME development, this is it. Donate to GNOME: we can't afford not to have it when Big Tech has so much influence on our lives.


      GNOME Foundation News: Introducing the GNOME Fellowship program

      news.movim.eu / PlanetGnome • 3 days ago • 2 minutes


    Sustaining GNOME by directly funding contributors

    The GNOME Foundation is excited to announce the GNOME Fellowship program, a new initiative to fund community members working on the long-term sustainability of the GNOME project. We’re now accepting applications for our inaugural fellowship cycle, beginning around May 2026.

    GNOME has always thrived because of its contributors: people who invest their time and expertise to build and maintain the desktop, applications, and platform that millions rely on. But open source contribution often depends on volunteers finding time alongside other commitments, or on companies choosing to fund development amongst competing priorities. Many important areas of the project – the less glamorous but critical infrastructure work – can go underinvested.

    The fellowship program changes that. Thanks to the generous support of Friends of GNOME donors, we can now directly fund contributors to focus on what matters most for GNOME’s future. Programs such as this rely on ongoing support from our donors, so if you would like to see this and similar programs continue in future, please consider setting up a recurring donation .

    What’s a Fellowship?

A fellowship is funding for an individual to spend dedicated time over a 12-month period working in an area where they have expertise. Unlike traditional contracts with rigid scopes and deliverables, fellowships are built on trust. We’re backing people and the type of work they do, giving them the flexibility to tackle problems as they find them.

    This approach reduces bureaucratic overhead for both contributors and the Foundation. It lets talented people do what they do best: identify important problems and solve them.

    Focus: Sustainability

    For this first cycle, we’re seeking proposals focused on sustainability work that makes GNOME more maintainable, efficient, and productive for developers. This includes areas like build systems, CI/CD infrastructure, testing frameworks, developer tooling, documentation, accessibility, and reducing technical debt.

    We’re not funding new features this round. Instead, we want to invest in the foundations that make future development and contributions easier and faster. The goal is for each fellowship to leave the project in better shape than we found it.

    Apply Now

We have funding for at least one 12-month fellowship, paid between $70,000 and $100,000 USD per year based on experience and location. Applicants can propose either full-time or half-time work – half-time proposals may allow us to support multiple fellows.

    Applications are open to anyone with a track record in GNOME or relevant experience, with some restrictions due to US sanctions compliance. A GNOME Foundation Board committee will review applications and select fellows for this inaugural cycle.

Full details, application requirements, and an FAQ are available at fellowship.gnome.org. Applications close on 20th April 2026.

    Thank You to Friends of GNOME

This program is possible because of the individuals and organizations who support GNOME through Friends of GNOME donations. When we ask for donations, funding contributor work is exactly the kind of initiative we have in mind. If you’d like to sustain this program beyond its first year, consider becoming a Friend of GNOME. A recurring donation, no matter how small, gives us the predictability to expand this program and others like it.

    Looking Ahead

    This is a pilot program. We’re optimistic, and if it succeeds, we hope to sustain and grow the fellowship program in future years, funding more contributors across more areas of GNOME. We believe this model can become a sustainable way to invest in the project’s long-term health.

    We can’t wait to see your proposals!


      Christian Schaller: Using AI to create some hardware tools and bring back the past

      news.movim.eu / PlanetGnome • 4 days ago • 18 minutes

As I have talked about in a couple of blog posts now, I have been working a lot with AI recently as part of my day-to-day job at Red Hat, but I have also been spending a lot of evening and weekend time on it (sorry kids, pappa has switched to 1950s mode for now). One of the things I have spent time on is trying to figure out what the limitations of AI models are and what kind of use they can have for open source developers.

One thing to mention before I start talking about some of my concrete efforts: I have more and more come to the conclusion that AI is an incredible tool for hypercharging someone in their work, but I feel it tends to fall short in fully autonomous systems. In my experiments AI can do things many, many times faster than you ordinarily could; I am speaking specifically about coding here, which is what is most relevant for those of us in the open source community.

So, one annoyance I have had for years is that I get new hardware with features that are not easily available to me as a Linux user. I have therefore tried using AI to create such applications for some of my hardware, which includes an Elgato light and a Dell UltraSharp webcam.

I found with AI, and this is based on using Google Gemini, Claude Sonnet and Opus, and OpenAI Codex, that they all required me to direct and steer the AI continuously. If I let the AI just work on its own, more often than not it would end up going in circles, diverging from the route it was supposed to take, or taking shortcuts that made the output useless. On the other hand, if I kept on top of the AI and intervened to point it in the right direction, it could put things together for me in very short time spans.

My projects are also mostly what I would describe as end leaf nodes, the kind of projects that are already one-person projects in the community for the most part. There are extra considerations when contributing to bigger efforts, and a point I have seen made by others in the community is that you need to own the patches you submit: even if an AI helped you write a patch, you still need to ensure that what you submit is in a state where it is helpful and mergeable. I know some people feel that means you need to be capable of reviewing the proposed patch and ensuring it is clean and nice before submitting it, and I agree that if you expect your patch to get merged, that has to be the case. On the other hand, I don’t think AI patches are useless even if you are not able to validate them beyond ‘does it fix my issue’.

My friend and PipeWire maintainer Wim Taymans and I were talking a few years ago about what I described at the time as the problem of ‘bad quality patches’, long before AI-generated code was a thing. Wim’s response, which I have often thought about since, was that “a bad patch is often a great bug report”. That holds true for AI-generated patches too. If someone makes a patch using AI, a patch they don’t have the ability to review themselves, but they test it and it fixes their problem, it might function as a clearer bug report than a written description alone. Of course, they should be clear in the bug report that they don’t have the skills to review the patch themselves, but that they hope it can be useful as a tool for pinpointing what isn’t working in the current codebase.

Anyway, let me talk about the projects I made. They are all found on my personal website, Linuxrising.org, a website that I also used AI to update after not having touched it in years.

Elgato Light GNOME Shell extension

The first project I worked on is a GNOME Shell extension for controlling my Elgato Key Wifi Lamp. The Elgato lamp is basically meant for podcasters and people doing a lot of video calls to be able to easily configure light in their room to make a good recording. The lamp announces itself over mDNS, and thus can be controlled via Avahi. For Windows and Mac the vendor provides software to control their lamp, but unfortunately not for Linux.
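For context, the Key Light family typically exposes a small HTTP/JSON API on the device itself, which is what such an extension talks to after mDNS discovery. Assuming the commonly documented port and endpoint, toggling the lamp looks something like this:

    # Hedged sketch: port 9123 and /elgato/lights are the usual values;
    # the IP address is whatever mDNS resolution returns for your lamp.
    curl -X PUT http://192.168.1.50:9123/elgato/lights \
      -H 'Content-Type: application/json' \
      -d '{"numberOfLights":1,"lights":[{"on":1,"brightness":40,"temperature":250}]}'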

There had been GNOME Shell extensions for controlling the lamp in the past, but they had not been kept up to date and their feature set was quite limited. Anyway, I grabbed one of these old extensions and told Claude to update it for the latest version of GNOME. It took a few iterations of testing, but we eventually got there, and I had a simple GNOME Shell extension that could turn the lamp off and on and adjust hue and brightness. This was a quite straightforward process because I had code that had been working at some point; it just needed some adjustments to work with the current generation of GNOME Shell.

Once I had the basic version done I decided to take it a bit further and try to recreate the configuration dialog that the Windows application offers for the full feature set, which took quite a bit of back and forth with Claude. I found that if I ask Claude to re-implement from a screenshot, it recreates the functionality of the user interface first, meaning that it makes sure that if the screenshot has 10 buttons, you get a GUI with 10 buttons. You then have to iterate both on the UI design, for example telling Claude that I wanted a dark UI style to match the GNOME Shell, and on each bit of functionality in the UI. Most of the buttons in the UI didn’t really do anything at first, but when you go back and ask Claude to add specific functionality per button, it is usually able to do so.

Elgato Light Settings Application

This was probably a fairly easy thing for the AI because all the functionality of the lamp could be queried over Avahi; there were no ‘secret’ USB registers to set or things like that.
Since the application was meant to be part of the GNOME Shell extension, I didn’t want it to have any dependency requirements that the Shell extension itself didn’t have, so I asked Claude to write this application in JavaScript, and I have to say, so far I haven’t seen any major differences in the AI’s ability to generate different languages. The application now reproduces most of the functionality of the Windows application. Looking back, I think it took me a couple of days in total to put this tool together.

Dell UltraSharp Webcam 4K

Dell UltraSharp 4K settings application for Linux

The second application on the list is a controller application for my Dell UltraSharp Webcam 4K UHD (WB7022). This is a high-end webcam that I have been using for a while, comparable to something like the Logitech BRIO 4K. It has mostly worked since I got it, with the generic UVC driver, and I have been using it for my Google Meet calls and similar, but since there was no native Linux control application I could not easily access a lot of the camera’s features. To address this I downloaded the Windows application installer, installed it under Windows, and took a bunch of screenshots showcasing all the features of the application. I then fed the screenshots into Claude and told it I wanted a GTK version of this application for Linux. I originally wanted to have Claude write it in Rust, but after hitting some issues in the PipeWire Rust bindings I decided to just use C instead.

It took me probably 3-4 days of intermittent work to get this application working, and Claude turned out to be really good at digging into Windows binaries and finding things like USB property values. Claude was also able to analyze the screenshots and figure out the features the application needed to have. It was a lot of trial and error writing the application, but one way I was able to automate it was by building a screenshot option into the application, allowing it to programmatically take screenshots of itself. That allowed me to tell Claude to try fixing something and then check the screenshot to see if it worked, without me having to interact with the prompt. Also, to get the user interface looking nicer, once I had all the functionality in I asked Claude to tweak the user interface to follow the GNOME Human Interface Guidelines, which greatly improved the quality of the UI.

At this point my application should have almost all the features of the Windows application. Since it uses PipeWire underneath, it is also tightly integrated with the PipeWire media graph, allowing you to see it connect and work with your applications in PipeWire patchbay tools like Helvum. The remaining features are software features of Dell’s application, like background removal and so on, but I think that if I decided to implement those, it should be as a standalone PipeWire tool that can be used with any camera, not one tied to this specific model.

    Red Hat Planet

    Red Hat Vulkan Globe

The application shows the world’s Red Hat offices and includes links to the latest Red Hat news.


The next application on my list is called Red Hat Planet. It is mostly a fun toy, but I made it partly to revisit the Xtraceroute modernisation I blogged about earlier. As I mentioned in that blog, Xtraceroute, while cute, isn’t really very useful IMHO, since the way the modern internet works your packets rarely jump around the world. Anyway, as people pointed out after I posted about the port, it wasn’t an actual Vulkan application; it was a GTK+ application using the GTK+ Vulkan backend. The globe animation itself was all software rendered.

I decided that if I was going to revisit the Vulkan problem, I wanted to use a different application idea than traceroute. The idea I had was once again a 3D-rendered globe, but this one reading the coordinates of Red Hat’s global offices from a file and rendering them on the globe, and alongside that providing clickable links to recent Red Hat news items. So once again maybe not the world’s most useful application, but I thought it was a cute idea, and hopefully it would allow me to create it using actual Vulkan rendering this time.

Creating this turned out to be quite the challenge (although it seems to have gotten easier since I started this effort), with Claude Opus 4.6 being more capable at writing Vulkan code than Claude Sonnet, Google Gemini or OpenAI Codex were when I started trying to create this application.
When I started this project I had to keep extremely close tabs on the AI and what it was doing in order to force it to keep working on this as a Vulkan application, as it kept wanting to simplify with software rendering or OpenGL, and sometimes would start down that route without even asking me. That hasn’t happened more recently, so maybe that was a problem of the AI of five months ago.

I also discovered as part of this that rendering Vulkan inside a GTK4 application is far from trivial and would ideally need the GTK4 developers to create such a widget to get rendering timings and similar correct. It is one of the few times I have had Claude outright say that writing a widget like that was beyond its capabilities (I haven’t tried again, so I don’t know if I would get the same response today). So I moved the application to SDL3 first, which worked, in that I got a spinning globe with red dots on it, but it came with its own issues, in the sense that SDL is not a UI toolkit as such. So while I got the globe rendered and working, the AI struggled badly with the news area when using SDL.

So I ended up trying to port the application to Qt, which again turned out to be non-trivial in terms of how much trial and error it took to get right. In my mind I had a working globe using Vulkan; how hard could it be to move it from SDL3 to Qt? But there were a million rendering issues. In fact, I ended up using the Qt Vulkan rendering example as a starting point and then ‘porting’ the globe over bit by bit, testing at each step, to finally get a working version. The current version is a Vulkan+Qt app and it basically works, although the planet does not seem to spin correctly on AMD systems at the moment, while it works well on Intel and NVIDIA systems.

    WMDock

WMDock fullscreen with config application.


This project came out of a chat with Matthias Clasen over lunch, where I mused about whether Claude would be able to bring the old Window Maker dockapps to GNOME and Wayland. It turns out the answer is yes, although the method of doing so changed as I worked on it.

My initial thought was for Claude to create a shim that the old dockapps could be compiled against without any changes. That worked, but then I had a ton of dockapps showing up in things like the alt+tab menu. It also required me to restart my GNOME Shell session all the time as I was testing the extension that houses the dockapps. In the end I decided that since a lot of the old dockapps don’t work with modern Linux versions anyway, and thus would need to be actively ported, I should accept shipping the dockapps with the tool and port them to modern Linux technologies. This worked well and is what I currently have in the repo. I think the wildest port was taking the old webcam dockapp from V4L1 to PipeWire, although updating the sound controller from ESD to PulseAudio was also a generational jump.

    XMMS resuscitated

XMMS brought back to life


The last effort was reviving the old XMMS media player. I had tried asking Claude to do this for months and it kept failing, but with Opus 4.6 it plowed through and had something working in a couple of hours, with no input from me beyond kicking it off. This was a big lift, moving it from GTK2 and Esound to GTK4, GStreamer and PipeWire. One thing I realized is that a challenge with bringing an old app back is that, since the themeable UI is a big part of this specific application, adding new features is a little kludgy. Anyway, I did set it up to be able to use network speakers through PipeWire, and you can also import your Spotify playlists and play them, although you need to run the Spotify application in the background to be able to play the sound on your local device.

    Monkey Bubble
    Monkey Bubble game
Monkey Bubble was a game created in the heyday of GNOME 2, and while I always thought it was a well-made little game, it had never been updated to newer technologies. So I asked Claude to port it to GTK4 and use GStreamer for audio. This port was fairly straightforward, with Claude having few problems with it. I also asked Claude to add high scores, using the libmanette library, and network game discovery with Avahi. So, some nice little improvements.

All the applications are available either as Flatpaks or Fedora RPMs through the GitLab project page, so I hope people enjoy these applications and tools, and enjoy the blasts from the past as much as I did.

Worries about Artificial Intelligence

When I speak to people, both inside Red Hat and outside in the community, I often come across negativity or even anger towards artificial intelligence in the coding space. And to be clear, I too worry about where things could be heading and how they will affect my livelihood, so I am not unsympathetic to those worries at all. I probably worry about these things at least a few times a day. At the same time, I don’t think we can hide from or avoid this change; it is happening with or without us. We have to adapt to a world where this tool exists, just like our ancestors adapted to jobs changing due to industrialization and science before us.

So do I worry about the future? Yes, I do. Do I worry about how I might personally be affected by this? Yes, I do. Do I worry about how society might change for the worse due to this? Yes, I do. But I also remind myself that I don’t know the future, and that people have found ways to move forward before; society has survived and thrived. What I can control is staying on top of these changes myself and taking advantage of them where I can, and that is my recommendation to the wider open source community too: leverage these tools to move open source forward, while putting our weight on the scale towards the best practices and policies around artificial intelligence.

    The Next Test and where AI might have hit a limit for me.

All these previous efforts taught me a lot of tricks and helped me understand how I can work with an AI agent like Claude, but especially after the success with the webcam I decided to up the stakes and see if I could use Claude to help me create a driver for my Plustek OpticFilm 8200i scanner. I have zero background in any kind of driver development, and probably less than zero in the field of scanner drivers specifically. So I ended up going down a long row of dead ends on this journey, and to this day I have not been able to get a single scan out of the scanner that even remotely resembles the images I am trying to scan.

My idea was to have Claude analyse the Windows and Mac drivers and build me a SANE driver based on that, which turned out to be horribly naive and led nowhere. One thing I realized is that I would need to capture USB traffic to help Claude contextualize some of the findings it had from looking at the Windows and Mac drivers. I started out with Wireshark, feeding Claude the Wireshark capture logs. Claude quite soon concluded that the Wireshark logs weren’t good enough and that I needed lower-level traffic capture. Buying a USB packet analyzer isn’t cheap, so I had the idea that I could use one of the ARM development boards floating around the house as a USB relay, allowing me to perfectly capture the USB traffic. With some work I did manage to get my LibreComputer Solitude AML-S905D3-CC ARM board going, setting it in device mode, and I had a usb-relay daemon running on the board. After a lot of back and forth, and even at one point trying to ask Claude to implement a missing feature in the USB kernel stack, I realized this would never work, and I ended up ordering a Beagle USB 480 hardware analyzer.

At about the same time I came across the chipset documentation for the Genesys Logic GL845 chip in the scanner. I assumed that between my new USB analyzer and the chipset docs it would be easy going from there, but so far no. I even had Claude decompile the Windows driver using Ghidra and then try to extract the needed information from the decompiled code.
I also bought a network-controlled electric outlet so that Claude can cycle the scanner’s power on its own.

So the problem here is that with zero scanner driver knowledge I don’t even know what I should be looking for, or where I should point Claude, so I kept trying to brute-force it by trial and error. I have managed to make SANE detect the scanner, and I got motor and lamp control going, but that is about it. I can hear the scanner motor running when I ask for a scan, but I don’t know if it moves correctly. I can see light turning on and off inside the scanner, but again I don’t know if it happens at the correct times and for the correct durations. And Claude of course has no way of knowing either, relying on me to tell it whether something seems to have improved.

I have now used Claude to create two tools for Claude to use: one using a camera to detect what is happening with the light inside the scanner, and the other recording sound, to compare the sound this driver makes with the sound of a working scan from the macOS application. I don’t know if this will take me to the promised land eventually, but so far I consider my scanner driver attempt a giant failure. At the same time, I do believe that if someone actually skilled in scanner driver development were doing this, they could have guided Claude to do the right things and would probably have had a working driver by now.

So I don’t know if I have hit the kind of thing that will always be hard for an AI, since it has to interact with things in the real world, or if newer versions of Claude, Gemini or Codex will suddenly get past a threshold and make this seem easy, but this is where things stand for me at the moment.


      Jussi Pakkanen: Everything old is new again: memory optimization

      news.movim.eu / PlanetGnome • 4 days ago • 2 minutes

    At this point in history, AI sociopaths have purchased all the world's RAM in order to run their copyright infringement factories at full blast. Thus the amount of memory in consumer computers and phones seems to be going down. After decades of not having to care about memory usage, reducing it has very much become a thing.

    Relevant questions to this state of things include a) is it really worth it and b) what sort of improvements are even possible. The answers to these depend on the task and data set at hand. Let's examine one such case. It might be a bit contrived, unrepresentative and unfair, but on the other hand it's the one I already had available.

Suppose you have to write a script that opens a text file, parses it as UTF-8, splits it into words according to white space, counts the number of times each word appears, and prints the words and counts in decreasing order (most common first).

    The Python baseline

This sounds like a job for Python. Indeed, an implementation takes fewer than 30 lines of code.
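A minimal sketch of such a script (not necessarily the post's exact implementation) could be:

    # Count word frequencies in a UTF-8 text file, most common first.
    import sys
    from collections import Counter

    def main(path):
        with open(path, encoding="utf-8") as f:
            counts = Counter(f.read().split())
        for word, n in counts.most_common():
            print(n, word)

    if __name__ == "__main__":
        main(sys.argv[1])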

Measured on a small text file, peak memory consumption is 1.3 MB. At this point you might want to stop reading and guess how much memory a native-code version of the same functionality would use.

    The native version

    A fully native C++ version using Pystd requires 60 lines of code to implement the same thing. If you ignore the boilerplate, the core functionality fits in 20 lines. The steps needed are straightforward:

1. Mmap the input file to memory.
2. Validate that it is UTF-8.
3. Convert the raw data into a UTF-8 view.
4. Split the view into words lazily.
5. Compute the result into a hash table whose keys are string views, not strings.

The main advantage of this is that there are no string objects. The only dynamic memory allocations are for the hash table and the final vector used for sorting and printing. All text operations use string views, which are basically just a pointer plus a size.

In code, the approach looks roughly like the following sketch, written here with standard-library types (string_view, unordered_map) where the original uses their Pystd equivalents:
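    // Approximation of the approach with standard-library types (the
    // original uses Pystd equivalents). Error handling and the UTF-8
    // validation step are elided to keep the sketch short.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #include <algorithm>
    #include <cstdio>
    #include <string_view>
    #include <unordered_map>
    #include <vector>

    int main(int, char **argv) {
        const int fd = open(argv[1], O_RDONLY);
        struct stat st;
        fstat(fd, &st);
        // Step 1: mmap the input; every view below points into this mapping.
        const char *data = static_cast<const char *>(
            mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
        const std::string_view text(data, st.st_size);

        // Steps 4-5: split lazily and count into a table keyed by views.
        std::unordered_map<std::string_view, size_t> counts;
        const char *ws = " \t\r\n";
        size_t pos = 0;
        while (pos < text.size()) {
            const size_t start = text.find_first_not_of(ws, pos);
            if (start == std::string_view::npos) break;
            size_t end = text.find_first_of(ws, start);
            if (end == std::string_view::npos) end = text.size();
            ++counts[text.substr(start, end - start)];
            pos = end;
        }

        // Sort (word, count) pairs by decreasing count and print.
        std::vector<std::pair<std::string_view, size_t>> sorted(counts.begin(), counts.end());
        std::sort(sorted.begin(), sorted.end(),
                  [](const auto &a, const auto &b) { return a.second > b.second; });
        for (const auto &[word, n] : sorted)
            std::printf("%zu %.*s\n", n, static_cast<int>(word.size()), word.data());

        munmap(const_cast<char *>(data), st.st_size);
        close(fd);
    }

Because the map keys are views into the mmap'd file, no word is ever copied; the hash table and the final sort vector are the only allocations.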

Measured the same way, peak consumption is ~100 kB in this implementation: only 7.7% of the amount of memory required by the Python version.

    Isn't this a bit unfair towards Python?

    In a way it is. The Python runtime has a hefty startup cost but in return you get a lot of functionality for free. But if you don't need said functionality, things start looking very different.

But we can make this comparison even more unfair towards Python. If you look at the memory consumption graph you'll quite easily see that 70 kB is used by the C++ runtime. It reserves a bunch of memory up front so that it can do stack unwinding and exception handling even when the process is out of memory. It should be possible to build this code without exception support, in which case the total memory usage would be a mere 21 kB. Such a version would yield a 98.4% reduction in memory usage.