
      This Week in GNOME: #210 Periodic Updates

      news.movim.eu / PlanetGnome • 1 August • 4 minutes

    Update on what happened across the GNOME project in the week from July 25 to August 01.

    GNOME Core Apps and Libraries

    Libadwaita

    Building blocks for modern GNOME apps using GTK4.

    Alice (she/her) 🏳️‍⚧️🏳️‍🌈 announces

    last cycle, libadwaita gained a way to query the system document font, even though it was the same as the UI font. This cycle it has been made larger (12pt instead of 11pt) and we have a new .document style class that makes the specified widget use it (as well as increasing line height) - intended to be used for app content such as messages in a chat client.

    Meanwhile, the formerly useless .body style class also features increased line height now (along with .caption ) and can be used to make labels such as UI descriptions more legible compared to default styles. Some examples of where it’s already used:

    • Dialog body text
    • Preferences group description
    • Status page description
    • What’s new, legal and troubleshooting sections in about dialogs
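
    As a rough sketch of how an app might opt into these styles (assuming a PyGObject app with GTK4 and libadwaita; the labels here are made up for illustration):

    import gi
    gi.require_version("Gtk", "4.0")
    gi.require_version("Adw", "1")
    from gi.repository import Adw, Gtk

    Adw.init()  # ensures the libadwaita stylesheet providing .document/.body is loaded

    # Hypothetical chat-message label: document font plus increased line height
    message = Gtk.Label(label="Hey, are we still on for tonight?", wrap=True, xalign=0)
    message.add_css_class("document")

    # Hypothetical description label: .body makes it more legible than the default style
    description = Gtk.Label(label="Automatically lock the screen after inactivity.", wrap=True, xalign=0)
    description.add_css_class("body")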

    GNOME Circle Apps and Libraries

    revisto announces

    Drum Machine 1.4.0 is out! 🥁

    Drum Machine, a GNOME Circle application, now supports more pages (bars) for longer beats, mobile improvements, and translations in 17 languages! You can try it out and get creative with it.

    What’s new:

    • Extended pattern grid for longer, more complex rhythms
    • Mobile-responsive UI that adapts to different screen sizes
    • Global reach with translations including Farsi, Chinese, Russian, Arabic, Hebrew, and more

    If you have any suggestions or ideas, you can always contribute and make Drum Machine better; all ideas (even better or more default presets) are welcome!

    Available on Flathub: https://flathub.org/apps/io.github.revisto.drum-machine

    Happy drumming! 🎶

    Screenshot of Drum Machine 1.4.0

    Third Party Projects

    lo reports

    After a while of working on and off, I have finally released the first version of Nucleus, a periodic table app for searching and viewing various properties of the chemical elements!

    You can get it on Flathub: https://flathub.org/apps/page.codeberg.lo_vely.Nucleus

    Screenshot of the Nucleus electron shell dialog

    Screenshot of the Nucleus periodic table view

    francescocaracciolo says

    Newelle 1.0.0 has been released! A huge release for this AI assistant for GNOME.

    • 📱 Mini Apps support! Extensions can now show custom mini apps on the sidebar
    • 🌐 Added integrated browser Mini App: browse the web directly in Newelle and attach web pages
    • 📁 Improved integrated file manager , supporting multiple file operations
    • 👨‍💻 Integrated file editor : edit files and codeblocks directly in Newelle
    • 🖥 Integrated Terminal mini app: open the terminal directly in Newelle
    • 💬 Programmable prompts : add dynamic content to prompts with conditionals and random strings
    • ✍️ Add ability to manually edit chat name
    • 🪲 Minor bug fixes
    • 🚩 Added support for multiple languages for Kokoro TTS and Whisper.CPP
    • 💻 Run HTML/CSS/JS websites directly in the app
    • ✨ New animation on chat change

    Get it on FlatHub

    Fractal

    Matrix messaging app for GNOME written in Rust.

    Kévin Commaille announces

    Want to get a head start and try out Fractal 12 before its release? That’s what this Release Candidate is for! New since 12.beta:

    • The upcoming room version 12 is supported, with the special power level of room creators
    • Requesting invites to rooms (aka knocking) is now possible
    • Clicking on the name of the sender of a message adds a mention to them in the composer

    As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

    It is available to install via Flathub Beta, see the instructions in our README .

    As the version implies, it should be mostly stable and we expect to only include minor improvements until the release of Fractal 12.

    If you want to join the fun, you can try to fix one of our newcomers issues . We are always looking for new contributors!

    GNOME Websites

    Guillaume Bernard reports

    After months of work and testing, it’s now possible to connect to Damned Lies using third-party providers. You can now use your GNOME SSO account as well as other common providers used by translators: Fedora, Launchpad, GitHub and GitLab.com. Login/password authentication has been disabled and email addresses are validated for security reasons.

    Meanwhile, under the hood, Damned Lies has been modernized: we upgraded to Fedora 42, which provides a fresher gettext (0.23), and we moved to Django 5.2 LTS and Python 3.13. Users can expect performance improvements, as we replaced Apache mod_wsgi with gunicorn, which is said to be more CPU- and RAM-efficient.

    The next step is working on async git pushes and merge request support. Help is very welcome on these topics!

    Shell Extensions

    Just Perfection reports

    The GNOME Shell 49 port guide for extensions is ready! We are now accepting GNOME Shell 49 extension packages on EGO . Please join us on the GNOME Extensions Matrix Channel if you have any issues porting your extension. Also, thanks to Florian Müllner, gnome-extensions has added a new upload command for GNOME Shell 49, making it easier to upload your extensions to EGO. You can also use it with CI.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

      Matthew Garrett: Secure boot certificate rollover is real but probably won't hurt you

      news.movim.eu / PlanetGnome • 31 July • 8 minutes

    LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.

    First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.
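
    As a purely conceptual sketch (illustrative Python, not anything the firmware actually runs), the trust decision looks roughly like this:

    def is_trusted(binary_hash, cert_chain, db, dbx_hashes, dbx_certs):
        # Revoked outright if the binary's hash has been added to dbx.
        if binary_hash in dbx_hashes:
            return False
        # Revoked if any certificate in the signing chain is in dbx.
        if any(cert in dbx_certs for cert in cert_chain):
            return False
        # Otherwise trusted as long as the chain reaches a certificate in db;
        # the firmware doesn't care how many intermediates sit in between.
        return any(cert in db for cert in cert_chain)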

    That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.

    What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.

    This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.

    The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.

    First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual, intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting binaries signed with certificates that have expired, and yet things keep working. Why?

    Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.

    The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it . In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.

    So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?

    Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.

    How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.

    If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.

    Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.

    [1] (there's also a separate revocation mechanism called SBAT which I wrote about here , but it's not relevant in this scenario)

    [2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays

    [3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.

    [4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device

    [5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then

    [6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim

    [7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs

      Christian Schaller: Artificial Intelligence and the Linux Community

      news.movim.eu / PlanetGnome • 29 July • 14 minutes

    I have wanted to write this blog post for quite some time, but have been unsure about the exact angle for it. I think I have found that angle now: rooting the post in a very tangible, concrete example.

    So the reason I wanted to write this was that I do feel there is a palpable skepticism and negativity towards AI in the Linux community, and I understand that there are societal implications that worry us all, like how deep fakes have the potential to upend a lot of things, from news distribution to court proceedings, or how malign forces can use AI to drive narratives in social media, as if social media wasn’t toxic enough as it is. But for open source developers like us in the Linux community there are also, I think, deep concerns about tooling that cuts deep into something close to the heart of our community: writing code and being skilled at writing code. I hear and share all those concerns, but at the same time, having spent time over the last weeks using Claude.ai, I do feel it is not something we can afford not to engage with. I know people have probably used a lot of different AI tools in the last year, some being more cute than useful, others being somewhat useful, and others being interesting improvements to your Google search, for instance. I shared a lot of those impressions, but using Claude this last week has opened my eyes to what AI engines are going to be capable of going forward.

    So my initial test was writing a Python application for internal use at Red Hat, basically connecting to a variety of sources, pulling data, and putting together reports; typical management fare. How simple it was impressed me though. I think most of us who have had to pull data from a new source know how painful it can be, with issues ranging from missing to outdated or hard-to-parse API documentation. I think a lot of us then spend a lot of time experimenting to figure out the right API calls to make in order to pull the data we need. Well, Claude was able to give me Python scripts that pulled that data right away. I still had to spend some time with it to fine-tune the data being pulled and ensure we pulled the right data, but I did it in a fraction of the time I would have spent figuring that stuff out on my own. The one data source Claude struggled with was Fedora’s Bodhi; once I pointed it to the URL with the latest documentation, it figured out that it would be better to use the Bodhi client library to pull data, and once it had that figured out it was clear sailing.

    So, coming off pretty impressed by that experience, I wanted to understand whether Claude would be able to put together something programmatically more complex, like a GTK+ application using Vulkan. I thought about what would be a good example of such an application, and I also figured it would be fun to find something really old and ask Claude to help me bring it into the current age. Then I suddenly remembered xtraceroute, an old application originally written with GTK1 and OpenGL that shows your traceroute on a 3D globe.

    Screenshot of original xtraceroute

    Screenshot of the original Xtraceroute application

    I went looking for it and found that while it had been updated to GTK2 since I last looked at it, it had not been touched in 20 years. So I thought: this is a great test case. I grabbed the code and fed it into Claude, asking Claude to give me a modern GTK4 version of this application using Vulkan. Ok, so how did it go? Well, it ended up being an iterative effort, with a lot of back and forth between myself and Claude. One nice feature Claude has is that you can upload screenshots of your application and Claude will use them to help you debug. Thanks to that I got a long list of screenshots showing how this application evolved over the course of the day I spent on it.

    First output of Claude

    This screenshot shows Claude’s first attempt at transforming the 20-year-old xtraceroute application into a modern one using GTK4 and Vulkan, also adding a Meson build system. My prompt to create this was feeding in the old code and asking Claude to come up with a GTK4 and Vulkan equivalent. As you can see, the GTK4 UI is very simple, but ok as it is. The rendered globe leaves something to be desired though. I assume the old code had some 2D fallback code, so Claude latched onto that and focused on trying to use the Cairo API to recreate this application, despite me telling it I wanted a Vulkan application. What we ended up with was a 2D circle that I could spin around like a wheel of fortune. The code did have some Vulkan stuff, but defaulted to the Cairo code.

    Second attempt image

    Second attempt at updating this application

    Anyway, I fed the screenshot of my first version back into Claude and said that the image was not a globe, it was missing the texture, and the interaction model was more like a wheel of fortune. As you can see, the second attempt did not fare any better; in fact we went from circle to square. This was also the point where I realized that I hadn’t uploaded the textures into Claude, so I had to tell it to load earth.png from the local file repository.

    Third attempt by Claude

    Third attempt from Claude.

    Ok, so I fed my second screenshot back into Claude and pointed out that it was no globe, in fact it wasn’t even a circle, and the texture was still missing. With me pointing out that it needed to load the earth.png file from disk, it came back with the texture loading. Well, I really wanted it to be a globe, so I said thank you for loading the texture, now do it on a globe.

    This is the output of the 4th attempt. As you can see, it did bring back a circle, but the texture was gone again. At this point I also decided I didn’t want Claude to waste any more time on the Cairo code; this was meant to be a proper 3D application. So I told Claude to drop all the Cairo code and instead focus on making a Vulkan application.

    Fifth attempt

    So now we finally had something that started looking like something, although it was still a circle, not a globe, and it had that weird division-into-four thing on the globe. Anyway, I could see it using Vulkan now and it was loading the texture, so I felt like we were making some decent forward progress. I wrote a longer prompt describing the globe I wanted and how I wanted to interact with it, and this time Claude did come back with Vulkan code that rendered this as a globe; thus I didn’t end up screenshotting it, unfortunately.

    So with the working globe now in place, I wanted to bring in the day/night cycle from the original application. I asked Claude to load the night texture and use it as an overlay to get that day/night effect. I also asked it to calculate the position of the sun relative to the earth at the current time, so that it could overlay the texture in the right location. As you can see, Claude did a decent job of it, although the colors were broken.

    7th attempt

    So I kept fighting with the color for a bit. Claude could see it was rendering brown, but could not initially figure out why. I could tell the code was doing things mostly right, so I also asked it to look at some other things; for instance, I realized that when I tried to spin the globe it just twisted the texture. We got that fixed, and I also got Claude to create some test scripts that helped us figure out that the color issue was an RGB vs BGR issue; as soon as we understood that, Claude was able to fix the code to render colors correctly. I also had a few iterations trying to get the scaling and mouse interaction behaving correctly.

    10th attempt

    So at this point I had probably worked on this for 4-5 hours; the globe was rendering nicely and I could interact with it using the mouse. The next step was adding the traceroute lines back. By default Claude had just put in code to render some small dots at the hop points, not draw the lines. The old method for getting the geocoordinates was also gone, so I asked Claude to help me find some current services, which it did, and once I picked one it gave me, on the first try, code that was able to request the geolocation of the IP addresses it got back. To polish it up I also asked Claude to make sure we drew the lines following the globe’s curvature instead of just drawing straight lines.

    Final version

    Final version of the updated Xtraceroute application. It mostly works now, but I did realize why I always thought this was a fun idea yet less interesting in practice: you often don’t get very good traceroutes back, probably due to websites being cached or hosted globally. But I felt that I had proven that, with a day’s work, Claude was able to help me bring this old GTK application into the modern world.

    Conclusions

    So I am not going to argue that Xtraceroute is an important application that deserved to be saved; in fact, while I feel the current version works and proves my point, I also lost motivation to polish it up due to the limitations of tracerouting. But the code is available for anyone who finds it worthwhile.

    But this wasn’t really about Xtraceroute. What I wanted to show here is how someone lacking C and Vulkan development skills can actually use a tool like Claude to put together a working application, even one using more advanced stuff like Vulkan, which I know many besides me would find daunting. I also found Claude really good at producing documentation and architecture documents for your application. It was also able to give me a working Meson build system and create all the desktop integration files for me, like the .desktop file, the metainfo file and so on. For the icons I ended up using Gemini, as Claude does not do image generation at this point, although it was able to take a PNG file and create an SVG version of it (although not a perfect likeness to the original PNG).

    Another thing I want to say is that, the way I think about this, it is not that it makes coding skills less valuable. AIs can do amazing things, but you need to keep a close eye on them to ensure the code they create actually does what you want, and that it does it in a sensible manner. For instance, in my reporting application I wanted to embed a PDF file, and Claude’s initial thought was to bring in WebKit to do the rendering. That would have worked, but it would have added a very big and complex dependency to my application, so I had to tell it that it could just use libpoppler to do it, something Claude agreed was a much better solution. The bigger the codebase, the harder it also becomes for the AI to deal with it, but I think in those circumstances what you can do is use the AI to give you sample code for the functionality you want in the programming language you want, and then work on incorporating that into your big application.

    The other part here, of course, in terms of open source is how contributors and projects should deal with this. I know there are projects where AI-generated CVEs or patches are drowning them, and that helps nobody. But I think if we see AI as a developer’s tool, and the developer using the tool as responsible for the code generated, then that mindset can help us navigate this. So if you used an AI tool to create a patch for your favourite project, it is your responsibility to verify that patch before sending it in, and by that I don’t mean just verifying the functionality it provides, but that the code is clean and readable and follows the coding standards of said upstream project. Maintainers on the other hand can use AI to help them review and evaluate patches more quickly, so this can be helpful on both sides of the equation. I also found Claude and other AI tools like Gemini pretty good at generating test cases for the code they make, so this is another area where open source patch contributions can improve, by improving test coverage for the code.

    I do also believe there are many areas where projects can greatly benefit from AI. For instance, in the GNOME project a constant challenge for extension developers has been keeping their extensions up to date; I do believe a tool like Claude or Gemini should be able to update GNOME Shell extensions quite easily. So maybe having a service which tries to provide a patch each time there is a GNOME Shell update might be a great help there. At the same time, having an AI take a look at updated extensions and give a first review of the update might help reduce the load on the people doing code reviews on extensions and help flag problematic extensions.

    I know that for a lot of cases and situations uploading your code to a web service like Claude, Gemini or Copilot is not something you want to or can do. I know privacy is a big concern for many people in the community. My team at Red Hat has been working on a code assistant tool using the IBM Granite model, called Granite.code. What makes Granite different is that it relies on having the model run locally on your own system, so you don’t send your code or data off somewhere else. This of course has great advantages in terms of improving privacy and security, but it has challenges too. The top-end AI models out there at the moment, of which Claude is probably the best at the time of writing this blog post, are running on hardware with vast resources in terms of computing power and memory. Most of us do not have those kinds of capabilities available at home, so the model size and performance will be significantly lower. So at the moment, if you are looking for a great open source tool to use with VS Code to do things like code completion, I recommend giving Granite.code a look. If you on the other hand want to do something like I have described here, you need to use something like Claude, Gemini or ChatGPT. I do recommend Claude, not just because I believe them to be the best at it at the moment, but also because they are a company trying to hold themselves to high ethical standards. Over time we hope to work with IBM and others in the community to improve local models, and I am also sure local hardware will keep improving, so over time the gap between the experience you can get with a local model on your laptop and the big cloud-hosted models should shrink. There is also the middle-of-the-road option that will become increasingly viable, where you have a powerful server in your home or at your workplace that can at least host a mid-size model, and then you connect to that over your LAN. I know IBM is looking at that model for the next iteration of Granite models, where you can choose from a wide variety of sizes: some small enough to run on a laptop, others of a size where a strong workstation or small server can run them, and of course the biggest models for people able to invest in top-of-the-line hardware to run their AI.

    Also, the AI space is moving blazingly fast; if you are reading this 6 months from now, I am sure the capabilities of online and local models will have changed drastically already.

    So to all my friends in the Linux community, I ask you to take a look at AI and what it can do, and then let’s work together on improving it, not just in terms of capabilities, but also in trying to figure out things like the societal challenges around it and the sustainability concerns that I know a lot of us have.

    What’s next for this code

    As I mentioned, while I felt I got it to a point where I proved to myself it worked, I am not planning on working any more on it. But I did make a cute little application for internal use that shows a spinning globe with all global Red Hat offices showing up as little red lights, and which pulls Red Hat news at the bottom. Not super useful either, but I was able to use Claude to refactor the globe rendering code from xtraceroute into this in just a few hours.

    Red Hat Globe

    Red Hat Offices Globe and news.

      Steven Deobald: 2025-07-25 Foundation Update

      news.movim.eu / PlanetGnome • 29 July • 2 minutes

    ## Annual Report

    The 2025 Annual Report is all-but-baked. Deepa and I would like to be completely confident in the final financial figures before publishing. The Board has seen these final numbers, during their all-day work day two days ago. I heard from multiple Board members that they’re ecstatic with how Deepa presented the financial report. This was a massive amount of work for Deepa to contribute in her first month volunteering as our new Treasurer and we all really appreciate the work that she’s put into this.

    ## GUADEC and Gratitude

    I’ve organized large events before and I know in my bones how difficult and tiresome it can be. But I don’t think I quite understood the scale of GUADEC. I had heard many times in the past three months “you just have to experience GUADEC to understand it” but I was quite surprised to find the day before the conference so intense and overwhelming that I was sick in bed for the entire first day of the conference — and that’s as an attendee!

    The conference takes the firehose of GNOME development and brings it all into one place. So many things happened here, I won’t attempt to enumerate them all. Instead, I’d like to talk about the energy.

    I have been pretty disoriented since the moment I landed in Italy but, even in my stupor, I was carried along by the energy of the conference. I could see that I wasn’t an exception — everyone I talked to seemed to be sleeping four hours a night but still highly energized, thrilled to take part, to meet their old friends, and to build GNOME together. My experience of the conference was a constant stream of people coming up to me, introducing themselves, telling me their stories, and sharing their dreams for the project. There is a real warmth to everyone involved in GNOME and it radiates from people the moment you meet them. You all made this a very comfortable space, even for an introvert like me.

    There is also incredible history here: folks who have been around for 5 years, 15 years, 25 years, 30 years. Lifelong friends like that are rare and it’s special to witness, as an outsider.

    But more important than anything I have to say about my experience of the conference, I want to proxy the gratitude of everyone I met. Everyone I spoke to, carried through the unbroken days on the energy of the space, kept telling me what a wonderful GUADEC it was. “The best GUADEC I’ve ever been to.” / “It’s so wonderful to meet the local community.” / “Everything is so smooth and well organized.”

    If you were not here and couldn’t experience it yourself, please know how grateful we all are for the hard work of the staff and volunteers. Kristi, for tirelessly managing the entire project and coordinating a thousand variables, from the day GUADEC 2024 ended until the moment she opened GUADEC 2025. Rosanna, for taking time away from all her regular work at the Foundation to give her full attention to the event. Pietro, for all the local coordination before the conference and his attention to detail throughout the conference. And the local/remote volunteer team — Maria, Deepesha, Ashmit, Aryan, Alessandro, and Syazwan — for openly and generously participating in every conceivable way.

    Thank you everyone for making such an important event possible.

      Christian Hergert: Week 30 Status

      news.movim.eu / PlanetGnome • 28 July • 7 minutes

    My approach to engineering involves an engineer’s notebook and a pen at my side almost all the time. My ADHD is so bad that without writing things down I would very much not remember what I did.

    Working at large companies can have a silencing effect on engineers in the community because all our communication energy is burnt on weekly status reports. You see this all the time, and it was famously expected behavior when FOSS people joined Google.

    But it is not unique to Google and I certainly suffer from it myself. So I’m going to try to experiment for a while dropping my status reports here too, at least for the things that aren’t extremely specific to my employer.

    Open Questions

    • What is the state-of-the-art right now for “I want to provide a completely artisan file-system to a container”? For example, say I wanted a FUSE file-system that the build pipeline or other tooling accessed.

      At least when it comes to project sources. Everything else should be read-only anyway.

      It would be nice to allow tooling some read/write access but gate the writes so they are limited to the tools running and not persistent when the tool returns.

    Foundry

    • A bit more testing of Foundry’s replacement for Jsonrpc-GLib, which is a new libdex based FoundryJsonrpcDriver . It knows how to talk a few different types (HTTP-style, \0 or \n delimited, etc).

      LSP backend has been ported to this now along with all the JSON node creating helpers so try to put those through their paces.

    • Add pre-load/post-load to FoundryTextDocumentAddin so that we can add hooks for addins early in the loading process. We actually want this more for avoiding things during buffer loading.

    • Found a nasty issue where creating addins was causing long-running leaks due to the GParameter arrays getting saved for future addin creation. Need to be a bit more clever about initial property setup so that we don’t create this reference cycle.

    • New word-completion plugin for Foundry that takes a different approach from what we did in GtkSourceView. Instead, this runs on demand with a copy of the document buffer on a fiber on a thread. This allows using regex for word boundaries ( \w ) with JIT, no synchronization with GTK, and is just generally _a lot_ faster. It also allows following referenced files from #include -style headers in C/C++/Obj-C, which is something VIM does (at least with plugins) that I very much wanted. (A rough sketch of the idea follows this list.)

      It is nice knowing when a symbol comes from the local file vs an included file as well (again, VIM does this) so I implemented that as well for completeness.

      Make sure it does word de-duplication while I’m at it.

    • Preserve completion activation (user initialized, automatic, etc) to propagate to the completion providers.

    • Live diagnostics tracking is much easier now. You can just create a FoundryOnTypeDiagnostics(document) and it will manage updating things as you go. It is also smart enough to do this with GWeakRef so that we don’t keep underlying buffers/documents/etc alive past the point they should be unloaded (as the worker runs on a fiber).

      You can share a single instance of the live diagnostics using foundry_text_document_watch_diagnostics() to avoid extra work.

    • Add a Git-specific clone API in FoundryGitClone which handles all the annoying things like SSH authentication/etc via the use of our prompt abstraction (TTY, app dialog, etc). This also means there is a new foundry clone ... CLI command to test that infrastructure outside of the IDE. Should help for tracking down weird integration issues.

    • To make the Git cloner API work well I had to remove the context requirement from FoundryAuthPrompt . You’ll never have a loaded context when you want to clone (as there is not yet a project) so that requirement was nonsensical.

    • Add new foundry_vcs_list_commits_with_file() API to get the commit history on a single file. This gives you a list model of FoundryCommit which should make it very easy for applications to browse through file history. One call, bridge the model to a listview and wire up some labels.

    • Add FoundryVcsTree , FoundryVcsDiff , FoundryVcsDelta types and git implementations of them. Like the rest of the new Git abstractions, this all runs threaded using libdex and futures which complete when the thread returns. Still need to iterate on this a bit before the 1.0 API is finalized.

    • New API to generate diffs from trees or find trees by identifiers.

    • Found out that libgit2 does not support the bitmap index of the command line git command. That means that you have to do a lot of diffing to determine what commits contain a specific file. Maybe that will change in the future though. We could always shell out to the git command for this specific operation if it ends up too slow.

    • New CTags parser that allows for read-only memory. Instead of doing the optimization from the past (inserting \0 and using strings in place), the new index keeps string offset/run pairs for a few important parts.

      Then the open-coded binary search to find the nearest partial match (walking backward afterwards to get the first potential match) can keep that in mind for memcmp() .

      We can also send all this work off to the thread pools easily now with libdex/futures.

      Some work still remains if we want to use CTags for symbol resolution but I highly doubt we do.

      Anyway, having CTags is really more just about having an easy test case for the completion engine than “people will actually use this”.

    • Also write a new CTags miner which can build CTags files using whatever ctags engine is installed (universal-ctags, etc). The goal here is, again, to test the infrastructure in a super easy way rather than have people actually use this.

    • A new FoundrySymbolProvider and FoundrySymbol API which allows for some nice ergonomics when bridging to tooling like LSPs.

      It also makes it a lot easier to implement features like pathbars since you can foundry_symbol_list_to_root() and get a future-populated GListModel of the symbol hierarchy. Attach that to a pathbar widget and you’re done.
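
    To make the word-completion idea above a bit more concrete, here is a tiny, language-agnostic sketch of the core trick (regex-based word extraction on a buffer snapshot, with de-duplication and local vs. included origin tracking); this is illustrative Python, not Foundry’s actual implementation:

    import re

    WORD_RE = re.compile(r"\w{3,}")  # word boundaries via \w, skip very short words

    def propose_words(buffer_text, included_texts=()):
        # Map word -> where it was first seen ("local" or "included"),
        # which also de-duplicates proposals.
        seen = {}
        sources = [("local", buffer_text)] + [("included", t) for t in included_texts]
        for origin, text in sources:
            for word in WORD_RE.findall(text):
                seen.setdefault(word, origin)
        return seen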

    Foundry-GTK

    • Make FoundrySourceView final so that we can be a lot more careful about life-cycle tracking of related documents, buffers, and addins.

    • Use FoundryTextDocumentAddin to implement spellchecking with libspelling as it vastly improves life-cycle tracking. We no longer rely on UB in GLib weak reference notifications to do cleanup in the right order.

    • Improve the completion bridge from FoundryCompletionProvider to GtkSourceCompletionProvider . Particularly start on after / comment fields. We still need to get before fields setup for return types.

      Still extremely annoyed at how LSP works in this regard. I mean really, my rage that LSP is what we have has no bounds. It’s terrible in almost every way imaginable.

    Builder

    • Make my Builder rewrite use new FoundrySourceView

    • Rewrite search dialog to use FoundrySearchEngine so that we can use the much faster VCS-backed file-listing + fuzzy search.

    GtkSourceView

    • Got a nice patch for porting space drawing to GskPath, merged it.

    • Make Ctrl+n/Ctrl+p work in VIM emulation mode.

    Sysprof

    • Add support for building introspection/docs. Don’t care about the introspection too much, because I doubt anyone would even use it. But it is nice to have documentation for potential contributors to look at how the APIs work from a higher level.

    GUADEC

    • Couldn’t attend GUADEC this year, so wrote up a talk on Foundry to share with those that are interested in where things are going. Given the number of downloads of the PDF, decided that maybe sharing my weekly status round-up is useful.

    • Watched a number of videos streamed from GUADEC. While watching Felipe demo his new boxes work, I fired up the repository with foundry and things seem to work on aarch64 (Asahi Fedora here).

      That was the first time ever I’ve had an easy experience running a virtual machine on aarch64 Linux. Really pleasing!

    foundry clone https://gitlab.gnome.org/felipeborges/boxes/
    cd boxes/
    foundry init
    foundry run

    LibMKS

    • While testing Boxes on aarch64 I noticed it is using the Cairo framebuffer fallback paintable. That would be fine except I’m running on 150% here and when I wrote that code we didn’t even have real fractional scaling in the Wayland protocol defined.

      That means there are stitch marks showing up for this non-accelerated path. We probably want to choose a tile-size based on the scale-factor and be done with it.

      The accelerated path shouldn’t have this problem since it uses one DMABUF paintable and sets the damage regions for the GSK renderer to do proper damage calculation.

      Bastien Nocera: Digitising CDs (aka using your phone as an image scanner)

      news.movim.eu / PlanetGnome • 27 July • 3 minutes

    I recently found, in the rain, next to a book swap box, a pile of 90's “software magazines”, which I spent my evenings cleaning, drying, and sorting in the days afterwards.

    Magazine cover CDs with nary a magazine

    Those magazines are a peculiar thing in France, using the mechanism of “ Commission paritaire des publications et des agences de presse ” or “Commission paritaire” for short. This structure exists to assess whether a magazine can benefit from state subsidies for the written press (whether on paper at the time, and also the internet nowadays), which include a reduced VAT charge (2.1% instead of 20%), reduced postal rates, and tax exemptions .

    In the 90s, this was used by Diamond Editions[1] (a publisher related to tech shop Pearl , which French and German computer enthusiasts probably know) to publish magazines with just enough original text to qualify for those subsidies, bundled with the really interesting part, a piece of software on CD.

    If you were to visit a French newsagent nowadays, you would be able to find other examples of this: magazines bundled with music CDs, DVDs or Blu-rays, or even toys or collectibles. Some publishers (including the infamous and now shuttered Éditions Atlas ) will even get you a cheap kickstart to a new collection, with the first few issues (and collectibles) available at very interesting prices of a couple of euros, before making that “magazine” subscription-only, with each issue being increasingly more expensive ( article from a consumer protection association ).


    Other publishers have followed suit.

    I guess you can only imagine how much your scale model would end up costing with that business model (50 eurocent for the first part, 4.99€ for the second), although I would expect them to have given up the idea of being categorised as “written press”.

    To go back to Diamond Editions, this meant the eventual birth of 3 magazines: Presqu'Offert , BestSellerGames and StratéJ . I remember me or my dad buying a few of those; an older but legit and complete version of ClarisWorks, CorelDraw, or a talkie version of a LucasArts point'n'click was certainly a more interesting proposition than a cut-down warez version full of viruses when the budget was tight.

    3 of the magazines I managed to rescue from the rain

    You might also be interested in the UK “covertape wars” .

    Don't stress the technique

    This brings us back to today: while the magazines are still waiting to be scanned, I tried to get a wee bit organised and start digitising the CDs.

    Some of them have printing that covers the whole of the CD, but a fair few use the foil/aluminium backing of the CD as a blank surface, which will give you pretty bad results when scanning them with a flatbed scanner: the light source keeps moving with the sensor, and what you'll be scanning is the sensor's reflection on the CD.

    My workaround for this is to use a digital camera (my phone's 24MP camera), with a white foam board behind it, so the blank parts appear more light grey. Of course, this means that you need to take the picture from an angle, and that the CD will appear as an oval instead of perfectly circular.

    I tried for a while to use GIMP's perspective tools, and “Multimedia” Mike Melanson's MobyCAIRO rotation and cropping tool. In the end, I settled on Darktable, which allowed me to do 4-point perspective deskewing; I just had to have those reference points.

    So I came up with a simple "deskew" template , which you can print yourself, although you could probably achieve similar results with grid paper.

    My janky setup

    The resulting picture

    After opening your photo with Darktable, and selecting the “darkroom” tab, go to the “rotate and perspective tool”, select the “manually defined rectangle” structure, and select the 4 centers of the deskewing targets. Then click on horizontal/vertical fit. This will give you a squished CD; don't worry, just select the “specific” lens model and voilà.

    Tools at the ready

    Targets acquired


    Straightened but squished

    You can now export the processed image (I usually use PNG to avoid data loss at each step), open things up in GIMP and use the ellipse selection tool to remove the background (don't forget the center hole), the rotate tool to make the writing straight, and the crop tool to crop it to size.

    And we're done!


    The result of this example is available on Archive.org, with the rest of my uploads being made available on Archive.org and Abandonware-Magazines for those 90s magazines and their accompanying CDs.

    [1]: Full disclosure, I wrote a couple of articles for Linux Pratique and Linux Magazine France in the early 2000s, that were edited by that same company.

      Sam Thursfield: Thoughts during GUADEC 2025

      news.movim.eu / PlanetGnome • 26 July • 5 minutes

    Greetings readers of the future from my favourite open technology event of the year. I am hanging out with the people who develop the GNOME platform talking about interesting stuff.

    Being realistic, I won’t have time to make a readable writeup of the event. So I’m going to set myself a challenge: how much can I write up of the event so far, in 15 minutes?

    Let’s go!

    Conversations and knowledge

    Conferences involve a series of talks, usually monologues on different topics, with slides and demos. A good talk leads to multi-way conversations.

    One thing I love about open source is: it encourages you to understand how things work. Big tech companies want you to understand nothing about your devices beyond how to put in your credit card details and send them money. Sharing knowledge is cool, though. If you know how things work then you can fix it yourself.

    Structures

    Last year, I also attended the conference and was left with a big question for the GNOME project: “What is our story?” (Inspired by an excellent keynote from Ryan Sipes about the Thunderbird email app, and how it’s supported by donations).

    We didn’t answer that directly, but I have some new thoughts.

    Open source desktops are more popular than ever. Apparently we have like 5% of the desktop market share now. Big tech firms are nowadays run as huge piles of cash, whose story is that they need to make more cash, in order to give it to shareholders, so that one day you can, allegedly, have a pension. Their main goal isn’t to make computers do interesting things. The modern for-profit corporation is a super complex institution, with great power, which is often abused.

    Open communities like GNOME are an antidote to that. With way fewer people, they nevertheless manage to produce better software in many cases, but in a way that’s demanding, fun, chaotic, mostly leaderless and which frequently burns out volunteers who contribute.

    Is the GNOME project’s goal to make computers do interesting things? For me, the most interesting part of the conference so far was the focus on project structure . I think we learned some things about how independent, non-profit communities can work, and how they can fail, and how we can make things better.

    In a world where political structures are being heavily tested and, in many cases, are crumbling, we would do well to talk more about structures, and to introspect a bit more on what works and what doesn’t. And to highlight the amazing work that the GNOME Foundation’s many volunteer directors have achieved over the last 30 years to create an institution that still functions today, and in many ways functions a lot better than organizations with significantly more resources.

    Relevant talks

    • Steven Deobald’s keynote
    • Emmanuele’s talk on teams

    Teams

    Emmanuele Bassi tried, in a friendly way, to set fire to long-standing structures around how the GNOME community agrees and disagrees on changes to the platform, based on ideas from other successful projects that are driven by independent, non-profit communities, such as the Rust and Python programming languages.

    Part of this idea is to create well-defined teams of people who collaborate on different parts of the GNOME platform.

    I’ve been contributing to GNOME in different ways for a loooong time, partly due to my day job, where I sometimes work with the technology stack, and partly because it’s a great group of people, we get to meet around the world once a year, and we make software that’s a little more independent from the excesses and the exploitation of modern capitalism, or technofeudalism.

    And I think it’s going to be really helpful to organize my contributions according to a team structure with a defined form.

    Search

    I really hope we’ll have a search team.

    I don’t have much news about search. GNOME’s indexer (localsearch) might start indexing the whole home directory soon. Carlos Garnacho continues to heroically make it work really well.

    QA / Testing / Developer Experience

    I did a talk at the conference (and half of another one with Martín Abente Lahaye) about end-to-end testing using openQA.

    The talks were pretty successful; they led to some interesting conversations with new people. I hope we’ll continue to grow the Linux QA call and try to keep these conversations going, and try to share knowledge and create better structures so that paid QA engineers who are testing products built with GNOME can collaborate on testing upstream.

    Freeform notes

    I’m 8 minutes over time already so the rest of this will be freeform notes from my notepad.

    Live-coding streams aren’t something I watch or create. It’s an interesting way to share knowledge with the new generation of people who have grown up with internet videos as a primary knowledge source. I don’t have age stats for this blog, but I’m curious how many readers under 30 have read this far down. (Leave a comment if you read this and prove me wrong! :-)

    systemd-sysexts for development are going to catch on.

    There should be karaoke every year.

    Fedora Silverblue isn’t actively developed at the moment. bootc is something to keep an eye on.

    GNOME Shell Extensions are really popular and are a good “gateway drug” to get newcomers involved. Nobody figured out a good automated testing story for these yet. I wonder if there’s a QA project there? I wonder if there’s a low-cost way to allow extension developers to test extensions?

    Legacy code is “code without tests”. I’m not sure I agree with that.

    “Toolkits are transient, apps are forever”. That’s spot-on.

    There is a spectrum between being a user and a developer. It’s not a black-and-white distinction.

    BuildStream is still difficult to learn and the documentation isn’t a helpful getting started guide for newcomers.

    We need more live demos of accessibility tools. I still don’t know how you use the screen reader. I’d like to have the computer read to me.

    That’s it for now. It took 34 minutes to empty my brain into my blog, more than planned, but a necessary step. Hope some of it was interesting. See you soon!

      Nancy Wairimu Nyambura: Outreachy Update:Understanding and Improving def-extractor.py

      news.movim.eu / PlanetGnome • 25 July • 2 minutes

    Introduction

    Over the past couple of weeks, I have been working on understanding and improving def-extractor.py , a Python script that processes dictionary data from Wiktionary to generate word lists and definitions in structured formats. My main task has been to refactor the script to use configuration files instead of hardcoded values, making it more flexible and maintainable.

    In this blog post, I’ll explain:

    1. What the script does
    2. How it works under the hood
    3. The changes I made to improve it
    4. Why these changes matter

    What Does the Script Do?

    At a high level, this script processes huge JSONL (JSON Lines) dictionary dumps, like the ones from Kaikki.org , and filters them down into clean, usable formats.

    The def-extractor.py script takes raw dictionary data (from Wiktionary) and processes it into structured formats like:

    • Filtered word lists (JSONL)
    • GVariant binary files (for efficient storage)
    • Enum tables (for parts of speech & word tags)

    It was originally designed to work with specific word lists (Wordnik, Broda, and a test list), but my goal is to make it configurable so it can support any word list with a simple config file.

    How It Works (Step by Step)

    1. Loading the Word List

    The script starts by loading a word list (e.g., Wordnik’s list of common English words). It filters out invalid words (too short, contain numbers, etc.) and stores them in a hash table for quick lookup.
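
    A minimal sketch of that loading step might look something like this (the file name, length limits and alphabet here are assumptions for illustration, not the script's actual values):

    ALPHABET = set("abcdefghijklmnopqrstuvwxyz")

    def load_word_list(path, min_length=2, max_length=20):
        # Keep valid words in a set for O(1) membership checks later.
        words = set()
        with open(path, encoding="utf-8") as f:
            for line in f:
                word = line.strip().lower()
                if not (min_length <= len(word) <= max_length):
                    continue  # too short or too long
                if not set(word) <= ALPHABET:
                    continue  # contains digits, punctuation, etc.
                words.add(word)
        return words

    valid_words = load_word_list("wordlist-20210729.txt")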

    2. Filtering Raw Wiktionary Data

    Next, it processes a massive raw-wiktextract-data.jsonl file (the Wiktionary dump) and keeps only entries that:

    • Match words from the loaded word list
    • Are in the correct language (e.g., English)
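
    Conceptually, this filtering pass is a single streaming loop over the JSONL file. A hedged sketch (the "word" and "lang_code" field names follow the wiktextract format, but treat the details as assumptions):

    import json

    def filter_entries(raw_path, out_path, valid_words, lang_code="en"):
        # Stream the huge dump line by line and keep only matching entries.
        with open(raw_path, encoding="utf-8") as src, \
             open(out_path, "w", encoding="utf-8") as dst:
            for line in src:
                entry = json.loads(line)
                if entry.get("lang_code") != lang_code:
                    continue
                if entry.get("word", "").lower() not in valid_words:
                    continue
                dst.write(json.dumps(entry) + "\n")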

    3. Generating Structured Outputs

    After filtering, the script creates:

    • Enum tables (JSON files listing parts of speech & word tags)
    • GVariant files (binary files for efficient storage and fast lookup)

    What Changes have I Made?

    1. Added Configuration Support

    Originally, the script used hardcoded paths and settings. I modified it to read from .config files, allowing users to define:

    • Source word list file
    • Output directory
    • Word validation rules (min/max length, allowed characters)

    Before (Hardcoded):

    WORDNIK_LIST = "wordlist-20210729.txt"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    After (Configurable):

    [Word List]
    Source = my-wordlist.txt
    MinLength = 2
    MaxLength = 20
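
    On the reading side, a sketch of how such a file could be consumed with Python's standard configparser (the section and key names mirror the example above, but are assumptions about the final format):

    import configparser

    config = configparser.ConfigParser()
    config.read("my-wordlist.conf")

    word_list = config["Word List"]
    source = word_list["Source"]                       # e.g. "my-wordlist.txt"
    min_length = word_list.getint("MinLength", fallback=2)
    max_length = word_list.getint("MaxLength", fallback=20)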

    2. Improved File Path Handling

    Instead of hardcoding paths, the script now constructs them dynamically:

    output_path = os.path.join(config.word_lists_dir, f"{config.id}-filtered.jsonl")

    Why Do These Changes Matter?

    • Flexibility - now supports any word list via config files.
    • Maintainability - no more editing code to change paths or rules.
    • Scalability - easier to add new word lists or languages.
    • Consistency - all settings are in config files.

    Next Steps?

    1. Better Error Handling

    I am working on adding checks for:

    • Missing config fields
    • Invalid word list files
    • Incorrectly formatted data

    2. Unified Word Loading Logic

    There are currently separate functions, load_wordnik() and load_broda(), one per word list.

    I want to merge them into a single load_words(config) that works for any word list.
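
    As a rough sketch of the direction (the attribute names on the config object are assumptions), the unified loader could take its validation rules straight from the configuration:

    def load_words(config):
        # Hypothetical unified loader: works for any configured word list.
        words = set()
        with open(config.source, encoding="utf-8") as f:
            for line in f:
                word = line.strip().lower()
                if config.min_length <= len(word) <= config.max_length:
                    words.add(word)
        return words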

    3. Refactor legacy code for better structure

    Try It Yourself

    1. Download the script: [ wordlist-Gitlab ]
    2. Create a .conf config file
    3. Run: python3 def-extractor.py --config my-wordlist.conf filtered-list

    Happy coding!

      Philip Withnall: A brief parental controls update

      news.movim.eu / PlanetGnome • 24 July

    Over the past few weeks, Ignacy and I have made good progress on the next phase of features for parental controls in GNOME: a refresh of the parental controls UI, support for screen time limits for child accounts, and basic web filtering support are all in progress. I’ve been working on the backend stuff, while Ignacy has been speedily implementing everything needed in the frontend.

    Ignacy is at GUADEC, so please say hi to him! The next phase of parental controls work will involve changes to gnome-control-center and gnome-shell, so he’ll be popping up all over the stack.

    I’ll try and blog more soon about the upcoming features and how they’re implemented, because there are necessarily quite a few moving parts to them.