      Julian Hofer: Git Forges Made Simple: gh & glab

      news.movim.eu / PlanetGnome • 4 August • 6 minutes

    When I set the goal for myself to contribute to open source back in 2018, I mostly struggled with two technical challenges:

    • Python virtual environments, and
    • Git together with GitHub.

    Solving the former is nowadays my job, so let me write up my current workflow for the latter.

    Most people use Git in combination with modern Git forges like GitHub and GitLab. Git doesn't know anything about these forges, which is why CLI tools exist to close that gap. It's still good to know how to handle things without them, so I will also explain how to do things with only Git. For GitHub there's gh, and for GitLab there's glab. Both of them are Go binaries without any dependencies that work on Linux, macOS and Windows. If you don't like any of the provided installation methods, you can simply download the binary, make it executable and put it in your PATH.
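
    If you go that manual route, it looks roughly like this (the paths are only an example; the point is that the binary ends up somewhere on your PATH):

    # assuming the gh (or glab) binary was downloaded to ~/Downloads
    chmod +x ~/Downloads/gh
    mkdir -p ~/.local/bin          # any directory that is on your PATH works
    mv ~/Downloads/gh ~/.local/bin/
    gh --version                   # verify the shell can find it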

    Luckily, they also have mostly the same command line interface. First, you have to log in with the command that corresponds to your Git forge:

    gh auth login
    glab auth login
    

    In the case of gh this even authenticates Git with GitHub. With GitLab, you still have to set up authentication via SSH.

    Working Solo

    The simplest way to use Git is to use it like a backup system. First, you create a new repository on either GitHub or GitLab. Then you clone it with git clone <REPO>. From that point on, all you have to do is:

    • do some work
    • commit
    • push
    • repeat
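
    In commands, that loop boils down to something like this (the commit message is just an example):

    git add .
    git commit -m "Describe what you changed"
    git push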

    On its own, there aren't many reasons to choose this approach over a file-syncing service like Nextcloud. The main reason to do this is that you are either already familiar with the Git workflow or want to get used to it.

    Contributing

    Git truly shines as soon as you start collaborating with others. At a high level, it works like this:

    • You modify some files in a Git repository,
    • you propose your changes via the Git forge,
    • maintainers of the repository review your changes, and
    • as soon as they are happy with your changes, they will integrate them into the main branch of the repository.

    As before, you clone the repository with git clone <REPO>. Change directories into that repository and run git status. The branch that it shows is the default branch, which is probably called main or master. Before you start a new branch, run the following two commands to make sure you start with the latest state of the repository:

    git switch <DEFAULT-BRANCH>
    git pull
    

    You create and switch to a new branch with:

    git switch --create <BRANCH>
    

    That way you can work on multiple features at the same time and easily keep your default branch synchronized with the remote repository.

    The next step is to open a pull request on GitHub or merge request on GitLab. They are equivalent, so I will call both of them pull requests from now on. The idea of a pull request is to integrate the changes from one branch into another branch (typically the default branch). However, you don't necessarily want to give every potential contributor the power to create new branches on your repository. That is why the concept of forks exists. Forks are copies of a repository that are hosted on the same Git forge. Contributors can now create branches on their forks and open pull requests based on these branches.

    If you don't have push access to the repository, now is the time to create your own fork. Without the forge CLI tools, you first fork the repository in the web interface.

    Then, you run the following commands:

    git remote rename origin upstream
    git remote add origin <FORK>
    

    When you cloned your repository, Git set the default branch of the original repo as the upstream branch for your local default branch. This is preserved by the remote rename, which is why the default branch can still be updated from upstream with git pull and no additional arguments.
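
    If you want to sanity-check the result, these read-only commands show the remotes and the tracking relationship (not required for the workflow, just a quick check):

    git remote -v     # upstream points at the original repo, origin at your fork
    git branch -vv    # shows which remote branch each local branch tracks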

    Git Forge Tools #

    Alternatively, you can use the forge tool that corresponds to your Git forge. You still clone and switch branches with Git as shown before. However, you only need a single command to both fork the repository and set up the Git remotes.

    gh repo fork --remote
    glab repo fork --remote
    

    Then, you need to push your local branch. With Git, you first have to tell it to create the corresponding branch on the remote and set it as the upstream branch.

    git push --set-upstream origin <BRANCH>
    

    Next, you open the repository in the web interface, where it will suggest opening a pull request. The upstream branch of your local branch is now configured, which means you can update your remote by running git push without any additional arguments.

    Using pr create pushes your branch, sets up its upstream, and opens the pull request for you, all in one go. If you have a fork available, it will ask whether you want to push your branch there:

    gh pr create
    glab mr create
    

    Checking out Pull Requests

    Often, you want to check out a pull request on your own machine to verify that it works as expected. This is surprisingly difficult with Git alone.

    First, navigate to the repository where the pull request originates in your web browser. This might be the same repository, or it could be a fork.

    If it's the same repository, checking out their branch is not too difficult: you run git switch <BRANCH>, and it's done.

    However, if it's a fork, the simplest way is to add a remote for the user who opened the pull request, fetch their repo, and finally check out their branch.

    In practice, that looks like this:

    git remote add <USER> <FORK_OF_USER>
    git fetch <USER>
    git switch --create <BRANCH> <USER>/<BRANCH>
    

    With the forge CLIs, all you have to do is:

    gh pr checkout <PR_NUMBER>
    glab mr checkout <MR_NUMBER>
    

    It's a one-liner, it works whether the pull request comes from the repo itself or from a fork, and it doesn't set up any additional remotes.

    You don't even have to open your browser to get the pull request number. Simply run one of the following commands to get a list of all open pull requests:

    gh pr list
    glab mr list
    

    If you have push access to the original repository, you will also be able to push to the branch of the pull request, unless the author explicitly opted out of that. This is useful for changes that are easier to make yourself than to communicate via a comment.
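
    If you checked the branch out the plain-Git way described above, pushing such a change back to the contributor's fork looks roughly like this (same placeholders as before); after gh pr checkout or glab mr checkout, a plain git push should do the same:

    # push your local commits to the contributor's branch on their fork
    git push <USER> HEAD:<BRANCH>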

    Finally, you can check the status of your pull request with pr view. By adding --web, it will directly open your web browser for you:

    gh pr view --web
    glab mr view --web
    

    Conclusion

    When I first heard of gh's predecessor hub, I thought it was merely a tool for people who insist on doing everything in the terminal. I only realized relatively recently that Git forge tools are in fact the missing piece to an efficient Git workflow. Hopefully, you now have a better idea of the appeal of these tools!

    Many thanks to Sabrina and Lucas for their comments and suggestions on this article.

    You can find the discussion at this Mastodon post.

      Tobias Bernard: GUADEC 2025

      news.movim.eu / PlanetGnome • 3 August • 5 minutes

    Last week was this year’s GUADEC, the first ever in Italy! Here are a few impressions.

    Local-First

    One of my main focus areas this year was local-first, since that’s what we’re working on right now with the Reflection project (see the previous blog post). Together with Julian and Andreas, we did two lightning talks (one on local-first generally, and one on Reflection in particular), and two BoF sessions.

    Local-First BoF

    At the BoFs we did a bit of Reflection testing, and reached a new record of people simultaneously trying the app:

    This also uncovered a padding bug in the users popover :)

    Andreas also explained the p2panda stack in detail using a new diagram we made a few weeks back, which visualizes how the various components fit together in a real app.

    p2panda stack diagram

    We also discussed some longer-term plans, particularly around having a system-level sync service. The motivation for this is twofold: We want to make it as easy as possible for app developers to add sync to their app. It’s never going to be “free”, but if we can at least abstract some of this in a useful way that’s a big win for developer experience. More importantly though, from a security/privacy point of view we really don’t want every app to have unrestricted access to network, Bluetooth, etc. which would be required if every app does its own p2p sync.

    One option being discussed is taking the networking part of p2panda (including iroh for p2p networking) and making it a portal API which apps can use to talk to other instances of themselves on other devices.

    Another idea was a higher-level portal that works more like a file “share” system and can sync arbitrary files by just attaching the sync context to files as xattrs and having a centralized service handle all the syncing. This would have the advantage of not requiring special UI in apps, just a portal and some integration in Files. Real-time collaboration would of course not be possible without actual app integration, but for many use cases that’s not needed anyway, so perhaps we could have both a high- and low-level API to cover different scenarios?
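
    As a very rough sketch of the xattr idea (the attribute name and value are entirely made up, nothing like this exists yet):

    # hypothetical: tag a file with its sync context so a system service could pick it up
    setfattr -n user.sync.context -v "p2panda:document-1234" notes.md
    getfattr -n user.sync.context notes.md    # read the attribute back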

    There are still a lot of open questions here, but it’s cool to see how things get a little bit more concrete every time :)

    If you’re interested in the details, check out the full notes from both BoF sessions.

    Design

    Jakub and I gave the traditional design team talk – a bit underprepared and last-minute (thanks Jakub for doing most of the heavy lifting), but it was cool to see in retrospect how much we got done in the past year despite how much energy unfortunately went into unrelated things. The all-new slate of websites is especially cool after so many years of gnome.org et al looking very stale. You can find the slides here.

    Jakub giving the design talk

    Inspired by the Summer of GNOME OS challenge many of us are doing, we worked on concepts for a new app to make testing sysexts of merge requests (and nightly Flatpaks) easier. The working title is “Test Ride” (a more sustainable version of Apple’s “Test Flight” :P) and we had fun drawing bicycles for the icon.

    Test Ride app mockup

    Jakub and I also worked on new designs for Georges’ presentation app Spiel (which is being rebranded to “Podium” to avoid the name clash with the a11y project). The idea is to make the app more resilient and its data more future-proof, by going with a file-based approach and a simple syntax on top of Markdown for (limited) layout customization.

    Georges and Jakub discussing Podium designs

    Miscellaneous

    • There was a lot of energy and excitement around GNOME OS. It feels like we’ve turned a corner, finally leaving the “science experiment” stage and moving towards “daily-drivable beta”.
    • I was very happy that the community appreciation award went to Alice Mikhaylenko this year. The impact her libadwaita work has had on the growth of the app ecosystem over the past 5 years cannot be overstated. Not only did she build dozens of useful and beautiful new adaptive widgets, she also has a great sense for designing APIs in a way that will get people to adopt them, which is no small thing. Kudos, very well deserved!
    • While some of the Foundation conflicts of the past year remain unresolved, I was happy to see that Steven’s and the board’s plans are going in the right direction.

    Brescia

    The conference was really well-organized (thanks to Pietro and the local team!), and the venue and city of Brescia had a number of advantages that were not always present at previous GUADECs:

    • The city center is small and walkable, and everyone was staying relatively close by
    • The university is 20 min by metro from the city center, so it didn’t feel like a huge ordeal to go back and forth
    • Multiple vegan lunch options within a few minutes walk from the university
    • Lots of tables (with electrical outlets!) for hacking at the venue
    • Lots of nice places for dinner/drinks outdoors in the city center
    • Many dope ice cream places
    Piazza della Loggia at sunset

    A few (minor) points that could be improved next time:

    • The timetable started veeery early every day, which contributed to a general lack of sleep. Realistically people are not going to sleep before 02:00, so starting the program at 09:00 is just too early. My experience from multi-day events in Berlin is that 12:00 is a good time to start if you want everyone to be awake :)
    • The BoFs could have been spread out a bit more over the two days; there were slots with three parallel ones and times with nothing on the program.
    • The venue closing at 19:00 is not ideal when people are in the zone hacking. Doesn’t have to be all night, but the option to hack until after dinner (e.g. 22:00) would be nice.
    • Since the conference is a week long, accommodation can get a bit expensive, which is not ideal since most people are paying for their own travel and accommodation nowadays. It’d have been great if there were a more affordable option for accommodation, e.g. at student dorms, like at previous GUADECs.
    • A frequent topic was how it’s not ideal to have everyone be traveling and mostly unavailable for reviews a week before feature freeze. It’s also not ideal because any plans you make at GUADEC are not going to make it into the September release, but will have to wait for March. What if the conference was closer to the beginning of the cycle, e.g. in May or June?

    A few more random photos:

    • Matthias showing us fancy new dynamic icon stuff
    • Dinner on the first night feat. yours truly, Robert, Jordan, Antonio, Maximiliano, Sam, Javier, Julian, Adrian, Markus, Adrien, and Andreas
    • Adrian and Javier having an ad-hoc Buildstream BoF at the pizzeria
    • Robert and Maximiliano hacking on Snapshot

      Daiki Ueno: Optimizing CI resource usage in upstream projects

      news.movim.eu / PlanetGnome • 3 August • 5 minutes

    At GnuTLS, our journey into optimizing GitLab CI began when we faced a significant challenge: we lost our GitLab.com Open Source subscription. While we are still hoping that this limitation is temporary, it meant our available CI/CD resources became significantly more constrained. We took this opportunity to find smarter ways to manage our pipelines and reduce our footprint.

    This blog post shares the strategies we employed to optimize our GitLab CI usage, focusing on reducing running time and network resources, which are crucial for any open-source project operating with constrained resources.

    CI on every PR: a best practice, but not cheap

    While running CI on every commit is considered a best practice for secure software development, our experience setting up a self-hosted GitLab runner on a modest Virtual Private Server (VPS) highlighted its cost implications, especially with limited resources. We provisioned a VPS with 2GB of memory and 3 CPU cores, intending to support our GnuTLS CI pipelines.

    The reality, however, was a stark reminder of the resource demands. A single CI pipeline for GnuTLS took an excessively long time to complete, often stretching beyond acceptable durations. Furthermore, the extensive data transfer involved in fetching container images, dependencies, building artifacts, and pushing results quickly led us to reach the bandwidth limits imposed by our VPS provider, resulting in throttled connections and further delays.

    This experience underscored the importance of balancing CI best practices with available infrastructure and budget, particularly for resource-intensive projects.

    Reducing CI Running Time

    Efficient CI pipeline execution is paramount, especially when resources are scarce. GitLab provides an excellent article on pipeline efficiency, though in practice project-specific optimization is needed. We focused on three key areas to achieve faster pipelines:

    • Tiering tests
    • Layering container images
    • De-duplicating build artifacts

    Tiering tests

    Not all tests need to run on every PR. For more exotic or costly tasks, such as extensive fuzzing, generating documentation, or large-scale integration tests, we adopted a tiering approach. These types of tests are resource-intensive and often provide value even when run less frequently. Instead of scheduling them for every PR, they are triggered manually or on a periodic basis (e.g., nightly or weekly builds). This ensures that critical daily development workflows remain fast and efficient, while still providing comprehensive testing coverage for the project without incurring excessive resource usage on every minor change.

    Layering container images

    The tiering of tests gives us an idea of which CI images are more commonly used in the pipeline. For those common CI images, we transitioned to using a more minimal base container image, such as fedora-minimal or debian:<flavor>-slim. This reduced the initial download size and the overall footprint of our build environment.

    For specialized tasks, such as generating documentation or running cross-compiled tests that require additional tools, we adopted a layering approach. Instead of building a monolithic image with all possible dependencies, we created dedicated, smaller images for these specific purposes and layered them on top of our minimal base image as needed within the CI pipeline. This modular approach ensures that only the necessary tools are present for each job, minimizing unnecessary overhead.
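
    As a sketch of what such a layered image can look like (the base image, package names and registry here are illustrative, not the actual GnuTLS CI configuration; buildah is assumed to be available):

    # build a small documentation image on top of a slim base
    ctr=$(buildah from debian:bookworm-slim)
    buildah run "$ctr" -- sh -c 'apt-get update && apt-get install -y --no-install-recommends gtk-doc-tools texinfo && rm -rf /var/lib/apt/lists/*'
    buildah commit "$ctr" registry.example.com/gnutls-ci/docs:latest
    buildah rm "$ctr"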

    De-duplicating build artifacts

    Historically, our CI pipelines involved many “configure && make” steps for various options. One of the major culprits of long build times is repeatedly compiling source code, often producing almost identical results.

    We realized that many of these compile-time options could be handled at runtime. By moving configurations that didn’t fundamentally alter the core compilation process to runtime, we simplified our build process and reduced the number of compilation steps required. This approach transforms a lengthy compile-time dependency into a quicker runtime check.
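
    A minimal sketch of the pattern, with a single compile followed by two runtime-configured test runs (the environment variable here is purely hypothetical, not an actual GnuTLS switch):

    ./configure && make              # build once, with the feature compiled in
    # exercise both behaviours at runtime instead of doing two full rebuilds
    MYLIB_ALLOW_LEGACY=0 make check
    MYLIB_ALLOW_LEGACY=1 make check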

    Of course, this approach cuts both ways: while it simplifies the compilation process, it can increase code size and attack surface. For example, support for legacy protocol features such as SSL 3.0 or SHA-1, which may weaken overall security, should still be possible to switch off at compile time.

    Another caveat is that some compilation options are inherently incompatible with each other. One example is that the thread sanitizer cannot be enabled at the same time as the address sanitizer. In such cases a separate build artifact is still needed.

    The impact: tangible results

    The efforts put into optimizing our GitLab CI configuration yielded significant benefits:

    • The size of the container image used for our standard build jobs is now 2.5GB smaller than before. This substantial reduction in image size translates to faster job startup times and reduced storage consumption on our runners.
    • 9 “configure && make” steps were removed from our standard build jobs. This streamlined the build process and directly contributed to faster execution times.

    By implementing these strategies, we not only adapted to our reduced resources but also built a more efficient, cost-effective, and faster CI/CD pipeline for the GnuTLS project. These optimizations highlight that even small changes can lead to substantial improvements, especially in the context of open-source projects with limited resources.

    For further information on this, please consult the actual changes.

    Next steps

    While the current optimizations have significantly improved our CI efficiency, we are continuously exploring further enhancements. Our future plans include:

    • Distributed GitLab runners with external cache: To further scale and improve resource utilization, we are considering running GitLab runners on multiple VPS instances. To coordinate these distributed runners and avoid redundant data transfers, we could set up an external cache, potentially using a solution like MinIO. This would allow shared access to build artifacts, reducing bandwidth consumption and build times.
    • Addressing flaky tests: Flaky tests, which intermittently pass or fail without code changes, are a major bottleneck in any CI pipeline. They not only consume valuable CI resources by requiring entire jobs to be rerun but also erode developer confidence in the test suite. In TLS testing, it is common to write a test script that sets up a server and a client as separate processes, has the server bind a unique port to which the client connects, and instructs the client to initiate a certain event through a control channel. This kind of test can fail in many ways regardless of the test itself, e.g., the port might already be used by other tests. Therefore, rewriting tests so they do not require such a complex setup would be a good first step.

      Jonathan Blandford: GUADEC 2025: Thoughts and Reflections

      news.movim.eu / PlanetGnome • 2 August • 4 minutes

    Another year, another GUADEC. This was the 25th anniversary of the first GUADEC, and the 25th one I’ve gone to. Although there have been multiple bids for Italy during the past quarter century, this was the first successful one. It was definitely worth the wait, as it was one of the best GUADECs in recent memory.

    GUADEC’s 25th anniversary cake: a birthday cake with sparklers next to a sign saying GUADEC 2025

    This was an extremely smooth conference — way smoother than previous years. The staff and volunteers really came through in a big way and did heroic work! I watched Deepesha, Asmit, Aryan, Maria, Zana, Kristi, Anisa, and especially Pietro all running around making this conference happen. I’m super grateful for their continued hard work in the project. GNOME couldn’t happen without their effort.

    La GNOME vita: a brioche and an espresso cup on a table

    Favorite Talks

    I commented on some talks as they happened (Day 1, Day 2, Day 3). I could only attend one track at a time so I missed a lot of them, but the talks I saw were fabulous. They were much higher quality than usual this year, and I’m really impressed at this community’s creativity and knowledge. As I said, it really was a strong conference.

    I did an informal poll of assorted attendees I ran into on the streets of Brescia on the last night, asking what their favorite talks were. Here are the results:

    Emmanuele standing in front of a slide that says "Whither Maintainers"
    • Emmanuele’s talk on “Getting Things Done In GNOME”: This talk clearly struck a chord amongst attendees. He proposed a path forward on the technical governance of the project. It also had a “Whither Maintainers” slide that led to a lot of great conversations.
    • George’s talk on streaming: This was very personal, very brave, and extremely inspiring. I left the talk wanting to try my hand at live streaming my coding sessions, and I’m not the only one.
    • The poop talk, by Niels: This was a very entertaining lightning talk with a really important message at the end.
    • Enhancing Screen Reader Functionality in Modern Gnome by Lukas: Unfortunately, I was at the other track when this one happened so I don’t know much about it. I’ll have to go back and watch it! That being said, it was so inspiring to see how many people at GUADEC were working on accessibility, and how much progress has been made across the board. I’m in awe of everyone that works in this space.

    Honorable mentions

    In reality, there were many amazing talks beyond the ones I listed. I highly recommend you go back and see them. I know I’m planning on it!

    Crosswords at GUADEC

    Refactoring gnome-crosswords

    We didn’t have a Crosswords update talk this cycle. However, we still had some appearances worth mentioning:

    • Federico gave a talk about how to use unidirectional programming to add tests to your application, and used Crosswords as his example. This is probably the 5th time one of the two of us has talked about this topic. This was the best one to date, though we keep giving it because we don’t think we’ve gotten the explanation right. It’s a complex architectural change which has a lot of nuance, and it is hard for us to explain succinctly. Nevertheless, we keep trying, as we see how this could lead to a big revolution in the quality of GNOME applications. Crosswords is pushing 80KLOC, and this architecture is the only thing allowing us to keep it at a reasonable quality.
    • People keep thinking this is “just” MVC, or a minor variation thereof, but it’s different enough that it requires a new mindset, a disciplined approach, and a good data model. As an added twist, Federico sent an MR to GNOME Calendar to add initial unidirectional support to that application. If crosswords are too obscure a subject for you, then maybe the calendar section of the talk will help you understand it.
    • I gave a lightning talk about some of the awesome work that our GSoC (and prospective GSoC) students are doing.
    • I gave a BoF on Words. It was lightly attended, but led to a good conversation with the always-helpful Martin.
    • Finally, I sat down with Tobias to do a UX review of the crossword game, with an eye to getting it into GNOME Circle. This has been in the works for a long time and I’m really grateful for the chance to do it in person. We identified many papercuts to fix of course, but Tobias was also able to provide a suggestion to improve a long-standing problem with the interface. We sketched out a potential redesign that I’m really happy with. I hope the Circle team is able to continue to do reviews across the GNOME ecosystem, as it provides so much value.

    Personal

    A final comment: I’m reminded again of how much of a family the GNOME community is. For personal reasons, it was a very tough GUADEC for Zana and me. It was objectively a fabulous GUADEC and we really wanted to enjoy it, but couldn’t. We’re humbled at the support and love this community is capable of. Thank you.

      This Week in GNOME: #210 Periodic Updates

      news.movim.eu / PlanetGnome • 1 August • 4 minutes

    Update on what happened across the GNOME project in the week from July 25 to August 01.

    GNOME Core Apps and Libraries

    Libadwaita

    Building blocks for modern GNOME apps using GTK4.

    Alice (she/her) 🏳️‍⚧️🏳️‍🌈 announces

    Last cycle, libadwaita gained a way to query the system document font, even though it was the same as the UI font. This cycle it has been made larger (12pt instead of 11pt) and we have a new .document style class that makes the specified widget use it (as well as increasing line height) - intended to be used for app content such as messages in a chat client.

    Meanwhile, the formerly useless .body style class also features increased line height now (along with .caption) and can be used to make labels such as UI descriptions more legible compared to default styles. Some examples of where it’s already used:

    • Dialog body text
    • Preferences group description
    • Status page description
    • What’s new, legal and troubleshooting sections in about dialogs

    GNOME Circle Apps and Libraries

    revisto announces

    Drum Machine 1.4.0 is out! 🥁

    Drum Machine, a GNOME Circle application, now supports more pages (bars) for longer beats, brings mobile improvements, and has translations in 17 languages! You can try it out and get more creative with it.

    What’s new:
    • Extended pattern grid for longer, more complex rhythms
    • Mobile-responsive UI that adapts to different screen sizes
    • Global reach with translations including Farsi, Chinese, Russian, Arabic, Hebrew, and more

    If you have any suggestions and ideas, you can always contribute and make Drum Machine better; all ideas (even better/more default presets) are welcome!

    Available on Flathub: https://flathub.org/apps/io.github.revisto.drum-machine

    Happy drumming! 🎶

    Screenshot of Drum Machine 1.4.0

    Third Party Projects

    lo reports

    After a while of working on and off, I have finally released the first version of Nucleus, a periodic table app for searching and viewing various properties of the chemical elements!

    You can get it on Flathub: https://flathub.org/apps/page.codeberg.lo_vely.Nucleus

    Nucleus electron shell dialog

    Nucleus periodic table view

    francescocaracciolo says

    Newelle 1.0.0 has been released! Huge release for this AI assistant for GNOME.

    • 📱 Mini Apps support! Extensions can now show custom mini apps on the sidebar
    • 🌐 Added integrated browser Mini App: browse the web directly in Newelle and attach web pages
    • 📁 Improved integrated file manager , supporting multiple file operations
    • 👨‍💻 Integrated file editor : edit files and codeblocks directly in Newelle
    • 🖥 Integrated Terminal mini app: open the terminal directly in Newelle
    • 💬 Programmable prompts : add dynamic content to prompts with conditionals and random strings
    • ✍️ Add ability to manually edit chat name
    • 🪲 Minor bug fixes
    • 🚩 Added support for multiple languages for Kokoro TTS and Whisper.CPP
    • 💻 Run HTML/CSS/JS websites directly in the app
    • ✨ New animation on chat change

    Get it on FlatHub

    Fractal

    Matrix messaging app for GNOME written in Rust.

    Kévin Commaille announces

    Want to get a head start and try out Fractal 12 before its release? That’s what this Release Candidate is for! New since 12.beta:

    • The upcoming room version 12 is supported, with the special power level of room creators
    • Requesting invites to rooms (aka knocking) is now possible
    • Clicking on the name of the sender of a message adds a mention to them in the composer

    As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

    It is available to install via Flathub Beta, see the instructions in our README.

    As the version implies, it should be mostly stable and we expect to only include minor improvements until the release of Fractal 12.

    If you want to join the fun, you can try to fix one of our newcomers issues. We are always looking for new contributors!

    GNOME Websites

    Guillaume Bernard reports

    After months of work and testing, it’s now possible to connect to Damned Lies using third-party providers. You can now use your GNOME SSO account as well as other common providers used by translators: Fedora, Launchpad, GitHub and GitLab.com. Login/password authentication has been disabled and email addresses are validated for security reasons.

    On another note, under the hood, Damned Lies has been modernized: we upgraded to Fedora 42, which provides a fresher gettext (0.23), and we moved to Django 5.2 LTS and Python 3.13. Users can expect performance improvements, as we replaced Apache mod_wsgi with gunicorn, which is said to be more CPU and RAM efficient.

    The next step is working on async git pushes and merge request support. Help is very welcome on these topics!

    Shell Extensions

    Just Perfection reports

    The GNOME Shell 49 port guide for extensions is ready! We are now accepting GNOME Shell 49 extension packages on EGO. Please join us on the GNOME Extensions Matrix Channel if you have any issues porting your extension. Also, thanks to Florian Müllner, gnome-extensions has gained a new upload command for GNOME Shell 49, making it easier to upload your extensions to EGO. You can also use it in CI.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

      Matthew Garrett: Secure boot certificate rollover is real but probably won't hurt you

      news.movim.eu / PlanetGnome • 31 July • 8 minutes

    LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.

    First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.

    That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.
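
    If you're curious what your own machine trusts, mokutil can dump the contents of db; something like this should print the certificate subjects and expiry dates (the exact output format varies between versions):

    mokutil --db | grep -E 'Subject:|Not After'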

    What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.

    This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.

    The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.

    First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual; intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting binaries signed with certificates that have expired, and yet things keep working. Why?
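
    One way to reproduce that yourself looks roughly like this (assuming osslsigncode and openssl are installed; the shim path is Fedora's default, adjust for your distro):

    # extract the Authenticode signature from shim and print the certificate chain
    osslsigncode extract-signature -in /boot/efi/EFI/fedora/shimx64.efi -out shim-sig.der
    openssl pkcs7 -inform DER -in shim-sig.der -print_certs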

    Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.

    The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.

    So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?

    Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.

    How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.

    If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.

    Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.

    [1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)

    [2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays

    [3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in the 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.

    [4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device

    [5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then

    [6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim

    [7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs

      Christian Schaller: Artificial Intelligence and the Linux Community

      news.movim.eu / PlanetGnome • 29 July • 14 minutes

    I have wanted to write this blog post for quite some time, but been unsure about the exact angle of it. I think I found that angle now where I will root the post in a very tangible concrete example.

    So the reason I wanted to write this was because I do feel there is a palpable skepticism and negativity towards AI in the Linux community, and I understand that there are societal implications that worry us all, like how deep fakes have the potential to upend a lot of things from news disbursement to court proceedings. Or how malign forces can use AI to drive narratives in social media etc., is if social media wasn’t toxic enough as it is. But for open source developers like us in the Linux community there is also I think deep concerns about tooling that deeply incurs into something that close to the heart of our community, writing code and being skilled at writing code. I hear and share all those concerns, but at the same time having spent time the last weeks using Claude.ai I do feel it is not something we can afford not to engage with. So I know people have probably used a lot of different AI tools in the last year, some being more cute than useful others being somewhat useful and others being interesting improvements to your Google search for instance. I think I shared a lot of those impressions, but using Claude this last week has opened my eyes to what AI enginers are going to be capable of going forward.

    So my initial test was writing a Python application for internal use at Red Hat, basically connecting to a variety of sources, pulling data and putting together reports, typical management fare. How simple it was impressed me though. I think most of us who have had to pull data from a new source know how painful it can be, with issues ranging from missing to outdated to hard-to-parse API documentation. I think a lot of us also then spend a lot of time experimenting to figure out the right API calls to make in order to pull the data we need. Well, Claude was able to give me Python scripts that pulled that data right away. I still had to spend some time with it to fine-tune the data being pulled and ensure we pulled the right data, but I did it in a fraction of the time I would have spent figuring that stuff out on my own. The one data source Claude struggled with was Fedora’s Bodhi; once I pointed it to the URL with the latest documentation, it figured out that it would be better to use the Bodhi client library to pull data, and from there it was clear sailing.

    So coming off pretty impressed by that experience, I wanted to understand if Claude would be able to put together something programmatically more complex, like a GTK+ application using Vulkan. So I thought about what would be a good example of such an application, and I also figured it would be fun if I found something really old and asked Claude to help me bring it into the current age. I suddenly remembered xtraceroute, an old application originally written in GTK1 and OpenGL that shows your traceroute on a 3D globe.

    Screenshot of original xtraceroute

    Screenshot of the original Xtraceroute application

    I went looking for it and found that while it had been updated to GTK2 since I last looked at it, it had not been touched in 20 years. So I thought, this is a great test case. I grabbed the code and fed it into Claude, asking Claude to give me a modern GTK4 version of this application using Vulkan. Ok, so how did it go? Well, it ended up being an iterative effort, with a lot of back and forth between myself and Claude. One nice feature Claude has is that you can upload screenshots of your application and Claude will use them to help you debug. Thanks to that I got a long list of screenshots showing how this application evolved over the course of the day I spent on it.

    First output of Claude

    This screenshot shows Claude’s first attempt at transforming the 20-year-old xtraceroute application into a modern one using GTK4 and Vulkan, also adding a Meson build system. My prompt to create this was feeding in the old code and asking Claude to come up with a GTK4 and Vulkan equivalent. As you can see, the GTK4 UI is very simple, but ok as it is. The rendered globe leaves something to be desired though. I assume the old code had some 2D fallback code, so Claude latched onto that and focused on trying to use the Cairo API to recreate this application, despite me telling it I wanted a Vulkan application. What we ended up with was a 2D circle that I could spin around like a wheel of fortune. The code did have some Vulkan stuff, but defaulted to the Cairo code.

    Second attempt image

    Second attempt at updating this application. Anyway, I fed the screenshot of my first version back into Claude and said that the image was not a globe, it was missing the texture, and the interaction model was more like a wheel of fortune. As you can see, the second attempt did not fare any better; in fact we went from circle to square. This was also the point where I realized that I hadn’t uploaded the textures into Claude, so I had to tell it to load the earth.png from the local file repository.

    Third attempt by Claude

    Third attempt from Claude. Ok, so I fed my second screenshot back into Claude and pointed out that it was no globe, in fact it wasn’t even a circle, and the texture was still missing. With me pointing out it needed to load the earth.png file from disk, it came back with the texture loading. Well, I really wanted it to be a globe, so I said thank you for loading the texture, now do it on a globe.

    This is the output of the 4th attempt. As you can see, it did bring back a circle, but the texture was gone again. At this point I also decided I didn’t want Claude to waste any more time on the Cairo code; this was meant to be a proper 3D application. So I told Claude to drop all the Cairo code and instead focus on making a Vulkan application.

    Fifth attempt

    So now we finally had something that started looking like something, although it was still a circle, not a globe, and it had that weird division-into-four artifact on the globe. Anyway, I could see it using Vulkan now and it was loading the texture, so I was feeling like we were making some decent forward movement. I wrote a longer prompt describing the globe I wanted and how I wanted to interact with it, and this time Claude did come back with Vulkan code that rendered this as a globe, thus I didn’t end up screenshotting it unfortunately.

    So with the working globe now in place, I wanted to bring in the day/night cycle from the original application. So I asked Claude to load the night texture and use it as an overlay to get that day/night effect. I also asked it to calculate the position of the sun relative to the earth at the current time, so that it could overlay the texture in the right location. As you can see, Claude did a decent job of it, although the colors were broken.

    7th attempt

    So I kept fighting with the color for a bit; Claude could see it was rendering it brown, but could not initially figure out why. I could tell the code was doing things mostly right, so I also asked it to look at some other things, like the fact that when I tried to spin the globe it just twisted the texture. We got that fixed, and I also got Claude to create some test scripts that helped us figure out that the color issue was an RGB vs BGR issue; as soon as we understood that, Claude was able to fix the code to render colors correctly. I also had a few iterations trying to get the scaling and mouse interaction behaving correctly.

    10th attempt

    So at this point I had probably worked on this for 4-5 hours; the globe was rendering nicely and I could interact with it using the mouse. The next step was adding the traceroute lines back. By default Claude had just put in code to render some small dots on the hop points, not draw the lines. Also, the old method for getting the geocoordinates was outdated, but I asked Claude to help me find some current services, which it did, and once I picked one it gave me, on the first try, code that was able to request the geolocation of the IP addresses it got back. To polish it up I also asked Claude to make sure we drew the lines following the globe’s curvature instead of just drawing straight lines.

    Final version

    Final version of the updated Xtraceroute application. It mostly works now, but I did realize why I always thought this was a fun idea yet less interesting in practice: you often don’t get very good traceroutes back, probably due to websites being cached or hosted globally. But I felt that I had proven that with a day’s work Claude was able to help me bring this old GTK application into the modern world.

    Conclusions

    So I am not going to argue that Xtraceroute is an important application that deserved to be saved. In fact, while I feel the current version works and proves my point, I also lost motivation to try to polish it up due to the limitations of tracerouting, but the code is available for anyone who finds it worthwhile.

    But this wasn’t really about Xtraceroute. What I wanted to show here is how someone lacking C and Vulkan development skills can actually use a tool like Claude to put together a working application, even one using more advanced stuff like Vulkan, which I know many besides me would find daunting. I also found Claude really good at producing documentation and architecture documents for your application. It was also able to give me a working Meson build system and create all the desktop integration files for me, like the .desktop file, the metainfo file and so on. For the icons I ended up using Gemini, as Claude does not do image generation at this point, although it was able to take a PNG file and create an SVG version of it (although not a perfect likeness to the original PNG).

    Another thing I want to say is that, the way I think about this, it is not that it makes coding skills less valuable. AIs can do amazing things, but you need to keep a close eye on them to ensure the code they create actually does what you want and that it does it in a sensible manner. For instance, in my reporting application I wanted to embed a PDF file, and Claude’s initial thought was to bring in WebKit to do the rendering. That would have worked, but would have added a very big and complex dependency to my application, so I had to tell it that it could just use libpoppler to do it, something Claude agreed was a much better solution. The bigger the codebase, the harder it also becomes for the AI to deal with it, but I think in those circumstances what you can do is use the AI to give you sample code for the functionality you want in the programming language you want, and then you can just work on incorporating that into your big application.

    The other part here, of course, in terms of open source, is how contributors and projects should deal with this. I know there are projects being drowned in AI-generated CVEs or patches, and that helps nobody. But if we see AI as a developer’s tool, and the developer using the tool as responsible for the code generated, then I think that mindset can help us navigate this. So if you used an AI tool to create a patch for your favourite project, it is your responsibility to verify that patch before sending it in, and by that I don’t mean just verifying the functionality it provides, but also that the code is clean and readable and follows the coding standards of said upstream project. Maintainers on the other hand can use AI to help them review and evaluate patches quicker, and thus this can be helpful on both sides of the equation. I also found Claude and other AI tools like Gemini pretty good at generating test cases for the code they make, so this is another area where open source patch contributions can improve, by improving test coverage for the code.

    I do also believe there are many areas where projects can greatly benefit from AI. For instance, in the GNOME project a constant challenge for extension developers has been keeping their extensions up to date; well, I do believe a tool like Claude or Gemini should be able to update GNOME Shell extensions quite easily. So maybe having a service which tries to provide a patch each time there is a GNOME Shell update might be a great help there. At the same time, having an AI take a look at updated extensions and give a first review of the update might help reduce the load on people doing code reviews on extensions and help flag problematic extensions.

    I know that in a lot of cases and situations, uploading your code to a web service like Claude, Gemini or Copilot is not something you want to or can do. I know privacy is a big concern for many people in the community. My team at Red Hat has been working on a code assistant tool using the IBM Granite model, called Granite.code. What makes Granite different is that it relies on having the model run locally on your own system, so you don’t send your code or data off somewhere else. This of course has great advantages in terms of improving privacy and security, but it has challenges too. The top-end AI models out there at the moment, of which Claude is probably the best at the time of writing this blog post, are running on hardware with vast resources in terms of computing power and memory available. Most of us do not have those kinds of capabilities available at home, so the model size and performance will be significantly lower. So at the moment, if you are looking for a great open source tool to use with VS Code to do things like code completion, I recommend giving Granite.code a look. If you on the other hand want to do something like I have described here, you need to use something like Claude, Gemini or ChatGPT. I do recommend Claude, not just because I believe them to be the best at it at the moment, but also because they are a company trying to hold themselves to high ethical standards. Over time we hope to work with IBM and others in the community to improve local models, and I am also sure local hardware will keep improving, so over time the gap between what you can get with a local model on your laptop and the big cloud-hosted models will be smaller than it is today. There is also the middle-of-the-road option that will become increasingly viable, where you have a powerful server in your home or at your workplace that can at least host a midsize model, and then you connect to that over your LAN. I know IBM is looking at that approach for the next iteration of Granite models, where you can choose from a wide variety of sizes: some small enough to be run on a laptop, others of a size where a strong workstation or small server can run them, and of course the biggest models for people able to invest in top-of-the-line hardware to run their AI.

    Also, the AI space is moving blazingly fast; if you are reading this six months from now, I am sure the capabilities of both online and local models will have changed drastically already.

    So to all my friends in the Linux community: I ask you to take a look at AI and what it can do, and then let’s work together on improving it, not just in terms of capabilities, but also in figuring out things like the societal challenges around it and the sustainability concerns I know a lot of us share.

    What’s next for this code

    As I mentioned, while I felt I got it to a point where I proved to myself it worked, I am not planning on working on it any more. But I did make a cute little application for internal use that shows a spinning globe with all the global Red Hat offices showing up as little red lights, and which pulls in Red Hat news at the bottom. Not super useful either, but I was able to use Claude to refactor the globe rendering code from xtraceroute into this in just a few hours.

    [Image: Red Hat Offices Globe and news.]

    • Pl chevron_right

      Steven Deobald: 2025-07-25 Foundation Update

      news.movim.eu / PlanetGnome • 29 July • 2 minutes

    ## Annual Report

    The 2025 Annual Report is all-but-baked. Deepa and I would like to be completely confident in the final financial figures before publishing. The Board has seen these final numbers during their all-day work day two days ago. I heard from multiple Board members that they’re ecstatic with how Deepa presented the financial report. This was a massive amount of work for Deepa to contribute in her first month volunteering as our new Treasurer, and we all really appreciate everything she’s put into it.

    ## GUADEC and Gratitude

    I’ve organized large events before and I know in my bones how difficult and tiresome they can be. But I don’t think I quite understood the scale of GUADEC. I had heard many times in the past three months “you just have to experience GUADEC to understand it” but I was quite surprised to find the day before the conference so intense and overwhelming that I was sick in bed for the entire first day of the conference — and that’s as an attendee!

    The conference takes the firehose of GNOME development and brings it all into one place. So many things happened here, I won’t attempt to enumerate them all. Instead, I’d like to talk about the energy.

    I have been pretty disoriented since the moment I landed in Italy but, even in my stupor, I was carried along by the energy of the conference. I could see that I wasn’t an exception — everyone I talked to seemed to be sleeping four hours a night but still highly energized, thrilled to take part, to meet their old friends, and to build GNOME together. My experience of the conference was a constant stream of people coming up to me, introducing themselves, telling me their stories, and sharing their dreams for the project. There is a real warmth to everyone involved in GNOME and it radiates from people the moment you meet them. You all made this a very comfortable space, even for an introvert like me.

    There is also incredible history here: folks who have been around for 5 years, 15 years, 25 years, 30 years. Lifelong friends like that are rare and it’s special to witness, as an outsider.

    But more important than anything I have to say about my experience of the conference, I want to proxy the gratitude of everyone I met. Everyone I spoke to, carried through the unbroken days on the energy of the space, kept telling me what a wonderful GUADEC it was. “The best GUADEC I’ve ever been to.” / “It’s so wonderful to meet the local community.” / “Everything is so smooth and well organized.”

    If you were not here and couldn’t experience it yourself, please know how grateful we all are for the hard work of the staff and volunteers. Kristi, for tirelessly managing the entire project and coordinating a thousand variables, from the day GUADEC 2024 ended until the moment she opened GUADEC 2025. Rosanna, for taking time away from all her regular work at the Foundation to give her full attention to the event. Pietro, for all the local coordination before the conference and his attention to detail throughout the conference. And the local/remote volunteer team — Maria, Deepesha, Ashmit, Aryan, Alessandro, and Syazwan — for openly and generously participating in every conceivable way.

    Thank you everyone for making such an important event possible.

    • Pl chevron_right

      Christian Hergert: Week 30 Status

      news.movim.eu / PlanetGnome • 28 July • 7 minutes

    My approach to engineering involves an engineer’s notebook and pen at my side almost all the time. My ADHD is so bad that without writing things down I would very much not remember what I did.

    Working at large companies can have a silencing effect on engineers in the community because all our communication energy is burnt on weekly status reports. You see this all the time, and it was famously expected behavior when FOSS people joined Google.

    But it is not unique to Google and I certainly suffer from it myself. So I’m going to experiment for a while with dropping my status reports here too, at least for the things that aren’t extremely specific to my employer.

    Open Questions

    • What is the state of the art right now for “I want to provide a completely artisan file-system to a container”? For example, say I wanted to have a FUSE file-system that a build pipeline or other tooling accessed.

      At least when it comes to project sources. Everything else should be read-only anyway.

      It would be nice to allow tooling some read/write access, but gate the writes so they are limited to the tool’s run and are not persisted once the tool returns.
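      As a rough sketch of two possible approaches, assuming podman (the FUSE daemon name and paths are placeholders):

      # Option A: mount the FUSE file-system on the host, bind it into the container.
      mkdir -p /tmp/artisan-src
      my-fuse-daemon /tmp/artisan-src &    # placeholder for any FUSE file-system
      podman run --rm -it \
        --mount type=bind,src=/tmp/artisan-src,dst=/workspace \
        fedora:latest bash

      # Option B: hand the container /dev/fuse and let it mount the file-system itself.
      podman run --rm -it --device /dev/fuse --cap-add SYS_ADMIN fedora:latest bash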

    Foundry

    • A bit more testing of Foundry’s replacement for Jsonrpc-GLib, which is a new libdex-based FoundryJsonrpcDriver. It knows how to speak a few different framings (HTTP-style, \0 or \n delimited, etc.).

      The LSP backend has been ported to this now, along with all the JSON node creation helpers, so try to put those through their paces.

    • Add pre-load/post-load to FoundryTextDocumentAddin so that we can add hooks for addins early in the loading process. We actually want this more for avoiding work during buffer loading.

    • Found a nasty issue where creating addins was causing long-running leaks due to the GParameter arrays getting saved for future addin creation. Need to be a bit more clever about initial property setup so that we don’t create this reference cycle.

    • New word-completion plugin for Foundry that takes a different approach from what we did in GtkSourceView. Instead, this runs on demand with a copy of the document buffer on a fiber on a thread. This allows using regex for word boundaries (\w) with JIT, requires no synchronization with GTK, and is just generally _a lot_ faster. It also allows following referenced files from #include-style headers in C/C++/Obj-C, which is something VIM does (at least with plugins) and that I very much wanted.

      It is nice knowing when a symbol comes from the local file vs. an included file (again, VIM does this), so I implemented that as well for completeness.

      Make sure it does word de-duplication while I’m at it.

    • Preserve the completion activation (user initiated, automatic, etc.) and propagate it to the completion providers.

    • Live diagnostics tracking is much easier now. You can just create a FoundryOnTypeDiagnostics(document) and it will manage updating things as you go. It is also smart enough to do this with GWeakRef so that we don’t keep underlying buffers/documents/etc alive past the point they should be unloaded (as the worker runs on a fiber).

      You can share a single instance of the live diagnostics using foundry_text_document_watch_diagnostics() to avoid extra work.

    • Add a Git-specific clone API in FoundryGitClone which handles all the annoying things like SSH authentication/etc via the use of our prompt abstraction (TTY, app dialog, etc). This also means there is a new foundry clone ... CLI command to test that infrastructure outside of the IDE. Should help for tracking down weird integration issues.

    • To make the Git cloner API work well I had to remove the context requirement from FoundryAuthPrompt . You’ll never have a loaded context when you want to clone (as there is not yet a project) so that requirement was nonsensical.

    • Add new foundry_vcs_list_commits_with_file() API to get the commit history on a single file. This gives you a list model of FoundryCommit which should make it very easy for applications to browse through file history. One call, bridge the model to a listview and wire up some labels.

    • Add FoundryVcsTree , FoundryVcsDiff , FoundryVcsDelta types and git implementations of them. Like the rest of the new Git abstractions, this all runs threaded using libdex and futures which complete when the thread returns. Still need to iterate on this a bit before the 1.0 API is finalized.

    • New API to generate diffs from trees or find trees by identifiers.

    • Found out that libgit2 does not support the bitmap index that the command-line git command uses. That means you have to do a lot of diffing to determine which commits contain a specific file. Maybe that will change in the future, though. We could always shell out to the git command for this specific operation if it ends up too slow (a sketch of that fallback follows after this list).

    • New CTags parser that allows for read-only memory. Instead of the optimization done in the past (insert \0 and use the strings in place), the new index keeps a string offset/run for a few important parts.

      Then the open-coded binary search that finds the nearest partial match (walking backward to get the first potential match) can keep that in mind for memcmp().

      We can also send all this work off to the thread pools easily now with libdex/futures.

      Some work still remains if we want to use CTags for symbol resolution but I highly doubt we do.

      Anyway, having CTags is really more just about having an easy test case for the completion engine than “people will actually use this”.

    • Also write a new CTags miner which can build CTags files using whatever ctags engine is installed (universal-ctags, etc). The goal here is, again, to test the infrastructure in a super easy way rather than have people actually use this.

    • A new FoundrySymbolProvider and FoundrySymbol API which allows for some nice ergonomics when bridging to tooling like LSPs.

      It also makes it a lot easier to implement features like pathbars since you can foundry_symbol_list_to_root() and get a future-populated GListModel of the symbol hierarchy. Attach that to a pathbar widget and you’re done.
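    On the libgit2 note above (finding which commits touch a file), a minimal sketch of the shell-out fallback; the path is a placeholder and plain git answers the question directly:

    # Commit hashes that touch a single file, newest first.
    git rev-list HEAD -- src/some-file.c

    # The same question, human readable and following the file across renames.
    git log --follow --oneline -- src/some-file.c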

    Foundry-GTK

    • Make FoundrySourceView final so that we can be a lot more careful about life-cycle tracking of related documents, buffers, and addins.

    • Use FoundryTextDocumentAddin to implement spellchecking with libspelling as it vastly improves life-cycle tracking. We no longer rely on UB in GLib weak reference notifications to do cleanup in the right order.

    • Improve the completion bridge from FoundryCompletionProvider to GtkSourceCompletionProvider. In particular, start on the after/comment fields. We still need to get the before fields set up for return types.

      Still extremely annoyed at how LSP works in this regard. I mean really, my rage that LSP is what we have has no bounds. It’s terrible in almost every way imaginable.

    Builder

    • Make my Builder rewrite use the new FoundrySourceView.

    • Rewrite search dialog to use FoundrySearchEngine so that we can use the much faster VCS-backed file-listing + fuzzy search.

    GtkSourceView

    • Got a nice patch for porting space drawing to GskPath, merged it.

    • Make Ctrl+n/Ctrl+p work in VIM emulation mode.

    Sysprof

    • Add support for building introspection/docs. Don’t care about the introspection too much, because I doubt anyone would even use it. But it is nice to have documentation for potential contributors to look at how the APIs work from a higher level.

    GUADEC

    • Couldn’t attend GUADEC this year, so wrote up a talk on Foundry to share with those that are interested in where things are going. Given the number of downloads of the PDF, decided that maybe sharing my weekly status round-up is useful.

    • Watched a number of videos streamed from GUADEC. While watching Felipe demo his new Boxes work, I fired up the repository with foundry and things seem to work on aarch64 (Asahi Fedora here).

      That was the first time ever I’ve had an easy experience running a virtual machine on aarch64 Linux. Really pleasing!

    foundry clone https://gitlab.gnome.org/felipeborges/boxes/
    cd boxes/
    foundry init
    foundry run

    LibMKS

    • While testing Boxes on aarch64 I noticed it is using the Cairo framebuffer fallback paintable. That would be fine except that I’m running at 150% here, and when I wrote that code we didn’t even have real fractional scaling defined in the Wayland protocol.

      That means there are stitch marks showing up for this non-accelerated path. We probably want to choose a tile-size based on the scale-factor and be done with it.

      The accelerated path shouldn’t have this problem since it uses one DMABUF paintable and sets the damage regions for the GSK renderer to do proper damage calculation.