
      Thibault Martin: TIL that You can spot base64 encoded JSON

      news.movim.eu / PlanetGnome • 5 August • 1 minute

    I was working on my homelab and examined a file that was supposed to contain encrypted content that I could safely commit to a GitHub repository. The file looked like this:

    {
      "serial": 13,
      "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67",
      "meta": {
        "key_provider.pbkdf2.password_key": "eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0="
      },
      "encrypted_data": "ONXZsJhz37eJA[...]",
      "encryption_version": "v0"
    }
    

    Hm, key provider? Password key? In an encrypted file? That doesn't sound right. The problem is that this file is generated by taking a password, deriving a key from it, and encrypting the content with that key. I don't know what the derived key could look like, but it could be that long indecipherable string.

    I asked a colleague to have a look and he said "Oh that? It looks like a base64 encoded JSON. Give it a go to see what's inside."

    I was incredulous but gave it a go, and it worked!!

    $ echo "eyJzYW[...]" | base64 -d
    {"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}
    

    I couldn't believe my colleague had decoded the base64 string on the fly, so I asked. "What gave it away? Was it the trailing equal signs at the end for padding? But how did you know it was base64 encoded JSON and not just a base64 string?"

    He replied,

    Whenever you see ey, that's {" and then if it's followed by a letter, you'll get J followed by a letter.

    I did a few tests in my terminal, and he was right! You can spot base64-encoded JSON with the naked eye, and you don't need to decode it on the fly!

    $ echo "{" | base64
    ewo=
    $ echo "{\"" | base64
    eyIK
    $ echo "{\"s" | base64
    eyJzCg==
    $ echo "{\"a" | base64
    eyJhCg==
    $ echo "{\"word\"" | base64
    eyJ3b3JkIgo=
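
    To see why this works, it helps to look at the bits (a little illustration of my own, not from the post): base64 re-slices every 3 input bytes into 4 six-bit indices into its alphabet, so the first two output characters are fully determined by { and ", and the third only needs the top two bits of whatever comes next - which are 01 for any ASCII letter, giving J:

    #include <stdint.h>
    #include <stdio.h>

    static const char alphabet[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    int main(void)
    {
        const uint8_t input[3] = { '{', '"', 's' };   /* 0x7B 0x22 0x73 */
        uint32_t bits = (uint32_t) input[0] << 16 | (uint32_t) input[1] << 8 | input[2];

        /* Re-slice the 24 bits into four 6-bit alphabet indices, MSB first. */
        for (int shift = 18; shift >= 0; shift -= 6)
            putchar(alphabet[(bits >> shift) & 0x3F]);
        putchar('\n');   /* prints "eyJz" */
        return 0;
    }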
    

    Thanks Davide and Denis for showing me this simple but pretty useful trick!


      Matthew Garrett: Cordoomceps - replacing an Amiga's brain with Doom

      news.movim.eu / PlanetGnome • 5 August • 11 minutes

    There's a lovely device called a pistorm , an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

    These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

    We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

    So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.

    And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

    Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

    The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

    But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

    The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

    We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

    First, let's talk about Amiga graphics . We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

    Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
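
    For a sense of what that looks like, here is a sketch of my own (using the publicly documented OCS register offsets, not this project's actual list): each copper instruction is two 16-bit words, a MOVE names a chipset register plus the value to write, and a WAIT names a beam position - waiting for a position that never arrives is the conventional way to stop until the next frame restarts the list:

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed OCS offsets: BPL1PTH/BPL1PTL at 0x0E0/0x0E2, with the pairs for
       the other five EHB planes following at 0x0E4, 0x0E8, and so on. */
    #define BPL1PTH 0x00E0
    #define BPL1PTL 0x00E2

    static size_t build_copper_list(uint16_t *list, uint32_t plane_addr)
    {
        size_t n = 0;

        list[n++] = BPL1PTH;                      /* MOVE: high word of the pointer */
        list[n++] = (uint16_t) (plane_addr >> 16);
        list[n++] = BPL1PTL;                      /* MOVE: low word of the pointer  */
        list[n++] = (uint16_t) (plane_addr & 0xFFFF);

        list[n++] = 0xFFFF;                       /* WAIT for a beam position that  */
        list[n++] = 0xFFFE;                       /* never arrives: end of the list */

        return n;   /* words to copy into chip RAM and point COP1LC at */
    }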

    Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom , the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
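
    Those nested loops boil down to something like this (a sketch under assumed dimensions, not the actual code): for every pixel, scatter its 6-bit colour index across the six planes, one bit per plane, with the leftmost pixel of each byte landing in the most significant bit:

    #include <stdint.h>

    #define WIDTH  320    /* assumed Doom frame size */
    #define HEIGHT 200
    #define PLANES 6      /* EHB needs six bitplanes */

    /* chunky: WIDTH * HEIGHT bytes, each already remapped to a 6-bit EHB index */
    static void chunky_to_planar(const uint8_t *chunky,
                                 uint8_t planes[PLANES][WIDTH / 8 * HEIGHT])
    {
        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++) {
                uint8_t pixel = chunky[y * WIDTH + x];
                int     byte  = y * (WIDTH / 8) + x / 8;
                uint8_t mask  = 0x80 >> (x & 7);      /* leftmost pixel = MSB */

                for (int p = 0; p < PLANES; p++) {
                    if (pixel & (1u << p))
                        planes[p][byte] |= mask;
                    else
                        planes[p][byte] &= (uint8_t) ~mask;
                }
            }
        }
    }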

    And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

    Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does; I'm only in this position because I've made poor life choices. But OK, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.
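
    For the record, the documented way to do that looks roughly like this from the 68000 side (my own sketch from the public hardware reference, not this project's code, which issues the accesses over the pistorm bridge rather than through raw pointers - and, as the next paragraph explains, on this board it only works after pulsing reset):

    #include <stdint.h>

    /* Assumed CIA-A addresses from the public Amiga memory map. */
    #define CIAA_PRA  ((volatile uint8_t *) 0xBFE001)   /* peripheral data register A */
    #define CIAA_DDRA ((volatile uint8_t *) 0xBFE201)   /* data direction register A  */

    static void disable_rom_overlay(void)
    {
        *CIAA_DDRA |= 0x01;              /* drive the /OVL line ourselves      */
        *CIAA_PRA  &= (uint8_t) ~0x01;   /* pull it low: chip RAM appears at 0 */
    }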

    Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

    So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

    [1] This is why we had trouble with late-era 32-bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space, so you couldn't put RAM there, so you ended up with less than 4GB of RAM.


      Christian Hergert: Week 31 Status

      news.movim.eu / PlanetGnome • 4 August • 7 minutes

    Foundry

    • Added a new gutter renderer for diagnostics using the FoundryOnTypeDiagnostics described last week.

    • Wrote another new gutter renderer for “line changes”.

      I’m really happy with how I can use fibers w/ GWeakRef to do worker loops but not keep the “owner object” alive. As long as you have a nice way to break out of the fiber loop when the object disposes (e.g. trigger a DexCancellable / DexPromise /etc) then writing this sort of widget is cleaner/simpler than before w/ GAsyncReadyCallback .

      foundry-changes-gutter-renderer.c

    • Added a :show-overview property to the line changes renderer which conveniently allows it to work as both a per-line change status and be placed in the right-side gutter as an overview of the whole document to see your place in it. Builder just recently got this feature implemented by Nokse and this is basically just a simplified version of that thanks to fibers.

    • Abstracted TTY auth input into a new FoundryInput abstraction. This is currently used by the git subsystem to acquire credentials for SSH, krb, user, user/pass, etc. depending on what the peer supports. However, it became pretty obvious to me that we can use it for more than just Git. It maps pretty well to at least two more features coming down the pipeline.

      Since the input mechanisms are used on a thread for TTY input (to avoid blocking main loops, fiber schedulers, etc), they needed to be thread-safe. Most things are immutable and a few well controlled places are mutable.

      The concept of a validator is implemented externally as a FoundryInputValidator, which allows for re-use and separates the mechanism from policy. I quite like how it turned out, honestly.

      There are abstractions for text, switches, choices, files. You might notice they will map fairly well to AdwPreferenceRow things and that is by design, since in the apps I manage, that would be their intended display mechanism.

    • Templates have finally landed in Foundry with the introduction of a FoundryTemplateManager , FoundryTemplateProvider , and FoundryTemplate . They use the new generalized FoundryInput abstractions that were discussed above.

      That allows for a foundry template list command to list templates and foundry template create to expand a certain template.

      The FoundryInput of the templates are queried via the PTY just like username/password auth works via FoundryInput . Questions are asked, input received, template expansion may continue.

      This will also allow for dynamic creation of the “Create Template” widgetry in Builder later on without sacrificing on design.

    • Meson templates from Builder have also been ported over which means that you can actually use those foundry template commands above to replace your use of Builder if that is all you used it for.

      All the normal ones are there (GTK, Adwaita, library, cli, etc).

    • A new license abstraction was created so that libraries and tooling can get access to licenses/snippets in a simple form w/o duplication. That generally gets used for template expansion and file headers.

    • The FoundryBuildPipeline gained a new vfunc for prepare_to_run() . We always had this in Builder but it never came over to Foundry until now.

      This is the core mechanism behind being able to run a command as if it were the target application (e.g. unit tests).

    • After doing the template work, I realized that we should probably just auto initialize the project so you don’t have to run foundry init afterwards. Extracted the mechanism for setting up the initial .foundry directory state and made templates use that.

    • One of the build pipeline mechanisms still missing from Builder is the ability to sit in the middle of a PTY and extract build diagnostics. This is how errors from GCC are extracted during the build (as well as for other languages).

      So I brought over our “PTY intercept” which takes your consumer FD and creates a producer FD which is bridged to another consumer FD.

      Then the JIT’d error extract regexes may be run over the middle and then create diagnostics as necessary.

      To make this simple to consume in applications, a new FoundryPtyDiagnostics object is created. You set the PTY to use for that and attach its intercept PTY to the build/run managers’ default PTY, and then all the GActions wire up correctly. That object is also a GListModel, making it easy to display in application UI.

    • A FoundryService is managed by the FoundryContext. They are just subsystems that combine to do useful things in Foundry. One way to interact with them is via GAction, as the base class implements GActionGroup.

      I did some cleanup to make this work well, and now you can just attach the FoundryContext’s GActionGroup, obtained with foundry_context_dup_action_group(), to a GtkWindow using gtk_widget_insert_action_group(). At that point your buttons are basically just "context.build-manager.build" for the action-name property (a minimal sketch follows at the end of this item).

      All sorts of services export actions now for operations like build, run, clean, invalidate, purge, update dependencies, etc.

      There is a test GTK app in testsuite/tools/ that you can play with to get ideas and/or integrate into your own app. It also integrates the live diagnostics/PTY code to exemplify that.
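
      A minimal sketch of that wiring (the glue code is assumed on my part; only the two calls and the action name come from the description above):

      #include <gtk/gtk.h>
      /* plus the Foundry header providing FoundryContext */

      static void
      attach_context_actions (FoundryContext *context,
                              GtkWindow      *window)
      {
        GActionGroup *group = foundry_context_dup_action_group (context);

        /* Expose every FoundryService action under the "context." prefix. */
        gtk_widget_insert_action_group (GTK_WIDGET (window), "context", group);
        g_object_unref (group);   /* the window keeps its own reference */

        /* Any actionable widget can now target a service action directly. */
        GtkWidget *button = gtk_button_new_with_label ("Build");
        gtk_actionable_set_action_name (GTK_ACTIONABLE (button),
                                        "context.build-manager.build");
        gtk_window_set_child (window, button);
      }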

    • Fixed the FoundryNoRun tool to connect to the proper PTY in the deployment/run phase.

    • The purge operation now writes information about what files are being deleted to the default build PTY.

    • The new FoundryTextSettings abstraction has landed which is roughly similar to IdeFileSettings in Builder. This time it is much cleaned up now that we have DexFuture to work with.

      I’ve ported the editorconfig support over to use this as well as a new implementation of modeline support which again, is a lot simpler now that we can use fibers/threadpools effectively.

      Plugins can set their text-settings priority in their .plugin file. That way settings can have a specific order such as user-overrides, modelines, editorconfig, gsettings overrides, language defaults, and what-not.

    • The FoundryVcs gained a new foundry_vcs_query_file_status() API which allows querying for the, shocking, file status. That will give you bitflags telling you whether a file is new/modified/deleted in either the stage or the working tree.

      To make this even more useful, you can use the FoundryDirectoryListing class (which is a GListModel of FoundryDirectoryItem ) to include vcs::status file-attribute and your GFileInfo will be populated with the uint32 bitflags for a key under the same name.

      It’s also provided as a property on the FoundryDirectoryItem to make writing those git “status icons” dead simple in file panels.

    Boxes

    • Found an issue w/ trailing \x00 in paths when new Boxes is opening an ISO from disk on a system with older xdg portals. Sent a pointer on the issue tracker to what Text Editor had to do as well here.

    Libpeas

    • GJS gained support for pkgconfig variables and we use that now to determine which mozjs version to link against. That is required to be able to use the proper JS API we need to set up the context.

    Ptyxis

    • Merged some improvements to the custom link support in Ptyxis. This is used to allow you to highlight custom URL regexes, so you can turn things like “RHEL-1234” into a link to the RHEL issue tracker.

    • Tracked down an issue filed about tab/window titles not updating. It was just an issue with $PROMPT_COMMAND overwriting what they had just changed.

    Text Editor

    • A lot of the maintainership of this program is just directing people to the right place, be that GtkSourceView, GTK, shared-mime-info, etc. Did more of that.

      As an aside, I really wish people spent more time understanding how things work rather than fire-and-forget. The FOSS community used to take pride in ensuring that issue reports landed in the right place to avoid overburdening maintainers, and I’m sad that it’s been lost in the past decade or so. Probably just a sign of success.

    Builder

    • Did a quick and dirty fix for a hang that could slow down startup due to the Manuals code going to the worker process to get the default architecture. Builder doesn’t link against Flatpak in the UI process, hence why it did that. But it’s also super easy to put in a couple-line hard-coded #ifdef and avoid the whole RPC.

    Libdex

    • Released 0.11.1 for GNOME 49.beta. I’m strongly considering making the actual 49 release our 1.0. Things have really solidified over the past year with libdex and I’m happy enough to put my stamp of approval on that.

    Libspelling

    • Fixed an issue with discovery of the no-spellcheck-tag, which is used to avoid spellchecking things that are general syntax in language specifications. Helps a bunch when loading a large document, where that can get out of sync/changed before the worker discovers it.

    • Fixed a LSAN discovered leak in the testsuite. Still one more to go. Fought LSAN and CI a bit because I can’t seem to reproduce what the CI systems get.

    Other

    • Told ChatGPT to spit out a throwaway script that parses my status reports and converts them into something generally usable by WordPress. Obviously there is a lot of dislike/scrutiny/distrust of LLMs and their creators/operators, but I really don’t see the metaphorical cat going back in the bag when you enable people to scratch an itch in a few seconds. I certainly hope we continue to scrutinize and control scope though.


      Peter Hutterer: unplug - a tool to test input devices via uinput

      news.movim.eu / PlanetGnome • 4 August • 1 minute

    Yet another day, yet another need for testing a device I don't have. That's fine, and that's why many years ago I wrote libinput record and libinput replay (more powerful successors to evemu and evtest). Alas, this time I had a dependency on multiple devices being present in the system, in a specific order, sending specific events. And juggling that many terminal windows with libinput replay open was annoying. So I decided it was worth the time to fix this once and for all (haha, lolz) and wrote unplug. The target market for this is niche, but if you're in the same situation, it'll be quite useful.

    Pictures cause a thousand words to finally shut up and be quiet so here's the screenshot after running pip install unplug [1]:

    This shows the currently pre-packaged set of recordings that you get for free when you install unplug. For your use-case you can run libinput record , save the output in a directory and then start unplug path/to/directory . The navigation is as expected, hitting enter on the devices plugs them in, hitting enter on the selected sequence sends that event sequence through the previously plugged device.

    Annotation of the recordings (which must end in .yml to be found) can be done by adding a YAML unplug: entry with a name and optionally a multiline description . If you have recordings that should be included in the default set, please file a merge request . Happy emulating!

    [1] And allowing access to /dev/uinput . Details, schmetails...


      Julian Hofer: Git Forges Made Simple: gh & glab

      news.movim.eu / PlanetGnome • 4 August • 6 minutes

    When I set the goal for myself to contribute to open source back in 2018, I mostly struggled with two technical challenges:

    • Python virtual environments, and
    • Git together with GitHub.

    Solving the former is nowadays my job , so let me write up my current workflow for the latter.

    Most people use Git in combination with modern Git forges like GitHub and GitLab. Git doesn't know anything about these forges, which is why CLI tools exist to close that gap. It's still good to know how to handle things without them, so I will also explain how to do things with only Git. For GitHub there's gh and for GitLab there's glab . Both of them are Go binaries without any dependencies that work on Linux, macOS and Windows. If you don't like any of the provided installation methods, you can simply download the binary, make it executable and put it in your PATH .

    Luckily, they also have mostly the same command line interface. First, you have to login with the command that corresponds to your git forge:

    gh auth login
    glab auth login
    

    In the case of gh this even authenticates Git with GitHub. With GitLab, you still have to set up authentication via SSH .

    Working Solo #

    The simplest way to use Git is to use it like a backup system. First, you create a new repository on either GitHub or GitLab. Then you clone it with git clone <REPO>. From that point on, all you have to do is:

    • do some work
    • commit
    • push
    • repeat

    On its own there aren't a lot of reasons to choose this approach over a file syncing service like Nextcloud. No, the main reason you do this is that you are either already familiar with the Git workflow or want to get used to it.

    Contributing #

    Git truly shines as soon as you start collaborating with others. On a high level this works like this:

    • You modify some files in a Git repository,
    • you propose your changes via the Git forge,
    • maintainers of the repository review your changes, and
    • as soon as they are happy with your changes, they will integrate your changes into the main branch of the repository.

    As before, you clone the repository with git clone <REPO> . Change directories into that repository and run git status . The branch that it shows is the default branch which is probably called main or master . Before you start a new branch, you will run the following two commands to make sure you start with the latest state of the repository:

    git switch <DEFAULT-BRANCH>
    git pull
    

    You switch and create a new branch with:

    git switch --create <BRANCH>
    

    That way you can work on multiple features at the same time and easily keep your default branch synchronized with the remote repository.

    The next step is to open a pull request on GitHub or merge request on GitLab. They are equivalent, so I will call both of them pull requests from now on. The idea of a pull request is to integrate the changes from one branch into another branch (typically the default branch). However, you don't necessarily want to give every potential contributor the power to create new branches on your repository. That is why the concept of forks exists. Forks are copies of a repository that are hosted on the same Git forge. Contributors can now create branches on their forks and open pull requests based on these branches.

    If you don't have push access to the repository, now it's time to create your own fork. Without the forge CLI tools, you first fork the repository in the web interface.

    Then, you run the following commands:

    git remote rename origin upstream
    git remote add origin <FORK>
    

    When you cloned your repository, Git set the default branch of the original repo as the upstream branch for your local default branch. This is preserved by the remote rename, which is why the default branch can still be updated from upstream with git pull and no additional arguments.

    Git Forge Tools #

    Alternatively, you can use the forge tool that corresponds to your git forge. You still clone and switch branches with Git as shown before. However, you only need a single command to both fork the repository and set up the git remotes.

    gh repo fork --remote
    glab repo fork --remote
    

    Then, you need to push your local branch. With Git, you first have to tell it that it should create the corresponding branch on the remote and set it as upstream branch.

    git push --set-upstream origin <BRANCH>
    

    Next, you open the repository in the web interface, where it will suggest opening a pull request. The upstream branch of your local branch is now configured, which means you can update your remote by running git push without any additional arguments.

    Using pr create directly pushes and sets up your branch, and opens the pull request for you. If you have a fork available, it will ask whether you want to push your branch there:

    gh pr create
    glab mr create
    

    Checking out Pull Requests #

    Often, you want to check out a pull request on your own machine to verify that it works as expected. This is surprisingly difficult with Git alone.

    First, navigate to the repository where the pull request originates in your web browser. This might be the same repository, or it could be a fork.

    If it's the same repository, checking out their branch is not too difficult: you run git switch <BRANCH> , and it's done.

    However, if it's a fork, the simplest way is to add a remote for the user who opened the pull request, fetch their repo, and finally check out their branch.

    This looks like this:

    git remote add <USER> <FORK_OF_USER>
    git fetch <USER>
    git switch <USER>/<BRANCH>
    

    With the forge CLIs, all you have to do is:

    gh pr checkout <PR_NUMBER>
    glab mr checkout <MR_NUMBER>
    

    It's a one-liner, it works whether the pull request comes from the repo itself or from a fork, and it doesn't set up any additional remotes.

    You don't even have to open your browser to get the pull request number. Simply run the following commands, and it will give you a list of all open pull requests:

    gh pr list
    glab mr list
    

    If you have push access to the original repository, you will also be able to push to the branch of the pull request unless the author explicitly opted out of that. This is useful for changes that are easier to do yourself than communicating via a comment.

    Finally, you can check out the status of your pull request with pr view . By adding --web , it will directly open your web browser for you:

    gh pr view --web
    glab mr view --web
    

    Conclusion #

    When I first heard of gh's predecessor hub, I thought it was merely a tool for people who insist on doing everything in the terminal. I only realized relatively recently that Git forge tools are in fact the missing piece to an efficient Git workflow. Hopefully, you now have a better idea of the appeal of these tools!

    Many thanks to Sabrina and Lucas for their comments and suggestions on this article.

    You can find the discussion at this Mastodon post .


      Tobias Bernard: GUADEC 2025

      news.movim.eu / PlanetGnome • 3 August • 5 minutes

    Last week was this year’s GUADEC, the first ever in Italy! Here are a few impressions.

    Local-First

    One of my main focus areas this year was local-first, since that’s what we’re working on right now with the Reflection project (see the previous blog post ). Together with Julian and Andreas we did two lightning talks (one on local-first generally, and one on Reflection in particular), and two BoF sessions.

    Local-First BoF

    At the BoFs we did a bit of Reflection testing, and reached a new record of people simultaneously trying the app:

    This also uncovered a padding bug in the users popover :)

    Andreas also explained the p2panda stack in detail using a new diagram we made a few weeks back, which visualizes how the various components fit together in a real app.

    p2panda stack diagram

    We also discussed some longer-term plans, particularly around having a system-level sync service. The motivation for this is twofold: We want to make it as easy as possible for app developers to add sync to their app. It’s never going to be “free”, but if we can at least abstract some of this in a useful way that’s a big win for developer experience. More importantly though, from a security/privacy point of view we really don’t want every app to have unrestricted access to network, Bluetooth, etc. which would be required if every app does its own p2p sync.

    One option being discussed is taking the networking part of p2panda (including iroh for p2p networking) and making it a portal API which apps can use to talk to other instances of themselves on other devices.

    Another idea was a more high-level portal that works more like a file “share” system that can sync arbitrary files by just attaching the sync context to files as xattrs and having a centralized service handle all the syncing. This would have the advantage of not requiring special UI in apps, just a portal and some integration in Files. Real-time collaboration would of course not be possible without actual app integration, but for many use cases that’s not needed anyway, so perhaps we could have both a high- and low-level API to cover different scenarios?

    There are still a lot of open questions here, but it’s cool to see how things get a little bit more concrete every time :)

    If you’re interested in the details, check out the full notes from both BoF sessions .

    Design

    Jakub and I gave the traditional design team talk – a bit underprepared and last-minute (thanks Jakub for doing most of the heavy lifting), but it was cool to see in retrospect how much we got done in the past year despite how much energy unfortunately went into unrelated things. The all-new slate of websites is especially cool after so many years of gnome.org et al looking very stale. You can find the slides here .

    Jakub giving the design talk

    Inspired by the Summer of GNOME OS challenge many of us are doing, we worked on concepts for a new app to make testing sysexts of merge requests (and nightly Flatpaks) easier. The working title is “Test Ride” (a more sustainable version of Apple’s “Test Flight” :P) and we had fun drawing bicycles for the icon.

    Test Ride app mockup

    Jakub and I also worked on new designs for Georges’ presentation app Spiel (which is being rebranded to “Podium” to avoid the name clash with the a11y project ). The idea is to make the app more resilient and data more future-proof, by going with a file-based approach and simple syntax on top of Markdown for (limited) layout customization.

    Georges and Jakub discussing Podium designs

    Miscellaneous

    • There was a lot of energy and excitement around GNOME OS. It feels like we’ve turned a corner, finally leaving the “science experiment” stage and moving towards “daily-drivable beta”.
    • I was very happy that the community appreciation award went to Alice Mikhaylenko this year. The impact her libadwaita work has had on the growth of the app ecosystem over the past 5 years can not be overstated. Not only did she build dozens of useful and beautiful new adaptive widgets, she also has a great sense for designing APIs in a way that will get people to adopt them, which is no small thing. Kudos, very well deserved!
    • While some of the Foundation conflicts of the past year remain unresolved, I was happy to see that Steven’s and the board’s plans are going in the right direction.

    Brescia

    The conference was really well-organized (thanks to Pietro and the local team!), and the venue and city of Brescia had a number of advantages that were not always present at previous GUADECs:

    • The city center is small and walkable, and everyone was staying relatively close by
    • The university is 20 min by metro from the city center, so it didn’t feel like a huge ordeal to go back and forth
    • Multiple vegan lunch options within a few minutes walk from the university
    • Lots of tables (with electrical outlets!) for hacking at the venue
    • Lots of nice places for dinner/drinks outdoors in the city center
    • Many dope ice cream places
    Piazza della Loggia at sunset

    A few (minor) points that could be improved next time:

    • The timetable started veeery early every day, which contributed to a general lack of sleep. Realistically people are not going to sleep before 02:00, so starting the program at 09:00 is just too early. My experience from multi-day events in Berlin is that 12:00 is a good time to start if you want everyone to be awake :)
    • The BoFs could have been spread out a bit more over the two days, there were slots with three parallel ones and times with nothing on the program.
    • The venue closing at 19:00 is not ideal when people are in the zone hacking. Doesn’t have to be all night, but the option to hack until after dinner (e.g. 22:00) would be nice.
    • Since the conference is a week long, accommodation can get a bit expensive, which is not ideal since most people are paying for their own travel and accommodation nowadays. It’d have been great if there was a more affordable option for accommodation, e.g. at student dorms, like at previous GUADECs.
    • A frequent topic was how it’s not ideal to have everyone be traveling and mostly unavailable for reviews a week before feature freeze. It’s also not ideal because any plans you make at GUADEC are not going to make it into the September release, but will have to wait for March. What if the conference was closer to the beginning of the cycle, e.g. in May or June?

    A few more random photos:

    • Matthias showing us fancy new dynamic icon stuff
    • Dinner on the first night feat. yours truly, Robert, Jordan, Antonio, Maximiliano, Sam, Javier, Julian, Adrian, Markus, Adrien, and Andreas
    • Adrian and Javier having an ad-hoc Buildstream BoF at the pizzeria
    • Robert and Maximiliano hacking on Snapshot

      Daiki Ueno: Optimizing CI resource usage in upstream projects

      news.movim.eu / PlanetGnome • 3 August • 5 minutes

    At GnuTLS , our journey into optimizing GitLab CI began when we faced a significant challenge: we lost our GitLab.com Open Source subscription. While we are still hoping that this limitation is temporary , this meant our available CI/CD resources became significantly lower. We took this opportunity to find smarter ways to manage our pipelines and reduce our footprint.

    This blog post shares the strategies we employed to optimize our GitLab CI usage, focusing on reducing running time and network resources, which are crucial for any open-source project operating with constrained resources.

    CI on every PR: a best practice, but not cheap

    While running CI on every commit is considered a best practice for secure software development, our experience setting up a self-hosted GitLab runner on a modest Virtual Private Server (VPS) highlighted its cost implications, especially with limited resources. We provisioned a VPS with 2GB of memory and 3 CPU cores, intending to support our GnuTLS CI pipelines.

    The reality, however, was a stark reminder of the resource demands. A single CI pipeline for GnuTLS took an excessively long time to complete, often stretching beyond acceptable durations. Furthermore, the extensive data transfer involved in fetching container images, dependencies, building artifacts, and pushing results quickly led us to reach the bandwidth limits imposed by our VPS provider, resulting in throttled connections and further delays.

    This experience underscored the importance of balancing CI best practices with available infrastructure and budget, particularly for resource-intensive projects.

    Reducing CI Running Time

    Efficient CI pipeline execution is paramount, especially when resources are scarce. GitLab provides an excellent article on pipeline efficiency, though in practice, project specific optimization is needed. We focused on three key areas to achieve faster pipelines:

    • Tiering tests
    • Layering container images
    • De-duplicating build artifacts

    Tiering tests

    Not all tests need to run on every PR. For more exotic or costly tasks, such as extensive fuzzing, generating documentation, or large-scale integration tests, we adopted a tiering approach. These types of tests are resource-intensive and often provide value even when run less frequently. Instead of scheduling them for every PR, they are triggered manually or on a periodic basis (e.g., nightly or weekly builds). This ensures that critical daily development workflows remain fast and efficient, while still providing comprehensive testing coverage for the project without incurring excessive resource usage on every minor change.

    Layering container images

    The tiering of tests gives us an idea which CI images are more commonly used in the pipeline. For those common CI images, we transitioned to using a more minimal base container image, such as fedora-minimal or debian:<flavor>-slim . This reduced the initial download size and the overall footprint of our build environment.

    For specialized tasks, such as generating documentation or running cross-compiled tests that require additional tools, we adopted a layering approach. Instead of building a monolithic image with all possible dependencies, we created dedicated, smaller images for these specific purposes and layered them on top of our minimal base image as needed within the CI pipeline. This modular approach ensures that only the necessary tools are present for each job, minimizing unnecessary overhead.

    De-duplicating build artifacts

    Historically, our CI pipelines involved many “configure && make” steps for various options. One of the major culprits of long build times is repeatedly compiling source code, oftentimes resulting in almost identical results.

    We realized that many of these compile-time options could be handled at runtime. By moving configurations that didn’t fundamentally alter the core compilation process to runtime, we simplified our build process and reduced the number of compilation steps required. This approach transforms a lengthy compile-time dependency into a quicker runtime check.

    Of course, this approach cuts both ways: while it simplifies the compilation process, it could increase the code size and attack surface. For example, support for legacy protocol features such as SSL 3.0 or SHA-1, which may lower overall security, should still be possible to switch off at compile time.

    Another caveat is that some compilation options are inherently incompatible with each other. One example is that thread sanitizer cannot be enabled with address sanitizer at the same time. In such cases a separate build artifact is still needed.

    The impact: tangible results

    The efforts put into optimizing our GitLab CI configuration yielded significant benefits:

    • The size of the container image used for our standard build jobs is now 2.5GB smaller than before. This substantial reduction in image size translates to faster job startup times and reduced storage consumption on our runners.
    • 9 “configure && make” steps were removed from our standard build jobs. This streamlined the build process and directly contributed to faster execution times.

    By implementing these strategies, we not only adapted to our reduced resources but also built a more efficient, cost-effective, and faster CI/CD pipeline for the GnuTLS project. These optimizations highlight that even small changes can lead to substantial improvements, especially in the context of open-source projects with limited resources.

    For further information on this, please consult the actual changes .

    Next steps

    While the current optimizations have significantly improved our CI efficiency, we are continuously exploring further enhancements. Our future plans include:

    • Distributed GitLab runners with external cache: To further scale and improve resource utilization, we are considering running GitLab runners on multiple VPS instances. To coordinate these distributed runners and avoid redundant data transfers, we could set up an external cache, potentially using a solution like MinIO . This would allow shared access to build artifacts, reducing bandwidth consumption and build times.
    • Addressing flaky tests: Flaky tests, which intermittently pass or fail without code changes, are a major bottleneck in any CI pipeline. They not only consume valuable CI resources by requiring entire jobs to be rerun but also erode developer confidence in the test suite. In TLS testing, it is common to write a test script that sets up a server and a client as a separate process, let the server bind a unique port to which the client connects, and instruct the client to initiate a certain event through a control channel. This kind of test could fail in many ways regardless of the test itself, e.g., the port might be already used by other tests. Therefore, rewriting tests without requiring a complex setup would be a good first step.

      Jonathan Blandford: GUADEC 2025: Thoughts and Reflections

      news.movim.eu / PlanetGnome • 2 August • 4 minutes

    Another year, another GUADEC. This was the 25th anniversary of the first GUADEC, and the 25th one I’ve gone to. Although there have been multiple bids for Italy during the past quarter century, this was the first successful one. It was definitely worth the wait, as it was one of the best GUADECs in recent memory.

    GUADEC’s 25th anniversary cake

    This was an extremely smooth conference — way smoother than previous years. The staff and volunteers really came through in a big way and did heroic work! I watched Deepesha, Asmit, Aryan, Maria, Zana, Kristi, Anisa, and especially Pietro all running around making this conference happen. I’m super grateful for their continued hard work in the project. GNOME couldn’t happen without their effort.

    La GNOME vita

    Favorite Talks

    I commented on some talks as they happened ( Day 1 , Day 2 , Day 3 ). I could only attend one track at a time so missed a lot of them, but the talks I saw were fabulous. They were much higher quality than usual this year, and I’m really impressed at this community’s creativity and knowledge. As I said, it really was a strong conference.

    I did an informal poll of assorted attendees I ran into on the streets of Brescia on the last night, asking what their favorite talks were. Here are the results:

    Whither Maintainers
    • Emmanuele’s talk on “Getting Things Done In GNOME”: This talk clearly struck a chord amongst attendees. He proposed a path forward on the technical governance of the project. It also had a “Whither Maintainers” slide that led to a lot of great conversations.
    • George’s talk on streaming: This was very personal, very brave, and extremely inspiring. I left the talk wanting to try my hand at live streaming my coding sessions, and I’m not the only one.
    • The poop talk, by Niels: This was a very entertaining lightning talk with a really important message at the end.
    • Enhancing Screen Reader Functionality in Modern Gnome by Lukas: Unfortunately, I was at the other track when this one happened so I don’t know much about it. I’ll have to go back and watch it! That being said, it was so inspiring to see how many people at GUADEC were working on accessibility, and how much progress has been made across the board. I’m in awe of everyone that works in this space.

    Honorable mentions

    In reality, there were many amazing talks beyond the ones I listed. I highly recommend you go back and see them. I know I’m planning on it!

    Crosswords at GUADEC

    Refactoring gnome-crosswords

    We didn’t have a Crosswords update talk this cycle. However we still had some appearances worth mentioning:

    • Federico gave a talk about how to use unidirectional programming to add tests to your application, and used Crosswords as his example. This is probably the 5th time one of the two of us has talked about this topic. This was the best one to date, though we keep giving it because we don’t think we’ve gotten the explanation right. It’s a complex architectural change which has a lot of nuance, and is hard for us to explain succinctly. Nevertheless, we keep trying, as we see how this could lead to a big revolution in the quality of GNOME applications. Crosswords is pushing 80KLOC, and this architecture is the only thing allowing us to keep it at a reasonable quality.
    • People keep thinking this is “just” MVC, or a minor variation thereof, but it’s different enough that it requires a new mindset, a disciplined approach, and a good data model. As an added twist, Federico sent an MR to GNOME Calendar to add initial unidirectional support to that application. If crosswords are too obscure a subject for you, then maybe the calendar section of the talk will help you understand it.
    • I gave a lightning talk about some of the awesome work that our GSoC (and prospective GSoC) students are doing.
    • I gave a BoF on Words. It was lightly attended, but led to a good conversation with the always-helpful Martin.
    • Finally, I sat down with Tobias to do a UX review of the crossword game, with an eye to getting it into GNOME Circle. This has been in the works for a long-time and I’m really grateful for the chance to do it in person. We identified many papercuts to fix of course, but Tobias was also able to provide a suggestion to improve a long-standing problem with the interface. We sketched out a potential redesign that I’m really happy with. I hope the Circle team is able to continue to do reviews across the GNOME ecosystem as it provides so much value.

    Personal

    A final comment: I’m reminded again of how much of a family the GNOME community is. For personal reasons, it was a very tough GUADEC for Zana and me. It was objectively a fabulous GUADEC and we really wanted to enjoy it, but couldn’t. We’re humbled at the support and love this community is capable of. Thank you.


      This Week in GNOME: #210 Periodic Updates

      news.movim.eu / PlanetGnome • 1 August • 4 minutes

    Update on what happened across the GNOME project in the week from July 25 to August 01.

    GNOME Core Apps and Libraries

    Libadwaita

    Building blocks for modern GNOME apps using GTK4.

    Alice (she/her) 🏳️‍⚧️🏳️‍🌈 announces

    last cycle, libadwaita gained a way to query the system document font, even though it was the same as the UI font. This cycle it has been made larger (12pt instead of 11pt) and we have a new .document style class that makes the specified widget use it (as well as increasing line height) - intended to be used for app content such as messages in a chat client.

    Meanwhile, the formerly useless .body style class also features increased line height now (along with .caption ) and can be used to make labels such as UI descriptions more legible compared to default styles. Some examples of where it’s already used:

    • Dialog body text
    • Preferences group description
    • Status page description
    • What’s new, legal and troubleshooting sections in about dialogs
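
    A rough illustration of how an app might opt in (assumed typical usage on my part, not from the announcement; the classes come from libadwaita's stylesheet, so this presumes a libadwaita application):

    #include <gtk/gtk.h>

    static GtkWidget *
    make_labels (void)
    {
      GtkWidget *box = gtk_box_new (GTK_ORIENTATION_VERTICAL, 6);

      GtkWidget *message = gtk_label_new ("A long chat message...");
      gtk_widget_add_css_class (message, "document");    /* document font, taller line height */

      GtkWidget *description = gtk_label_new ("Explains what this preference does.");
      gtk_widget_add_css_class (description, "body");    /* more legible description text */

      gtk_box_append (GTK_BOX (box), message);
      gtk_box_append (GTK_BOX (box), description);
      return box;
    }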

    GNOME Circle Apps and Libraries

    revisto announces

    Drum Machine 1.4.0 is out! 🥁

    Drum Machine, a GNOME Circle application, now supports more pages (bars) for longer beats, mobile improvements, and translations in 17 languages! You can try it out and get more creative with it.

    What’s new:

    • Extended pattern grid for longer, more complex rhythms
    • Mobile-responsive UI that adapts to different screen sizes
    • Global reach with translations including Farsi, Chinese, Russian, Arabic, Hebrew, and more

    If you have any suggestions and ideas, you can always contribute and make Drum Machine better, all ideas (even better/more default presets) are welcome!

    Available on Flathub: https://flathub.org/apps/io.github.revisto.drum-machine

    Happy drumming! 🎶


    Third Party Projects

    lo reports

    After a while of working on and off, I have finally released the first version of Nucleus, a periodic table app for searching and viewing various properties of the chemical elements!

    You can get it on Flathub: https://flathub.org/apps/page.codeberg.lo_vely.Nucleus


    francescocaracciolo says

    Newelle 1.0.0 has been released! Huge release for this AI assistant for GNOME.

    • 📱 Mini Apps support! Extensions can now show custom mini apps on the sidebar
    • 🌐 Added integrated browser Mini App: browse the web directly in Newelle and attach web pages
    • 📁 Improved integrated file manager , supporting multiple file operations
    • 👨‍💻 Integrated file editor : edit files and codeblocks directly in Newelle
    • 🖥 Integrated Terminal mini app: open the terminal directly in Newelle
    • 💬 Programmable prompts : add dynamic content to prompts with conditionals and random strings
    • ✍️ Add ability to manually edit chat name
    • 🪲 Minor bug fixes
    • 🚩 Added support for multiple languages for Kokoro TTS and Whisper.CPP
    • 💻 Run HTML/CSS/JS websites directly in the app
    • ✨ New animation on chat change

    Get it on FlatHub

    Fractal

    Matrix messaging app for GNOME written in Rust.

    Kévin Commaille announces

    Want to get a head start and try out Fractal 12 before its release? That’s what this Release Candidate is for! New since 12.beta:

    • The upcoming room version 12 is supported, with the special power level of room creators
    • Requesting invites to rooms (aka knocking) is now possible
    • Clicking on the name of the sender of a message adds a mention to them in the composer

    As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

    It is available to install via Flathub Beta, see the instructions in our README .

    As the version implies, it should be mostly stable and we expect to only include minor improvements until the release of Fractal 12.

    If you want to join the fun, you can try to fix one of our newcomers issues . We are always looking for new contributors!

    GNOME Websites

    Guillaume Bernard reports

    After months of work and testing, it’s now possible to connect to Damned Lies using third-party providers. You can now use your GNOME SSO account as well as other common providers used by translators: Fedora, Launchpad, GitHub and GitLab.com. Login/password authentication has been disabled, and email addresses are validated for security reasons.

    Meanwhile, under the hood, Damned Lies has been modernized: we upgraded to Fedora 42, which provides a fresher gettext (0.23), and we moved to Django 5.2 LTS and Python 3.13. Users can expect performance improvements, as we replaced Apache mod_wsgi with gunicorn, which is said to be more CPU and RAM efficient.

    Next step is working on async git pushes and merge requests support. Help is very welcome on these topics!

    Shell Extensions

    Just Perfection reports

    The GNOME Shell 49 port guide for extensions is ready! We are now accepting GNOME Shell 49 extension packages on EGO . Please join us on the GNOME Extensions Matrix Channel if you have any issues porting your extension. Also, thanks to Florian Müllner, gnome-extensions has added a new upload command for GNOME Shell 49, making it easier to upload your extensions to EGO. You can also use it with CI.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!