

      Debarshi Ray: Ollama on Fedora Silverblue

      news.movim.eu / PlanetGnome • 1 October • 7 minutes

    I found myself dealing with various rough edges and questions around running Ollama on Fedora Silverblue for the past few months. These arise from the fact that there are a few different ways of installing Ollama, /usr is a read-only mount point on Silverblue, people have different kinds of GPUs or none at all, the program that’s using Ollama might be a graphical application in a Flatpak or part of the operating system image, and so on. So, I thought I’d document a few different use-cases in one place for future reference, or in case someone else finds it useful.

    Different ways of installing Ollama

    There are at least three different ways of installing Ollama on Fedora Silverblue. Each of these has its own nuances and trade-offs, which we will explore later.

    First, there’s the popular single-command POSIX shell script installer:

    $ curl -fsSL https://ollama.com/install.sh | sh

    There is a manual step-by-step variant for those who are uncomfortable with running a script straight off the Internet. They both install Ollama in the operating system’s /usr/local or /usr or / prefix, depending on which one comes first in the PATH environment variable, and attempt to enable and activate a systemd service unit that runs ollama serve.

    Second, there’s a docker.io/ollama/ollama OCI image that can be used to put Ollama in a container. The container runs ollama serve by default.

    Finally, there’s Fedora’s ollama RPM.

    Surprise

    Astute readers might be wondering why I mentioned the shell script installer in the context of Fedora Silverblue at all, given that /usr is a read-only mount point. Won’t it break the script? Not really; or rather, the script does break, but not in the way one might expect.

    Even though /usr is read-only on Silverblue, /usr/local is not, because it’s a symbolic link to /var/usrlocal, and Fedora defaults to putting /usr/local/bin earlier in the PATH environment variable than the other prefixes that the installer attempts to use, as long as pkexec(1) isn’t being used. This happy coincidence allows the installer to place the Ollama binaries in their right places.

    The script does fail eventually when attempting to create the systemd service unit to run ollama serve , because it tries to create an ollama user with /usr/share/ollama as its home directory. However, this half-baked installation works surprisingly well as long as nobody is trying to use an AMD GPU.

    NVIDIA GPUs work if the proprietary driver and nvidia-smi(1) are present in the operating system, as provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion; and so does CPU fallback.
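
    On Silverblue, these host packages would typically be layered onto the OSTree image; roughly, assuming the RPM Fusion repositories are already enabled:

    $ sudo rpm-ostree install kmod-nvidia xorg-x11-drv-nvidia-cuda
    $ systemctl reboot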

    Unfortunately, the results would be the same if the shell script installer is used inside a Toolbx container. It will fail to create the systemd service unit because it can’t connect to the system-wide instance of systemd.

    Using AMD GPUs with Ollama is an important use-case. So, let’s see if we can do better than trying to manually work around the hurdles faced by the script.

    OCI image

    The docker.io/ollama/ollama OCI image requires the user to know what processing hardware they have or want to use. To use it only with the CPU without any GPU acceleration:

    $ podman run \
        --name ollama \
        --publish 11434:11434 \
        --rm \
        --security-opt label=disable \
        --volume ~/.ollama:/root/.ollama \
        docker.io/ollama/ollama:latest

    This will be used as the baseline to enable different kinds of GPUs. Port 11434 is the default port on which the Ollama server listens, and ~/.ollama is the default directory where it stores its SSH keys and artificial intelligence models.
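
    A quick way to check that the server came up is to poke its HTTP API, and models can be pulled and run through the same container; a rough sketch, where llama3.2 is just an example model name:

    $ curl http://localhost:11434/api/tags
    $ podman exec -it ollama ollama pull llama3.2
    $ podman exec -it ollama ollama run llama3.2 "Why is the sky blue?"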

    To enable NVIDIA GPUs, the proprietary driver and nvidia-smi(1) must be present on the host operating system, as provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion . The user space driver has to be injected into the container from the host using NVIDIA Container Toolkit , provided by the nvidia-container-toolkit package from Fedora, for Ollama to be able to use the GPUs.

    The first step is to generate a Container Device Interface (or CDI) specification for the user space driver:

    $ sudo nvidia-ctk cdi generate --output /etc/cdi/nvidia.yaml
    …
    …
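
    To confirm that the specification was generated, the toolkit can list the CDI devices it now knows about, which should include entries like nvidia.com/gpu=all:

    $ nvidia-ctk cdi list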

    Then the container needs to be run with access to the GPUs, by adding the --gpus option to the baseline command above:

    $ podman run \
        --gpus all \
        --name ollama \
        --publish 11434:11434 \
        --rm \
        --security-opt label=disable \
        --volume ~/.ollama:/root/.ollama \
        docker.io/ollama/ollama:latest
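
    If everything is wired up correctly, the driver utilities injected through CDI should be usable inside the container; a quick, optional check:

    $ podman exec -it ollama nvidia-smi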

    AMD GPUs don’t need the driver to be injected into the container from the host, because it can be bundled with the OCI image. Therefore, instead of generating a CDI specification for them, an image that bundles the driver must be used. This is done by using the rocm tag for the docker.io/ollama/ollama image.

    Then the container needs to be run with access to the GPUs. However, the --gpus option only works for NVIDIA GPUs, so the specific devices need to be spelled out by adding --device options to the baseline command above:

    $ podman run \
        --device /dev/dri \
        --device /dev/kfd \
        --name ollama \
        --publish 11434:11434 \
        --rm \
        --security-opt label=disable \
        --volume ~/.ollama:/root/.ollama \
        docker.io/ollama/ollama:rocm

    However, because of how AMD GPUs are programmed with ROCm , it’s possible that some decent GPUs might not be supported by the docker.io/ollama/ollama:rocm image. The ROCm compiler needs to explicitly support the GPU in question, and Ollama needs to be built with such a compiler. Unfortunately, the binaries in the image leave out support for some GPUs that would otherwise work. For example, my AMD Radeon RX 6700 XT isn’t supported.

    This can be verified with nvtop(1) in a Toolbx container. If there's no spike in the GPU and its memory, then it's not being used.
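
    Something along these lines should do, assuming a stock Fedora Toolbx container, since nvtop is available in the Fedora repositories:

    $ toolbox create
    $ toolbox enter
    $ sudo dnf install nvtop
    $ nvtop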

    It would be good to support as many AMD GPUs as possible with Ollama. So, let’s see if we can do better.

    Fedora’s ollama RPM

    Fedora offers a very capable ollama RPM, as far as AMD GPUs are concerned, because Fedora’s ROCm stack supports a lot more GPUs than other builds out there. It’s possible to check if a GPU is supported either by using the RPM and keeping an eye on nvtop(1) , or by comparing the name of the GPU shown by rocminfo with those listed in the rocm-rpm-macros RPM.

    For example, according to rocminfo , the name for my AMD Radeon RX 6700 XT is gfx1031 , which is listed in rocm-rpm-macros :

    $ rocminfo
    ROCk module is loaded
    =====================    
    HSA System Attributes    
    =====================    
    Runtime Version:         1.1
    Runtime Ext Version:     1.6
    System Timestamp Freq.:  1000.000000MHz
    Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
    Machine Model:           LARGE                              
    System Endianness:       LITTLE                             
    Mwaitx:                  DISABLED
    DMAbuf Support:          YES
    
    ==========               
    HSA Agents               
    ==========               
    *******                  
    Agent 1                  
    *******                  
      Name:                    AMD Ryzen 7 5800X 8-Core Processor 
      Uuid:                    CPU-XX                             
      Marketing Name:          AMD Ryzen 7 5800X 8-Core Processor 
      Vendor Name:             CPU                                
      Feature:                 None specified                     
      Profile:                 FULL_PROFILE                       
      Float Round Mode:        NEAR                               
      Max Queue Number:        0(0x0)                             
      Queue Min Size:          0(0x0)                             
      Queue Max Size:          0(0x0)                             
      Queue Type:              MULTI                              
      Node:                    0                                  
      Device Type:             CPU                                
    …
    …
    *******                  
    Agent 2                  
    *******                  
      Name:                    gfx1031                            
      Uuid:                    GPU-XX                             
      Marketing Name:          AMD Radeon RX 6700 XT              
      Vendor Name:             AMD                                
      Feature:                 KERNEL_DISPATCH                    
      Profile:                 BASE_PROFILE                       
      Float Round Mode:        NEAR                               
      Max Queue Number:        128(0x80)                          
      Queue Min Size:          64(0x40)                           
      Queue Max Size:          131072(0x20000)                    
      Queue Type:              MULTI                              
      Node:                    1                                  
      Device Type:             GPU
    …
    …

    The ollama RPM can be installed inside a Toolbx container, or it can be layered on top of the base registry.fedoraproject.org/fedora image to replace the docker.io/ollama/ollama:rocm image:

    FROM registry.fedoraproject.org/fedora:42
    RUN dnf --assumeyes upgrade
    RUN dnf --assumeyes install ollama
    RUN dnf clean all
    ENV OLLAMA_HOST=0.0.0.0:11434
    EXPOSE 11434
    ENTRYPOINT ["/usr/bin/ollama"]
    CMD ["serve"]
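
    A sketch of building and running this image in place of docker.io/ollama/ollama:rocm, reusing the device flags from the earlier command (localhost/ollama-fedora is just an arbitrary tag):

    $ podman build --file Containerfile --tag localhost/ollama-fedora .
    $ podman run \
        --device /dev/dri \
        --device /dev/kfd \
        --name ollama \
        --publish 11434:11434 \
        --rm \
        --security-opt label=disable \
        --volume ~/.ollama:/root/.ollama \
        localhost/ollama-fedora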

    Unfortunately, for obvious reasons, Fedora’s ollama RPM doesn’t support NVIDIA GPUs.

    Conclusion

    From the purist perspective of not touching the operating system’s OSTree image, and of being able to easily remove or upgrade Ollama, using an OCI container is the best option for using Ollama on Fedora Silverblue. Tools like Podman offer a suite of features to manage OCI containers and images that are far beyond what the POSIX shell script installer can hope to offer.

    It seems that the realities of GPUs from AMD and NVIDIA prevent the use of the same OCI image, if we want to maximize our hardware support, and force the use of slightly different Podman commands and associated set-up. We have to create our own image using Fedora’s ollama RPM for AMD, and the docker.io/ollama/ollama:latest image with NVIDIA Container Toolkit for NVIDIA.


      Hans de Goede: Fedora 43 will ship with FOSS Meteor, Lunar and Arrow Lake MIPI camera support

      news.movim.eu / PlanetGnome • 30 September • 1 minute

    Good news: the just-released 6.17 kernel has support for the IPU7 CSI2 receiver, and the missing USBIO drivers have recently landed in linux-next. I have backported the USBIO drivers + a few other camera fixes to the Fedora 6.17 kernel.

    I've also prepared an updated libcamera-0.5.2 Fedora package with support for IPU7 (Lunar Lake) CSI2 receivers, as well as a set of backported upstream SwStats and AGC fixes that address various crashes and the bad flicker MIPI camera users have been hitting with libcamera 0.5.2.

    Together these 2 updates should make Fedora 43's FOSS MIPI camera support work on most Meteor Lake, Lunar Lake and Arrow Lake laptops!

    If you want to give this a try, install / upgrade to Fedora 43 beta and install all updates. If you've installed rpmfusion's binary IPU6 stack, please run:

    sudo dnf remove akmod-intel-ipu6 'kmod-intel-ipu6*'

    to remove it, as it may interfere with the FOSS stack, and finally reboot. After that, please first try with qcam:

    sudo dnf install libcamera-qcam
    qcam

    which only tests libcamera. After that, give apps which use the camera through pipewire a try, like GNOME's "Camera" app (snapshot) or video-conferencing in Firefox.

    Note that snapshot on Lunar Lake triggers a bug in the LNL Vulkan code; to avoid this, start snapshot from a terminal with:

    GSK_RENDERER=gl snapshot

    If you have a MIPI camera which still does not work please file a bug following these instructions and drop me an email with the bugzilla link at hansg@kernel.org.


      Christian Hergert: Status Week 39

      news.movim.eu / PlanetGnome • 30 September • 1 minute

    It’s the time of year when the Oregon allergens have me laid out. Managed to get some stuff done while cranking up the air purifier.

    VTE

    • Work on GtkAccessibleHypertext implementation. Somewhat complicated to track persistent accessible objects for the hyperlinks but it seems like I have a direction to move forward.

    Ptyxis

    • Fix CSS causing too much padding in header bar (49.0)

    • Make --working-directory work properly with --new-window when you are also restoring a session.

    • Add support for cell-width control in preferences

    • Move an issue to mutter for more triage. Super+key doesn’t get delivered to the app until it’s pressed a second time. Not sure if this is by design or not.

    Foundry

    • Merge support for peel template in GTK/Adwaita

    • Lots of work on DAP support for FoundryDebugger

    • Some improvements to FOUNDRY_JSON_OBJECT_PARSE and related helper macros.

    • Handle failure to access session bus gracefully with flatpak build pipeline integration.

    • Helper to track changes to .git repository with file monitors.

    • Improve new SARIF support so we can get GCC diagnostics using their new GCC 16.0 feature set (still to land).

    • Lots of iteration on debugger API found by actually implementing the debugger support for GDB this time using DAP instead of MI2.

    • Blog post on how to write live-updating directory list models.

    • Merge FoundryAdw which will contain our IDE abstractions on top of libpanel to make it easy to create Builder like tooling.

    • Revive EggLine so I have a simple readline helper to implement a testing tool around the debugger API.

    • Rearrange some API on debuggers

    • Prototype a LLDB implementation to go along with GDB. Looks like it doesn’t implement thread events which is pretty annoying.

    • Fix issue with podman plugin over-zealously detecting its own work.

    Builder

    • Make help show up in active workspace instead of custom window so that it can be moved around.

    Libdex

    • Write some blog posts

    • Add an async g_unlink() wrapper

    • Add g_find_program_in_path() helper

    Manuals

    • Fix app-id when using devel builds

    • Merge support for new-window GAction

    Libpanel

    • Merge some improvements to the changes dialog

    Text Editor

    • Numerous bugs which boil down to “ibus not installed”

    D-Spy

    • Issue tracker chatter.


      Jussi Pakkanen: In C++ modules globally unique module names seem to be unavoidable, so let's use that fact for good instead of complexshittification

      news.movim.eu / PlanetGnome • 28 September • 4 minutes

    Writing out C++ module files and importing them is awfully complicated. The main cause for this complexity is that the C++ standard can not give requirements like "do not engage in Vogon-level stupidity, as that is not supported". As a result implementations have to support anything and everything under the sun. For module integration there are multiple different approaches ranging from custom on-the-fly generated JSON files (which neither Ninja nor Make can read so you need to spawn an extra process per file just to do the data conversion, but I digress) to custom on-the-fly spawned socket server daemons that do something. It's not really clear to me what.

    Instead of diving to that hole, let's instead approach the problem from first principles from the opposite side.

    The common setup

    A single project consists of a single source tree. It consists of a single executable E and a bunch of libraries L1 to L99, say. Some of those are internal to the project and some are external dependencies. For simplicity we assume that they are embedded as source within the parent project. All libraries are static and are all linked to the executable E.

    With a non-module setup each library can have its own header/source pair with file names like utils.hpp and utils.cpp . All of those can be built and linked in the same executable and, assuming their symbol names won't clash, work just fine. This is not only supported, but in fact quite common.

    What people actually want going forward

    The dream, then, is to convert everything to modules and have things work just as they used to.

    If all libraries were internal, it could be possible to enforce that the different util libraries get different module names. If they are external, you clearly can't. The name is whatever upstream chooses it to be. There are now two modules called utils in the build and it is the responsibility of someone (typically the build system, because no-one else seems to want to touch this) to ensure that the two module files are exposed to the correct compilation commands in the correct order.

    This is complex and difficult, but once you get it done, things should just work again. Right?

    That is what I thought too, but that is actually not the case. This very common setup does not work, and can not be made to work. You don't have to take my word for it, here is a quote from Jonathan Wakely :

    This is already IFNDR , and can cause standard ODR-like issues as the name of the module is used as the discriminator for module-linkage entities and the module initialiser function.  Of course that only applies if both these modules get linked into the same executable;

    IFNDR (ill-formed, no diagnostic required) is a technical term for "if this happens to you, sucks to be you". The code is broken and the compiler is allowed to do whatever it wants with it.

    What does it mean in practice?

    According to my interpretation of Jonathan's comment (which, granted, might be incorrect as I am not a compiler implementer) if you have an executable and you link into it any code that has multiple modules with the same name, the end result is broken. It does not matter how the same module names get in, the end result is broken. No matter how much you personally do not like this and think that it should not happen, it will happen and the end result is broken.

    At a higher level this means that this property forms a namespace. Not a C++ namespace, but a sort of virtual namespace. This contains all "generally available" code, which in practice means all open source library code. As that public code can be combined in arbitrary ways, if you want things to work, module names must be globally unique in that set of code (and also in every final executable). Any duplicates will break things in ways that can only be fixed by renaming all but one of the clashing modules.

    Globally unique module names are thus not a "recommendation", a "nice to have" or a "best practice". They are a technical requirement that comes directly from the compiler and the standard's definition.

    The silver lining

    If we accept this requirement and build things on top of it, things suddenly get a lot simpler. The build setup for modules reduces to the following for projects that build all of their own modules:

    • At the top of the build dir is a single directory for modules (GCC already does this, its directory is called gcm.cache )
    • All generated module files are written in that directory, as they all have unique names they can not clash
    • All module imports are done from that directory
    • Module mappers and all related complexity can be dropped to the floor and ignored

    Importing modules from the system might take some more work (maybe copy Fortran and have a -J flag for module search paths). However, at the time of writing, GCC and Clang module files are not stable and do not work between different compiler versions, or even when compiler flags differ between export and import. Thus prebuilt libraries can not be imported as modules from the system until that is fixed. AFAIK there is no timeline for when that will be implemented.

    So now you have two choices:

    1. Accept reality and implement a system that is simple, reliable and working.
    2. Reject reality and implement a system that is complicated, unreliable and broken.


      Michael Meeks: 2025-09-25 Thursday

      news.movim.eu / PlanetGnome • 25 September

    • Up early, mail, tech-planning call, tested latest CODE on share left & right. Lunch.
    • Published the next strip, on the excitement of delivering against a tender.
    • Sync with Lily.
    • Collabora Productivity is looking to hire a(nother) Open Source clueful UX Designer to further accelerate our UX research and improvements.

      Matthew Garrett: Investigating a forged PDF

      news.movim.eu / PlanetGnome • 24 September • 6 minutes

    I had to rent a house for a couple of months recently, which is long enough in California that it pushes you into proper tenant protection law. As landlords tend to do, they failed to return my security deposit within the 21 days required by law , having already failed to provide the required notification that I was entitled to an inspection before moving out. Cue some tedious argumentation with the letting agency, and eventually me threatening to take them to small claims court.

    This post is not about that.

    Now, under Californian law, the onus is on the landlord to hold and return the security deposit - the agency has no role in this. The only reason I was talking to them is that my lease didn't mention the name or address of the landlord (another legal violation , but the outcome is just that you get to serve the landlord via the agency). So it was a bit surprising when I received an email from the owner of the agency informing me that they did not hold the deposit and so were not liable - I already knew this.

    The odd bit about this, though, is that they sent me another copy of the contract, asserting that it made it clear that the landlord held the deposit. I read it, and instead found a clause reading "SECURITY: The security deposit will secure the performance of Tenant’s obligations. IER may, but will not be obligated to, apply all portions of said deposit on account of Tenant’s obligations. Any balance remaining upon termination will be returned to Tenant. Tenant will not have the right to apply the security deposit in payment of the last month’s rent. Security deposit held at IER Trust Account.", where IER is International Executive Rentals, the agency in question. Why send me a contract that says you hold the money while you're telling me you don't? And then I read further down and found this:
    Text reading: ENTIRE AGREEMENT: The foregoing constitutes the entire agreement between the parties and may be modified only in writing signed by all parties. This agreement and any modifications, including any photocopy or facsimile, may be signed in one or more counterparts, each of which will be deemed an original and all of which taken together will constitute one and the same instrument. The following exhibits, if checked, have been made a part of this Agreement before the parties’ execution: ۞ Exhibit 1: Lead-Based Paint Disclosure (Required by Law for Rental Property Built Prior to 1978) ۞ Addendum 1: The security deposit will be held by (name removed) and applied, refunded, or forfeited in accordance with the terms of this lease agreement.
    Ok, fair enough, there's an addendum that says the landlord has it (I've removed the landlord's name, it's present in the original).

    Except. I had no recollection of that addendum. I went back to the copy of the contract I had and discovered:
    The same text as the previous picture, but addendum 1 is empty
    Huh! But obviously I could just have edited that to remove it (there's no obvious reason for me to, but whatever), and then it'd be my word against theirs. However, I'd been sent the document via RightSignature , an online document signing platform, and they'd added a certification page that looked like this:
    A Signature Certificate, containing a bunch of data about the document including a checksum of the original
    Interestingly, the certificate page was identical in both documents, including the checksums, despite the content being different. So, how do I show which one is legitimate? You'd think given this certificate page this would be trivial, but RightSignature provides no documented mechanism whatsoever for anyone to verify any of the fields in the certificate, which is annoying but let's see what we can do anyway.

    First up, let's look at the PDF metadata. pdftk has a dump_data command that dumps the metadata in the document, including the creation date and the modification date. My file had both set to identical timestamps in June, both listed in UTC, corresponding to the time I'd signed the document. The file containing the addendum? The same creation time, but a modification time of this Monday, shortly before it was sent to me. This time, the modification timestamp was in Pacific Daylight Time, the timezone currently observed in California. In addition, the data included two ID fields, ID0 and ID1. In my document both were identical, in the one with the addendum ID0 matched mine but ID1 was different.

    These ID tags are intended to be some form of representation (such as a hash) of the document. ID0 is set when the document is created and should not be modified afterwards; ID1 is initially identical to ID0, but changes when the document is modified. This is intended to allow tooling to identify whether two documents are modified versions of the same document. The identical ID0 indicated that the document with the addendum was originally identical to mine, and the different ID1 that it had been modified.
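
    For reference, pulling those fields back out only needs pdftk; a rough sketch, with hypothetical file names for the two copies:

    $ pdftk original.pdf dump_data | grep -A1 -E 'CreationDate|ModDate|PdfID'
    $ pdftk with-addendum.pdf dump_data | grep -A1 -E 'CreationDate|ModDate|PdfID'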

    Well, ok, that seems like a pretty strong demonstration. I had the "I have a very particular set of skills" conversation with the agency and pointed these facts out, that they were an extremely strong indication that my copy was authentic and their one wasn't, and they responded that the document was "re-sealed" every time it was downloaded from RightSignature and that would explain the modifications. This doesn't seem plausible, but it's an argument. Let's go further.

    My next move was pdfalyzer , which allows you to pull a PDF apart into its component pieces. This revealed that the documents were identical, other than page 3, the one with the addendum. This page included tags entitled "touchUp_TextEdit", evidence that the page had been modified using Acrobat. But in itself, that doesn't prove anything - obviously it had been edited at some point to insert the landlord's name, it doesn't prove whether it happened before or after the signing.

    But in the process of editing, Acrobat appeared to have renamed all the font references on that page into a different format. Every other page had a consistent naming scheme for the fonts, and they matched the scheme in the page 3 I had. Again, that doesn't tell us whether the renaming happened before or after the signing. Or does it?

    You see, when I completed my signing, RightSignature inserted my name into the document, and did so using a font that wasn't otherwise present in the document (Courier, in this case). That font was named identically throughout the document, except on page 3, where it was named in the same manner as every other font that Acrobat had renamed. Given the font wasn't present in the document until after I'd signed it, this is proof that the page was edited after signing.

    But eh this is all very convoluted. Surely there's an easier way? Thankfully yes, although I hate it. RightSignature had sent me a link to view my signed copy of the document. When I went there it presented it to me as the original PDF with my signature overlaid on top. Hitting F12 gave me the network tab, and I could see a reference to a base.pdf . Downloading that gave me the original PDF, pre-signature. Running sha256sum on it gave me an identical hash to the "Original checksum" field. Needless to say, it did not contain the addendum.
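
    So, stripped of the narrative, the final check is something like this, with the file name taken from the network tab:

    $ sha256sum base.pdf

    The output can then be compared against the "Original checksum" field on the certificate page.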

    Why do this? The only explanation I can come up with (and I am obviously guessing here, I may be incorrect!) is that International Executive Rentals realised that they'd sent me a contract which could mean that they were liable for the return of my deposit, even though they'd already given it to my landlord, and after realising this added the addendum, sent it to me, and assumed that I just wouldn't notice (or that, if I did, I wouldn't be able to prove anything). In the process they went from an extremely unlikely possibility of having civil liability for a few thousand dollars (even if they were holding the deposit it's still the landlord's legal duty to return it, as far as I can tell) to doing something that looks extremely like forgery .

    There's a hilarious followup. After this happened, the agency offered to do a screenshare with me showing them logging into RightSignature and showing the signed file with the addendum, and then proceeded to do so. One minor problem - the "Send for signature" button was still there, just below a field saying "Uploaded: 09/22/25". I asked them to search for my name, and it popped up two hits - one marked draft, one marked completed. The one marked completed? Didn't contain the addendum.


      Sebastian Wick: XDG Intents Updates

      news.movim.eu / PlanetGnome • 24 September • 4 minutes

    Andy Holmes wrote an excellent overview of XDG Intents in his “Best Intentions” blog post, covering the foundational concepts and early proposals. Unfortunately, due to GNOME Foundation issues, this work never fully materialized. As I have been running into more and more cases where this would provide a useful primitive for other features, I tried to continue the work.

    The specifications have evolved as I worked on implementing them in glib, desktop-file-utils and ptyxis. Here’s what’s changed:

    Intent-Apps Specification

    Andy showed this hypothetical syntax for scoped preferences:

    [Default Applications]
    org.freedesktop.Thumbnailer=org.gimp.GIMP
    org.freedesktop.Thumbnailer[image/svg+xml]=org.gnome.Loupe;org.gimp.GIMP
    

    We now use separate groups instead:

    [Default Applications]
    org.freedesktop.Thumbnailer=org.gimp.GIMP
    
    [org.freedesktop.Thumbnailer]
    image/svg+xml=org.gnome.Loupe;org.gimp.GIMP
    

    This approach creates a dedicated group for each intent, with keys representing the scopes. This way, we do not have to abuse the square brackets which were meant for translatable keys and allow only very limited values.

    The updated specification also adds support for intent.cache files to improve performance, containing up-to-date lists of applications supporting particular intents and scopes. This is very similar to the already existing cache for MIME types. The update-desktop-database tool is responsible for keeping the cache up-to-date.
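
    With a desktop-file-utils build that contains this change, regenerating the cache should be a matter of pointing the tool at a directory of desktop entries, for example:

    $ update-desktop-database ~/.local/share/applications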

    This is implemented in glib!4797 , desktop-file-utils!27 , and the updated specification is in xdg-specs!106 .

    Terminal Intent Specification

    While Andy mentioned the terminal intent as a use case, Zander Brown tried to upstream the intent in xdg-specs!46 multiple years ago. However, because it depended on the intent-apps specification, it unfortunately never went anywhere. With the fleshed-out version of the intent-apps specification, and an implementation in glib, I was able to implement the terminal-intent specification in glib as well. With some help from Christian, we also added support for the intent in the ptyxis terminal.

    This revealed some shortcomings in the proposed D-Bus interface. In particular, when a desktop file gets activated with multiple URIs, and the Exec line in the desktop entry only indicates support for a limited number of URIs, multiple commands need to be launched. To support opening those commands in a single window but in multiple tabs in the terminal emulator, for example, those multiple commands must be part of a single D-Bus method call. The resulting D-Bus interface looks like this:

    <interface name="org.freedesktop.Terminal1">
      <method name="LaunchCommand">
        <arg type='aa{sv}' name='commands' direction='in' />
        <arg type='ay' name='desktop_entry' direction='in' />
        <arg type='a{sv}' name='options' direction='in' />
        <arg type='a{sv}' name='platform_data' direction='in' />
      </method>
    </interface>
    

    This is implemented in glib!4797 , ptyxis!119 and the updated specification is in xdg-specs!107 .

    Deeplink Intent

    Andy’s post discussed a generic “org.freedesktop.UriHandler” with this example:

    [org.freedesktop.UriHandler]
    Supports=wise.com;
    Patterns=https://*.wise.com/link?urn=urn%3Awise%3Atransfers;
    

    The updated specification introduces a specific org.freedesktop.handler.Deeplink1 intent where the scheme is implicitly http or https and the host comes from the scope (i.e., the Supports part). The pattern matching is done on the path alone:

    [org.freedesktop.handler.Deeplink1]
    Supports=example.org;extensions.gnome.org
    example.org=/login;/test/a?a
    extensions.gnome.org=/extension/*/*/install;/extension/*/*/uninstall
    

    This allows us to focus on deeplinking alone and allows the user to set the order of handlers for specific hosts.

    In this example, the app would handle the URIs http://example.org/login , http://example.org/test/aba , http://extensions.gnome.org/extension/123456/BestExtension/install and so on.

    There is a draft implementation in glib!4833 and the specification is in xdg-specs!109 .

    Deeplinking Issues and Other Handlers

    I am still unsure about the Deeplink1 intent. Does it make sense to allow schemes other than http and https ? If yes, how should the priority of applications be determined when opening a URI? How complex does the pattern matching need to be?

    Similarly, should we add an org.freedesktop.handler.Scheme1 intent? We currently abuse MIME handlers for this, so it seems like a good idea, but then we need to take backwards compatibility into account. Maybe we can modify update-desktop-database to add entries from org.freedesktop.handler.Scheme1 to mimeapps.list for that?

    If we go down that route, is there a reason not to also do the same for MIME handlers and add an org.freedesktop.handler.Mime1 intent for that purpose with the same backwards compatibility mechanism?

    Deeplinking to App Locations

    While working on this, I noticed that we are not great at allowing linking to locations in our apps. For example, most email clients do not have a way to link to a specific email. Most calendars do not allow referencing a specific event. Some apps do support this. For example, Zotero allows linking to items in the app with URIs of the form zotero://select/items/0_USN95MJC .

    Maybe we can improve on this? If all our apps used a consistent scheme and queries (for example xdg-app-org.example.appid:/some/path/in/the/app?name=Example ), we could render those links differently and finally have a nice way to link to an email in our calendar.

    This definitely needs more thought, but I do like the idea.

    Security Considerations

    Allowing apps to describe more thoroughly which URIs they can handle is great, but we also live in a world where security has to be taken into account. If an app wants to handle the URI https://bank.example.org , we better be sure that this app actually is the correct banking app. This unfortunately is not a trivial issue, so I will leave it for the next time.


      Sam Thursfield: Status update, 22/09/2025

      news.movim.eu / PlanetGnome • 22 September • 6 minutes

    For the first time in many years I can talk publicly about what I’m doing at work: a short engagement funded by Endless and Codethink to rebuild Endless OS as a GNOME OS derivative, instead of a Debian derivative.

    There is nothing wrong with Debian, of course, just that today GNOME OS aligns more closely with the direction the Endless OS team want to go in. A lot of the innovations from earlier versions of Endless OS over the last decade were copied and re-used in GNOME OS, so in a sense this is work coming full circle.

    I’ll tell you a bit more about the project but first I have a rant about complexity.

    Complexity

    I work for a consultancy and the way consultancy projects work is like this: you agree what the work is, you estimate how long the work will take, you agree a budget, and then you do the work.

    The problem with this approach is that in software engineering, most of your work is research. Endless OS is the work of thousands of different people, and hundreds of millions of lines of code. We reason and communicate about the code using abstractions, and there are hundreds of millions of abstractions too.

    If you ask me “how long will it take to change this thing in that abstraction over there”, I can research those abstractions and come up with an estimate for the job. How long to change a lightbulb? How long to rename a variable? How long to add an option in this command line tool ? Some hours of work.

    Most real-world tasks involve many abstractions and, by the time you’ve researched them all, you’ve done 90% of the work. How long to port this app to Gtk4? How long to implement this new optimization in GCC? How long to write a driver for this new USB beard trimmer device? Some months or years of work.

    And then you have projects where it’s not even possible to research the related abstractions. So much changed between Debian 12 and GNOME OS 48 that you’d spend a year just writing a comprehensive changelog. So, how can you possibly estimate the work involved when you can’t know in advance what the work is?

    Of course, you can’t, you can only start and see what happens.

    But, allocating people to projects in a consultancy business is also a hard problem. You need to know project start and end dates because you are lining up more projects in advance, and your clients want to know when their work will start.

    So for projects involving such a huge number of abstractions, we have to effectively make up a number and hope for the best. When people say things like “try to do the best estimation you can”, it’s a bit like saying “try to count the sand on this beach as best as you can”.

    Another difficulty is around finding people who know the right abstractions. If you’re adding a feature to a program written in Rust, management won’t assign someone who never touched Rust before. If they do, you can ask for extra time to learn some Rust as part of the project. (Although since software is largely a cowboy industry, there are always managers who will tell you to just learn by doing.)

    But what abstractions do you need to know for OS development and integration? These projects can be harder than programming work, because the abstractions involved are larger, more complicated and more numerous. If you can code in C, can you be a Linux integrator? I don’t know, but can a bus driver fly a helicopter?

    If a project is so complex that you can’t predict in advance which abstractions are going to be problematic and which ones you won’t need to touch, then even if you wanted to include teaching time in your estimation you’ll need a crystal ball to know how much time the work will take.

    For this project, my knowledge of BuildStream and Freedesktop SDK is proving valuable. There’s a good reference manual for BuildStream, but no tutorials on how to use it for OS development. How do we expect people to learn it? Have we solved anything by introducing new abstractions that aren’t widely understood — even if they’re genuinely better in some use cases?

    Endless OS 7

    Given that I’ve started with a rant, you might ask how the project is going. Actually, there’s been quite good progress. Endless OS 7 exists; it’s being built and pushed as an ostree from eos-build-meta to Endless’ ostree server. You can install it as an update to eos6 if you like to live dangerously; see the “Switch master” documentation. (You can probably install it on other ostree-based systems if you like to live really dangerously, but I’m not going to tell you how.) I have it running on an IBM Thinkpad laptop. Actually my first time testing any GNOME OS derivative on hardware!

    Thinkpad P14s running Endless OS 7

    For a multitude of reasons the work has been more stressful than it needed to be, but I’m optimistic for a successful outcome. (Where success means, we don’t give up and decide the Debian base was easier after all). I think GNOME OS and Endless OS will both benefit from closer integration.

    The tooling is working well for me: reliability and repeatability were core principles when BuildStream was being designed, and it shows. Once you learn it you can do integration work fast. You don’t get flaky builds. I’ve never deleted my cache to fix a weird problem. It’s an advanced tool, and in some ways it’s less flexible than its friends in the integration tool world, but it’s a really good way to build an operating system.

    I’ve learned a bunch about some important new abstractions on this project too. UEFI and Secure Boot . The systemd-sysusers service and userdb. Dracut and initramfs debugging .

    I haven’t been able to contribute any effort upstream to GNOME OS so far. I did contribute some documentation comments to Freedesktop SDK, and I’m trying to at least document Endless OS 7 as clearly as I can. Nobody has ever had much time to document how GNOME OS is built or tested; hopefully the documentation in eos-build-meta is a useful step forwards for GNOME OS as well.

    As always the GNOME OS community are super helpful. I’m sure it’s a big part of the success of GNOME OS that Valentín is so helpful whenever things break. I’m also privileged to be working with the highly talented engineers at Endless who built all this stuff.

    Abstractions

    Broadly, the software industry is fucked as long as we keep making an infinite number of new abstractions. I haven’t had a particularly good time on any project since I returned to software engineering five years ago, and I suspect it’s because we just can’t control the complexity enough to reason properly about what we are doing.

    This complexity is starting to inconvenience billionaires. In the UK the entire car industry has been stopped for weeks because system owners didn’t understand their work well enough to do a good job of securing systems. I wonder if it’s going to occur to them eventually that simplification is the best route to security. Capitalism doesn’t tend to reward that way of thinking — but it can reward anything that gives you a business advantage.

    I suppose computing abstractions are like living things, with a tendency to boundlessly multiply until they reach some natural limit, or destroy their habitat entirely. Maybe the last year of continual security breaches could be that natural limit. If your system is too complex for anyone to keep it secure, then your system is going to fail.


      Christian Hergert: Directory Listings with Foundry

      news.movim.eu / PlanetGnome • 21 September • 3 minutes

    I took a different approach to directory listings in Foundry. They use GListModel as the interface, but behind the scenes they are implemented with futures and fibers.

    A primary use case for a directory listing is the project tree of an IDE. Since we use GtkListView for efficient trees in GTK, we expose a GListModel. Each item in the directory is represented as a FoundryDirectoryItem, which acts just a bit like a specialized GFileInfo. It contains information about all the attributes requested in the directory listing.

    You can also request some other information that is not traditionally available via GFileInfo . You can request attributes that will be populated by the version control system such as if the file is modified or should be ignored.

    Use a GtkFilterListModel to look at specific properties or attributes. Sort them quickly using GtkSortListModel. All of this makes implementing a project tree browser very straightforward.

    Reining in the Complexity

    One of the most complex things in writing a directory listing is managing updates to the directory. To keep some level of correctness here, Foundry does it with a fiber in the following order:

    • Start a file monitor on the directory and start queuing up changes
    • Enumerate children in the directory and add to internal list
    • Start processing monitor events, starting with the backlog

    Performance

    There are a couple tricks to balancing the performance of new items being added and items being removed. We want both to be similarly well performing.

    To do this, new items are always placed at the end of the list. We don’t care about sorting here because that will be dealt with at a higher layer (in the GtkSortListModel ). That keeps adding new items quite fast because we can quickly access the end of the list. This saves us a lot of time sorting the tree just for the removal.

    But if you aren’t sorting the items, how do you make removals quick? Doesn’t that become O(n) ?

    Well not quite. If we keep a secondary index (in this case a simple GHashTable ) then we can store a key (the file’s name) which points to a stable pointer (a GSequenceIter ). That lookup is O(1) and the removal from a GSequence on average are O(log n) .

    Big O notation is often meaningless when you’re talking about different systems. So let’s be real for a moment. Your callback from GListModel::items-changed() can have a huge impact on performance compared to the internal data structures here.

    Reference Counting and Fibers

    When doing this with a fiber, care must be taken to avoid over-referencing the object, or you risk it never being disposed/finalized.

    One way out of that mess is to use a GWeakRef and only request the object when it is truly necessary. That way, you can make your fiber cancel when the directory listing is disposed. In turn, your monitor and other resources are cleaned up automatically.

    I do this by creating a future that will resolve when there is more work to be done. Then I release my self object followed by awaiting the future. At that point I’ll either resolve or reject from an error (including the fiber being cancelled).

    If I need self again because I got an event, it’s just a g_weak_ref_get() away.

    Future-based File Monitors

    I’m not a huge fan of “signal based” APIs like GFileMonitor so internally FoundryFileMonitor does something a bit different. It still uses GFileMonitor internally for outstanding portability, but the exposed API uses DexFuture .

    This allows you to ask for the next event when you want it (and it may already be queued). Call foundry_file_monitor_next() to get the next event; the future will resolve when an event has been received. This makes more complex awaiting operations feasible too (such as monitoring multiple directories for the next change).

    The relevant code is in foundry-directory-listing.c, foundry-directory-item.c, and foundry-file-monitor.c.