
      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 2 months ago • 3 minutes


    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, then the chip failed and the Wi-Fi was gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

    Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." And you'd be completely wrong. To me it seemed normal, until my sister put it plainly: "who even thought JSON for such a basic thing is a good idea? who'd even write one?" She was right. I knew it, so fixing this was always on the roadmap, and 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.

    Open forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

    If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you prefer editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.

    Also, thanks to Felipe and everyone else who shared great ideas about improving maintainability. JSON might become technical debt in the future, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

    See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
    • Maybe and just maybe an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub

    • Pl chevron_right

      Matthias Klumpp: Hello old new “Projects” directory!

      news.movim.eu / PlanetGnome • 1 day ago • 2 minutes

    If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: “Projects”

    Why?

    With the recent 0.20 release of xdg-user-dirs we enabled the “Projects” directory by default. Support for it has existed since 2007, but it was never formally enabled. This closes a more than 11-year-old bug report that asked for the feature.

    The purpose of the Projects directory is to give applications a default location for project files that do not cleanly belong in one of the existing categories (Documents, Music, Pictures, Videos). Examples include software engineering projects, scientific projects, 3D printing projects, CAD designs, or even video editing projects, where the project files would end up in the “Projects” directory while the output video is more at home in “Videos”.

    By enabling this by default, and subsequently adding support to GLib, Flatpak, desktops and applications over the coming months, we hope to give applications that operate in a “project-centric” manner on mixed media a better default storage location. As of now, those tools either default to the home directory or clutter the “Documents” folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.

    This sucks, I don’t like it!

    As usual, you are in control and can modify your system’s behavior. If you do not like the “Projects” folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
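    For illustration, the override file uses shell-style assignments where paths must start with $HOME/ or be absolute. A minimal sketch, assuming the new entry follows the usual XDG_<NAME>_DIR naming pattern:

```shell
# ~/.config/user-dirs.dirs (sketch) -- entry name assumed to follow
# the usual XDG_<NAME>_DIR pattern used by xdg-user-dirs
XDG_PROJECTS_DIR="$HOME/Projects"

# Pointing an entry at "$HOME/" disables that directory:
# XDG_PROJECTS_DIR="$HOME/"
```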

    If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).

    What else is new?

    Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the “arbitrary code execution from unsanitized input” bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.

    Thanks to everyone who contributed to this release!


      Allan Day: GNOME Foundation Update, 2026-04-17

      news.movim.eu / PlanetGnome • 2 days ago • 4 minutes

    Welcome to another update about everything that’s been happening at the GNOME Foundation. It’s been four weeks since my last post, due to a vacation and public holidays, so there’s lots to cover. This period included a major announcement, but there’s also been a lot of other notable work behind the scenes.

    Fellowship & Fundraising

    The really big news from the last four weeks was the launch of our new Fellowship program. This is something that the Board has been discussing for quite some time, so we were thrilled to be able to make the program a reality. We are optimistic that it will make a significant difference to the GNOME project.

    If you didn’t see it already, check out the announcement for details. Also, if you want to apply to be our first Fellow, you have just three days until the application deadline on 20th April!

    donate.gnome.org has been a great success for the GNOME Foundation, and it is only through the support of our existing donors that the Fellowship was possible. Despite these amazing contributions, the GNOME Foundation needs to grow our donations if we are going to be able to support future Fellowship rounds while simultaneously sustaining the organisation.

    To this end, work is underway to build up our marketing and fundraising capacity. This is primarily taking place in the GNOME Engagement Team, and we would love help from the community to boost our outbound comms. If you are interested, please join the Engagement space and look out for announcements.

    Also, if you haven’t already, and are able to do so: please donate!

    Conferences

    We have two major events coming up, with Linux App Summit in May and GUADEC in July, so right now is a busy time for conferences.

    The schedules for both of these upcoming events are currently being worked on, and arrangements for catering, photographers, and audio visual services are all in the process of being finalized.

    The Travel Committee has also been busy handling GUADEC travel requests, and has sent out the first batch of approvals. There are some budget pressures right now due to rising flight prices, but budget has been put aside for more GUADEC travel, so please apply if you want to attend and need support.

    April 2026 Board Meeting

    This week was the Board’s regular monthly meeting for April. Highlights from the meeting included:

    • I gave a general report on the Foundation’s activities, and we discussed progress on programs and initiatives, including the new Fellowship program and fundraising.
    • Deepa gave a finance report for October to December 2025.
    • Andrea Veri joined us to give an update on the Membership & Elections Committee, as well as the Infrastructure team. Andrea has been doing this work for a long time and has been instrumental in helping to keep the Foundation running, so this was a great opportunity to thank him for his work.
    • One key takeaway from this month’s discussion was the very high level of support that GNOME receives from our infrastructure partners, particularly AWS and also Fastly. We are hugely appreciative of this support, which represents a major financial contribution to GNOME, and want to make sure that these partners get positive exposure from us and feel appreciated.
    • We reviewed the timeline for the upcoming 2026 board elections, which we are tweaking a little this year in order to ensure that there is an opportunity to discuss every candidacy, and to reduce unnecessary delay in announcing the final results.

    Infrastructure

    As usual, plenty has been happening on the infrastructure side over the past month. This has included:

    • Ongoing work to tune our Fastly configuration and managing the resource usage of GNOME’s infra.
    • Deployment of a LiberaForms instance on GNOME infrastructure. This is hooked up to GNOME’s SSO, so is available to anyone with an account who wants to use it – just head over to forms.gnome.org to give it a try.
    • Changes to the Foundation’s internal email setup, to allow easier management of the generic contact email addresses, as well as better organisation of the role-based email addresses that we have.
    • New translation support for donate.gnome.org.
    • Ongoing work in Flathub, around OAuth and flat-manager.

    Admin & Finance

    On the accounting side, the team has been busy catching up on regular work that was put to one side during last month’s audit. This caused some significant delays to our accounting processes, but we are now almost up to date.

    Reorganisation of many of our finance processes has also continued over the past four weeks. Progress has included a new structure and cadence for our internal accounting calls, continued configuration of our new payments platform, and new forms for handling reimbursement requests.

    Finally, we have officially kicked off the process of migrating to our new physical mail service. Work on this is ongoing and will take some time to complete. Our new address is on the website, if anyone needs it.

    That’s it for this report! Thanks for reading, and feel free to use the comments if you have questions!


      Andrea Veri: GNOME GitLab Git traffic caching

      news.movim.eu / PlanetGnome • 2 days ago • 15 minutes

    Introduction

    One of the most visible signs that GNOME’s infrastructure has grown over the years is the amount of CI traffic that flows through gitlab.gnome.org on any given day. Hundreds of pipelines run in parallel, most of them starting with a git clone or git fetch of the same repository, often at the same commit. All that traffic was landing directly on GitLab’s webservice pods, generating redundant load for work that was essentially identical.

    GNOME’s infrastructure runs on AWS, which generously provides credits to the project. Even so, data transfer is one of the largest cost drivers we face, and we have to operate within a defined budget regardless of those credits. The bandwidth costs associated with this Git traffic grew significant enough that for a period of time we redirected unauthenticated HTTPS Git pulls to our GitHub mirrors as a short-term cost mitigation. That measure bought us some breathing room, but it was never meant to be permanent: sending users to a third-party platform for what is essentially a core infrastructure operation is not a position we wanted to stay in. The goal was always to find a proper solution on our own infrastructure.

    This post documents the caching layer we built to address that problem. The solution sits between the client and GitLab, intercepts Git fetch traffic, and routes it through Fastly’s CDN so that repeated fetches of the same content are served from cache rather than generating a fresh pack every time.

    The problem

    The Git smart HTTP protocol uses two endpoints: info/refs for capability advertisement and ref discovery, and git-upload-pack for the actual pack generation. The second one is the expensive one. When a CI job runs git fetch origin main, GitLab has to compute and send the entire pack for that fetch negotiation. If ten jobs run the same fetch within a short window, GitLab does that work ten times.

    The tricky part is that git-upload-pack is a POST request with a binary body that encodes what the client already has (have lines) and what it wants (want lines). Traditional HTTP caches ignore POST bodies entirely. Building a cache that actually understands those bodies and deduplicates identical fetches requires some work at the edge.

    For a fresh clone the body contains only want lines — one per ref the client is requesting:

    0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78
    0032want 93e944c9f728a4b9da506e622592e4e3688a805c
    0032want ef2cbad5843a607236b45e5f50fa4318e0580e04
    ...
    

    For an incremental fetch the body is a mix of want lines (what the client needs) and have lines (commits the client already has locally), which the server uses to compute the smallest possible packfile delta:

    00a4want 51a117587524cbdd59e43567e6cbd5a76e6a39ff
    0000
    0032have 8282cff4b31dce12e100d4d6c78d30b1f4689dd3
    0032have be83e3dae8265fdc4c91f11d5778b20ceb4e2479
    0032have 7d46abdf9c5a3f119f645c8de6d87efffe3889b8
    ...
    

    The leading four hex characters on each line are the pkt-line length prefix. The server walks back through history from the wanted commits until it finds a common ancestor with the have set, then packages everything in between into a packfile. Two CI jobs running the same pipeline at the same commit will produce byte-for-byte identical request bodies and therefore identical responses — exactly the property a cache can help with.
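    Those pkt-lines can be decoded mechanically. As a rough sketch (Python for illustration; this is not code from the actual setup), a parser that extracts the want and have sets from a negotiation body might look like:

```python
def parse_negotiation(body: bytes):
    """Split a git-upload-pack negotiation body into want/have SHA lists.

    Each pkt-line starts with a 4-hex-digit length prefix that covers
    the whole line, prefix included; "0000" is a flush-pkt without a
    payload that delimits sections of the negotiation.
    """
    wants, haves = [], []
    i = 0
    while i < len(body):
        length = int(body[i:i + 4], 16)
        if length == 0:  # flush-pkt: advance past the 4-byte marker
            i += 4
            continue
        payload = body[i + 4:i + length].rstrip(b"\n")
        if payload.startswith(b"want "):
            wants.append(payload[5:45].decode())  # 40-hex SHA after "want "
        elif payload.startswith(b"have "):
            haves.append(payload[5:45].decode())
        i += length
    return wants, haves
```

    Two clients negotiating the same commit with the same local state produce byte-identical bodies, which is what makes hashing the body a safe cache key.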

    Architecture overview

    The overall setup involves four components:

    • OpenResty (Nginx + LuaJIT) running as a reverse proxy in front of GitLab’s webservice
    • Fastly acting as the CDN, with custom VCL to handle the non-standard caching behaviour
    • Valkey (a Redis-compatible store) holding the denylist of private repositories
    • gitlab-git-cache-webhook , a small Python/FastAPI service that keeps the denylist in sync with GitLab
    flowchart TD
    client["Git client / CI runner"]
    gitlab_gnome["gitlab.gnome.org (Nginx reverse proxy)"]
    nginx["OpenResty Nginx"]
    lua["Lua: git_upload_pack.lua"]
    cdn_origin["/cdn-origin internal location"]
    fastly_cdn["Fastly CDN"]
    origin["gitlab.gnome.org via its origin (second pass)"]
    gitlab["GitLab webservice"]
    valkey["Valkey denylist"]
    webhook["gitlab-git-cache-webhook"]
    gitlab_events["GitLab project events"]
    client --> gitlab_gnome
    gitlab_gnome --> nginx
    nginx --> lua
    lua -- "check denylist" --> valkey
    lua -- "private repo: BYPASS" --> gitlab
    lua -- "public/internal: internal redirect" --> cdn_origin
    cdn_origin --> fastly_cdn
    fastly_cdn -- "HIT" --> cdn_origin
    fastly_cdn -- "MISS: origin fetch" --> origin
    origin --> gitlab
    gitlab_events --> webhook
    webhook -- "SET/DEL git:deny:" --> valkey
    

    The request path for a public or internal repository looks like this:

    1. The Git client runs git fetch or git clone . Git’s smart HTTP protocol translates this into two HTTP requests: a GET /Namespace/Project.git/info/refs?service=git-upload-pack for ref discovery, followed by a POST /Namespace/Project.git/git-upload-pack carrying the negotiation body. It is that second request — the expensive pack-generating one — that the cache targets.
    2. It arrives at gitlab.gnome.org ’s Nginx server, which acts as the reverse proxy in front of GitLab’s webservice.
    3. The git-upload-pack location runs a Lua script that parses the repo path, reads the request body, and SHA256-hashes it. The hash is the foundation of the cache key: because the body encodes the exact set of want and have SHAs the client is negotiating, two jobs fetching the same commit from the same repository will produce byte-for-byte identical bodies and therefore the same hash — making the cached packfile safe to reuse.
    4. Lua checks Valkey: is this repo in the denylist? If yes, the request is proxied directly to GitLab with no caching.
    5. For public/internal repos, Lua strips the Authorization header, builds a cache key, converts the POST to a GET , and does an internal redirect to /cdn-origin . The POST-to-GET conversion is necessary because Fastly does not apply consistent hashing to POST requests — each of the hundreds of nodes within a POP maintains its own independent cache storage, so the same POST request hitting different nodes will always be a miss. By converting to a GET, Fastly’s consistent hashing kicks in and routes requests with the same cache key to the same node, which means the cache is actually shared across all concurrent jobs hitting that POP.
    6. The /cdn-origin location proxies to the Fastly git cache CDN with the X-Git-Cache-Key header set.
    7. Fastly’s VCL sees the key and does a cache lookup. On a HIT it returns the cached pack. On a MISS it fetches from gitlab.gnome.org directly via its origin (bypassing the CDN to avoid a loop) — the same Nginx instance — and caches the response for 30 days.
    8. On that second pass (origin fetch), Nginx detects the X-Git-Cache-Internal header, decodes the original POST body from X-Git-Original-Body , restores the request method, and proxies to GitLab.

    The Nginx and Lua layer

    The Nginx configuration exposes two relevant locations. The first is the internal one used for the CDN proxy leg:

    location ^~ /cdn-origin/ {
     internal;
     rewrite ^/cdn-origin(/.*)$ $1 break;
     proxy_pass $cdn_upstream;
     proxy_ssl_server_name on;
     proxy_ssl_name <cdn-hostname>;
     proxy_set_header Host <cdn-hostname>;
     proxy_set_header Accept-Encoding "";
     proxy_http_version 1.1;
     proxy_buffering on;
     proxy_request_buffering on;
     proxy_connect_timeout 10s;
     proxy_send_timeout 60s;
     proxy_read_timeout 60s;
    
     header_filter_by_lua_block {
     ngx.header["X-Git-Cache-Key"] = ngx.req.get_headers()["X-Git-Cache-Key"]
     ngx.header["X-Git-Body-Hash"] = ngx.req.get_headers()["X-Git-Body-Hash"]
    
     local xcache = ngx.header["X-Cache"] or ""
     if xcache:find("HIT") then
     ngx.header["X-Git-Cache-Status"] = "HIT"
     else
     ngx.header["X-Git-Cache-Status"] = "MISS"
     end
     }
    }
    

    The header_filter_by_lua_block here is doing something specific: it reads X-Cache from the response Fastly returns and translates it into a clean X-Git-Cache-Status header for observability. The X-Git-Cache-Key and X-Git-Body-Hash are also passed through so that callers can see what cache entry was involved.

    The second location is git-upload-pack itself, which delegates all the logic to a Lua file:

    location ~ /git-upload-pack$ {
     client_body_buffer_size 5m;
     client_max_body_size 5m;
    
     access_by_lua_file /etc/nginx/lua/git_upload_pack.lua;
    
     header_filter_by_lua_block {
     local key = ngx.req.get_headers()["X-Git-Cache-Key"]
     if key then
     ngx.header["X-Git-Cache-Key"] = key
     end
     }
    
     proxy_pass http://gitlab-webservice;
     proxy_http_version 1.1;
     proxy_set_header Host gitlab.gnome.org;
     proxy_set_header X-Real-IP $http_fastly_client_ip;
     proxy_set_header X-Forwarded-For $http_fastly_client_ip;
     proxy_set_header X-Forwarded-Proto https;
     proxy_set_header X-Forwarded-Port 443;
     proxy_set_header X-Forwarded-Ssl on;
     proxy_set_header Connection "";
     proxy_buffering on;
     proxy_request_buffering on;
     proxy_connect_timeout 10s;
     proxy_send_timeout 60s;
     proxy_read_timeout 60s;
    }
    

    The access_by_lua_file directive runs before the request is proxied. If the Lua script calls ngx.exec("/cdn-origin" .. uri) , Nginx performs an internal redirect to the CDN location and the proxy_pass to GitLab is never reached. If the script returns normally (for private repos or non-fetch commands), the request falls through to the proxy_pass .

    Building the cache key

    The full Lua script that runs in access_by_lua_file handles both passes of the request. The first pass (client → nginx) does the heavy lifting:

    local resty_sha256 = require("resty.sha256")
    local resty_str = require("resty.string")
    local redis_helper = require("redis_helper")
    
    local redis_host = os.getenv("REDIS_HOST") or "localhost"
    local redis_port = os.getenv("REDIS_PORT") or "6379"
    
    -- Second pass: request arriving from CDN origin fetch.
    -- Decode the original POST body from the header and restore the method.
    if ngx.req.get_headers()["X-Git-Cache-Internal"] then
     local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
     if encoded_body then
     ngx.req.read_body()
     local body = ngx.decode_base64(encoded_body)
     ngx.req.set_method(ngx.HTTP_POST)
     ngx.req.set_body_data(body)
     ngx.req.set_header("Content-Length", tostring(#body))
     ngx.req.clear_header("X-Git-Original-Body")
     end
     return
    end
    

    The second-pass guard is at the top of the script. When Fastly’s origin fetch arrives, it will carry X-Git-Cache-Internal: 1 . The script detects that, reconstructs the POST body from the base64-encoded header, restores the POST method, and returns — allowing Nginx to proxy the real request to GitLab.

    For the first pass, the script parses the repo path from the URI, reads and buffers the full request body, and computes a SHA256 over it:

    -- Only cache "fetch" commands; ls-refs responses are small, fast, and
    -- become stale on every push (the body hash is constant so a long TTL
    -- would serve outdated ref listings).
    if not body:find("command=fetch", 1, true) then
     ngx.header["X-Git-Cache-Status"] = "BYPASS"
     return
    end
    
    -- Hash the body
    local sha256 = resty_sha256:new()
    sha256:update(body)
    local body_hash = resty_str.to_hex(sha256:final())
    
    -- Build cache key: cache_versioning + repo path + body hash
    local cache_key = "v2:" .. repo_path .. ":" .. body_hash
    

    A few things worth noting here. The ls-refs command is explicitly excluded from caching. The reason is that ls-refs is used to list references and its request body is essentially static (just a capability advertisement). If we cached it with a 30-day TTL, a push to the repository would not invalidate the cache — the key would be the same — and clients would get stale ref listings. Fetch bodies, on the other hand, encode exactly the SHAs the client wants and already has. The same set of want / have lines always maps to the same pack, which makes them safe to cache for a long time.

    The v2: prefix is a cache version string. It makes it straightforward to invalidate all existing cache entries if we ever need to change the key scheme, without touching Fastly’s purge API.
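    In Python terms, the first-pass key decision boils down to a few lines. This is an illustrative sketch of the scheme described above, not the production Lua:

```python
import hashlib

CACHE_VERSION = "v2"  # bump to invalidate every existing entry at once


def build_cache_key(repo_path: str, body: bytes):
    """Return the cache key for a fetch body, or None for BYPASS.

    Only "fetch" commands are cached; ls-refs bodies are near-constant,
    so caching them long-term would serve stale ref listings.
    """
    if b"command=fetch" not in body:
        return None
    body_hash = hashlib.sha256(body).hexdigest()
    return f"{CACHE_VERSION}:{repo_path}:{body_hash}"
```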

    The POST-to-GET conversion

    This is probably the most unusual part of the design:

    -- Carry the POST body as a base64 header and convert to GET so that
    -- Fastly's intra-POP consistent hashing routes identical cache keys
    -- to the same server (Fastly only does this for GET, not POST).
    ngx.req.set_header("X-Git-Original-Body", ngx.encode_base64(body))
    ngx.req.set_method(ngx.HTTP_GET)
    ngx.req.set_body_data("")
    
    return ngx.exec("/cdn-origin" .. uri)
    

    Fastly’s shield feature routes cache misses through a designated intra-POP “shield” node before going to origin. When two different edge nodes both get a MISS for the same cache key simultaneously, the shield node collapses them into a single origin request. This is important for us because without it, a burst of CI jobs fetching the same commit would all miss, all go to origin in parallel, and GitLab would end up generating the same pack multiple times anyway.

    The catch is that Fastly’s consistent hashing and shield routing only works for GET requests. POST requests always go straight to origin. Fastly does provide a way to force POST responses into the cache — by returning pass in vcl_recv and setting beresp.cacheable in vcl_fetch — but it is a blunt instrument: there is no consistent hashing, no shield collapsing, and no guarantee that two nodes in the same POP will ever share the cached result. By converting the POST to a GET and encoding the body in a header, we get consistent hashing and shield-level request collapsing for free.

    The VCL on the Fastly side uses the X-Git-Cache-Key header (not the URL or method) as the cache key, so the GET conversion is invisible to the caching logic.
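    The conversion and its inverse are simple to express. A minimal Python sketch of the two passes; the header name follows the post, everything else is illustrative:

```python
import base64


def to_cdn_request(body: bytes, uri: str):
    """First pass: stash the POST body in a header and send a GET so
    Fastly's consistent hashing and shield collapsing can apply."""
    headers = {"X-Git-Original-Body": base64.b64encode(body).decode()}
    return "GET", uri, headers


def restore_origin_request(headers: dict):
    """Second pass: on Fastly's origin fetch, decode the header and
    restore the original POST body before proxying to GitLab."""
    body = base64.b64decode(headers["X-Git-Original-Body"])
    return "POST", body
```

    The round trip is lossless, so GitLab always sees the request the client originally sent.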

    Protecting private repositories

    We cannot route private repository traffic through an external CDN — that would mean sending authenticated git content to a third-party cache. The way we prevent this is a denylist stored in Valkey. Before doing anything else, the Lua script checks whether the repository is listed there:

    local denied, err = redis_helper.is_denied(redis_host, redis_port, repo_path)
    
    if err then
     ngx.log(ngx.ERR, "git-cache: Redis error for ", repo_path, ": ", err,
     " — cannot verify project visibility, bypassing CDN")
     ngx.header["X-Git-Cache-Status"] = "BYPASS"
     return
    end
    
    if denied then
     ngx.header["X-Git-Cache-Status"] = "BYPASS"
     ngx.header["X-Git-Body-Hash"] = body_hash:sub(1, 12)
     return
    end
    
    -- Public/internal repo: strip credentials before routing through CDN
    ngx.req.clear_header("Authorization")
    

    If Valkey is unreachable, the script logs an error and bypasses the CDN entirely, treating the repository as if it were private. This is the safe default: the cost of a Redis failure is slightly increased load on GitLab, not the risk of routing private repository content through an external cache. In practice, Valkey runs alongside Nginx on the same node, so true availability failures are uncommon.

    The denylist is maintained by gitlab-git-cache-webhook , a small FastAPI service. It listens for GitLab system hooks on project_create and project_update events:

    HANDLED_EVENTS = {"project_create", "project_update"}
    
    @router.post("/webhook")
    async def webhook(request: Request, ...) -> Response:
     ...
     event = body.get("event_name", "")
     if event not in HANDLED_EVENTS:
     return Response(status_code=204)
    
     project = body.get("project", {})
     path = project.get("path_with_namespace", "")
     visibility_level = project.get("visibility_level")
    
     if visibility_level == 0:
     await deny_repo(path)
     else:
     removed = await allow_repo(path)
     return Response(status_code=204)
    

    GitLab’s visibility_level is 0 for private, 10 for internal, and 20 for public. Internal repositories are intentionally treated the same as public ones here: they are accessible to any authenticated user on the instance, so routing them through the CDN is acceptable. Only truly private repositories go into the denylist.

    The key format in Valkey is git:deny:<path_with_namespace> . The Lua redis_helper module does an EXISTS check on that key. The webhook service also ships a reconciliation command ( python -m app.reconcile ) that does a full resync of all private repositories via the GitLab API, which is useful to run on first deployment or after any extended Valkey downtime.
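    Put together, the webhook's bookkeeping and the Lua-side check reduce to very little logic. A Python sketch with a plain dict standing in for Valkey:

```python
PRIVATE = 0  # GitLab visibility_level: 0 private, 10 internal, 20 public


def apply_event(store: dict, event: dict) -> None:
    """Mirror the webhook: private projects enter the denylist,
    internal/public projects are removed from it."""
    if event.get("event_name") not in {"project_create", "project_update"}:
        return
    project = event.get("project", {})
    key = "git:deny:" + project.get("path_with_namespace", "")
    if project.get("visibility_level") == PRIVATE:
        store[key] = "1"      # SET git:deny:<path>
    else:
        store.pop(key, None)  # DEL git:deny:<path>


def is_denied(store: dict, repo_path: str) -> bool:
    """Equivalent of the Lua EXISTS check on git:deny:<path>."""
    return "git:deny:" + repo_path in store
```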

    The Fastly VCL

    On the Fastly side, three VCL subroutines carry the relevant logic. In vcl_recv :

    if (req.url ~ "/info/refs") {
    return(pass);
    }
    if (req.http.X-Git-Cache-Key) {
    set req.backend = F_Host_1;
    if (req.restarts == 0) {
    set req.backend = fastly.try_select_shield(ssl_shield_iad_va_us, F_Host_1);
    }
    return(lookup);
    }
    

    /info/refs is always passed through uncached — it is the capability advertisement step and caching it would cause problems with protocol negotiation. Requests carrying X-Git-Cache-Key get an explicit lookup directive and are routed through the shield. Everything else falls through to Fastly’s default behaviour.

    In vcl_hash , the cache key overrides the default URL-based key:

    if (req.http.X-Git-Cache-Key) {
    set req.hash += req.http.X-Git-Cache-Key;
    return(hash);
    }
    

    And in vcl_fetch , responses are marked cacheable when they come back with a 200 and a non-empty body:

    if (req.http.X-Git-Cache-Key && beresp.status == 200) {
    if (beresp.http.Content-Length == "0") {
    set beresp.ttl = 0s;
    set beresp.cacheable = false;
    return(deliver);
    }
    set beresp.cacheable = true;
    set beresp.ttl = 30d;
    set beresp.http.X-Git-Cache-Key = req.http.X-Git-Cache-Key;
    unset beresp.http.Cache-Control;
    unset beresp.http.Pragma;
    unset beresp.http.Expires;
    unset beresp.http.Set-Cookie;
    return(deliver);
    }
    

    The 30-day TTL is deliberately long. Git pack data is content-addressed: a pack for a given set of want / have lines will always be the same. As long as the objects exist in the repository, the cached pack is valid. The only case where a cached pack could be wrong is if objects were deleted (force-push that drops history, for instance), which is rare and, on GNOME’s GitLab, made even rarer by the Gitaly custom hooks we run to prevent force-pushes and history rewrites on protected namespaces. In those cases the cache version prefix would force a key change rather than relying on TTL expiry.

    Empty responses ( Content-Length: 0 ) are explicitly not cached. GitLab can return an empty body in edge cases and caching that would break all subsequent fetches for that key.

    Conclusions

    The system has been running in production for a few days now, and the cache hit rate on fetch traffic has been consistently high (over 80%). If something goes wrong with the cache layer, the worst case is that requests fall back to BYPASS and GitLab handles them directly, which is how things worked before this change. It also means we no longer redirect any traffic to github.com.

    That should be all for today, stay tuned!


      Jussi Pakkanen: Multi merge sort, or when optimizations aren't

      news.movim.eu / PlanetGnome • 2 days ago • 2 minutes

In our previous episode we wrote a merge sort implementation that runs a bit faster than the one in stdlibc++. The question then becomes: could it be made even faster? If you go through the relevant literature, one potential improvement is to do a multiway merge. That is, instead of merging two arrays into one, you merge four into one using, for example, a priority queue.

    This seems like a slam dunk for performance.

    • Doubling the number of arrays to merge at a time halves the number of total passes needed
    • The priority queue has a known static maximum size, so it can be put on the stack, which is guaranteed to be in the cache all the time
    • Processing an element takes only log(#lists) comparisons

Implementing multimerge was conceptually straightforward, but getting all the gritty details right took a fair bit of time. Once I got it working, the end result was slower. And not by a little, either, but more than 30% slower. Trying some optimizations made it a bit faster, but not noticeably so.
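For readers who want to see the shape of the technique, this is a minimal illustrative sketch in Python of a heap-based multiway merge — not the author's C++ implementation, just the idea being discussed:

```python
import heapq

def multiway_merge(lists):
    """Merge k sorted lists through a priority queue (heap).
    Each output element costs O(log k) comparisons, versus the single
    value comparison per element of a classical two-way merge."""
    # Seed the heap with the head of each non-empty list. The list
    # index is included to break ties between equal values.
    heap = [(lst[0], k, 0) for k, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, k, i = heapq.heappop(heap)
        out.append(value)
        # Refill from the list we just consumed, unless it is exhausted.
        if i + 1 < len(lists[k]):
            heapq.heappush(heap, (lists[k][i + 1], k, i + 1))
    return out

print(multiway_merge([[1, 5, 9], [2, 6], [0, 7, 8], [3, 4]]))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```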

Why is this so? Maybe there are bugs that cause it to do extra work? Assuming that is not the case, what is actually going on? Measuring seems to indicate that a notable fraction of the runtime is spent in the priority queue code. Beyond that, the measurements told me very little.

The best hypothesis I could come up with has to do with the number of comparisons made. A classical merge sort does two if statements per output element: one to determine which of the two lists has the smaller element at the front, and one to see whether removing that element exhausted its list. The former is basically random and the latter is always false except when the last element is processed. This amounts to 0.5 mispredicted branches per element per round.
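To make those two branches concrete, here is an illustrative Python sketch of the classical inner merge loop (not the author's implementation); the comments mark the two if statements described above:

```python
def merge_two(a, b):
    """Classical two-way merge of two sorted, non-empty lists."""
    out, i, j = [], 0, 0
    while True:
        # Branch 1: which list has the smaller front element?
        # Essentially a coin flip on random data -> ~0.5 mispredictions.
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
            # Branch 2: did taking that element exhaust its list?
            # False for every element except the last, so the branch
            # predictor gets it right almost every time.
            if i == len(a):
                out.extend(b[j:])
                return out
        else:
            out.append(b[j])
            j += 1
            if j == len(b):
                out.extend(a[i:])
                return out

print(merge_two([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```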

A priority queue has to do a bunch more work to preserve the heap property. The first iteration needs to check the root and its two children: that's three value comparisons plus two checks of whether the children actually exist. Those are much less predictable than the comparisons in merge sort. Computers are really efficient at doing simple things, so it may be that the additional bookkeeping is so expensive that it negates the advantage of fewer rounds.

Or maybe it's something else. Who's to say? Certainly not me. If someone wants to play with the code, the implementation is here. I'll probably delete it at some point as it does not really have any advantage over the regular merge sort.


      This Week in GNOME: #245 Infinite Ranges

      news.movim.eu / PlanetGnome • 2 days ago • 4 minutes

    Update on what happened across the GNOME project in the week from April 10 to April 17.

    GNOME Core Apps and Libraries

    Libadwaita

    Building blocks for modern GNOME apps using GTK4.

    Alice (she/her) 🏳️‍⚧️🏳️‍🌈 reports

AdwAboutDialog’s Other Apps section title can now be overridden to say something other than “Other Apps by developer-name”

    Alice (she/her) 🏳️‍⚧️🏳️‍🌈 announces

AdwEnumListModel has been deprecated in favor of the recently added GtkEnumList. They work identically, so migrating should be as simple as find-and-replace

    Maps

    Maps gives you quick access to maps all across the world.

    mlundblad announces

    Maps now shows track/stop location for boarding and disembarking stations/stops on public transit journeys (when available in upstream data)


    GNOME Circle Apps and Libraries

    Graphs

    Plot and manipulate data

    Sjoerd Stendahl says

After two years without a major feature update, we are happy to announce Graphs 2.0. It’s by far our biggest update yet. We are targeting a stable release next month, but in the meantime we are running an official beta testing period. We are very happy to receive any feedback, especially in this period!

The upcoming Graphs 2.0 features some major, long-requested changes: equations now span an infinite range and can be edited and manipulated analytically, the style editor has been redesigned with a live preview, we revamped the import dialog, and imported data now supports error bars. Equations with infinite values in them, such as y=tan(x), now render properly: values are drawn all the way to infinity, without a line going from plus to minus infinity. We’ve also added support for spreadsheet and SQLite database files, drag-and-drop importing, improved curve fitting with residuals and better confidence bands, and now have proper mobile support.

    These are just some highlights, a more complete list of changes, including a description of how to get the beta version, can be found here: https://blogs.gnome.org/sstendahl/2026/04/14/announcing-the-upcoming-graphs-2-0/


    Gaphor

    A simple UML and SysML modeling tool.

    Arjan announces

    Mareike Keil of the University of Mannheim published her article “NEST‑UX: Neurodivergent and Neurotypical Style Guide for Enhanced User Experience”. The paper explores how user interfaces can be designed to be accessible for both neurotypical and neurodivergent users, including people with autism, ADHD or giftedness.

    The Gaphor team worked together with Mareike to implement suggestions she found during her research, allowing us to test how well these ideas work in practice.

The article can be found at https://academic.oup.com/iwc/advance-article-abstract/doi/10.1093/iwc/iwag011/8571596.

Mareike’s LinkedIn announcement can be found at https://www.linkedin.com/feed/update/urn:li:activity:7447176733759352832/.

    Third Party Projects

    Bilal Elmoussaoui announces

    Now that most of the basic features work as expected, I would like to publicly introduce you to Goblin, a GObject Linter, for C codebases. You can read more about it at https://belmoussaoui.com/blog/23-goblin-linter/


    Anton Isaiev says

    RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)

    Versions 0.10.15–0.10.22 bring a week of polish across the UI, security, and terminal experience.

    Terminal got better. Font zoom (Ctrl+Scroll, Ctrl+Plus/Minus) and optional copy-on-select landed. The context menu now works properly — VTE’s native API replaced the custom popover that was stealing focus and breaking clipboard actions. On X11 sessions (MATE, XFCE) where GTK4’s NGL renderer caused blank popovers, RustConn auto-detects and falls back to Cairo.

    Sidebar and navigation. Groups expand/collapse on double-click anywhere on the row. The Local Shell button moved to the header bar so it’s always visible. Protocol filter bar is now optional and togglable. Tab groups show as a [GroupName] prefix in the tab title, and a new “Close All in Group” action cleans up grouped tabs at once. A tab group chooser dialog with clickable pill buttons replaces manual retyping.

    RDP fixes. Multiple shared folders now map correctly in embedded IronRDP mode — previously only the first path was used. SSH Port Forwarding UI, which had silently disappeared from the connection dialog, is back.

    Security hardened. Machine key encryption dropped the predictable hostname+username fallback; the /etc/machine-id path now uses HKDF-SHA256 with app-specific salt. Context menu labels and sidebar accessible labels are localized for screen readers.

    Ctrl+K no longer hijacks the terminal — it was removed from the global search shortcut, so nano and other terminal apps get it back. Terminal auto-focus after connection means you can type immediately.

    Export and import. Export dialog gained a group filter, and RustConn Native (.rcn) is now the default format in both import and export dialogs.

Project: https://github.com/totoshko88/RustConn
Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn


    Mufeed Ali reports

    Wordbook 1.0.0 was released

    Wordbook is now a fully offline application with no in-app downloads. Pronunciation data is now sourced from WordNet where possible, allowing better grouping of definitions in homonyms like “bass”. In general, many UI/UX improvements and bug fixes were also made. The community also helped by localizing the app for a total of 6 new languages.

Try it on Flathub.

    Pods

    Keep track of your podman containers.

    marhkb says

    Pods 3.0.0 is out!

    This major release introduces a brand-new container engine abstraction layer allowing for greater flexibility.

    Based on this new layer, Pods now features initial Docker support, making it easier for users to manage their containers regardless of their preferred backend.

Check it out on Flathub.


    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Thibault Martin: TIL that Pagefind does great client-side search

      news.movim.eu / PlanetGnome • 3 days ago

I post more and more content on my website. What was visible at a glance is now more difficult to find. I wanted to implement search, but it is a static website: everything is built once and then published somewhere as final, immutable pages. I can't send a search request and get results in return.

Or that's what I thought! Pagefind is a neat JavaScript library that does two things:

1. It produces an index of the content right after building the static site.
2. It provides two web components to insert in my pages: <pagefind-modal>, the search modal itself, hidden by default, and <pagefind-modal-trigger>, which looks like a search field and opens the modal.

The pagefind-modal component looks up the index when the user types a query. The index is a static file, so there is no need for a backend that processes queries. Of course this only works for basic queries, but it's a great tool already!

Pagefind is also easy to customize via a list of CSS variables. Adding it to this website was very straightforward.


      Thibault Martin: I realized that Niri can have gorgeous animation

      news.movim.eu / PlanetGnome • 4 days ago • 1 minute

    I was a huge fan of Niri already. It's a scrolling tiling window manager. Roughly:

• When I open an app, it takes the full height and a pre-configured width on the screen.
• When I open more apps and the screen is already full, it pushes the existing apps off screen.
• It can stack windows in columns for a more compact view.

    It means that windows always take the optimal amount of space, and they're very neatly organized. It's extremely pleasant to use and keyboard friendly.

    Don't mind the apparent slowness: this was recorded on a 10 year old laptop, opening OBS is enough to make its CPU go brr. When OBS is not running, Niri is buttery smooth.

    But now I've learned that Niri supports user-provided GLSL shaders for several animations. Roughly: you can animate how windows appear and disappear (and other events, but let's keep things simple).

Some people out there have created collections of shaders that work wonderfully with Niri.

    My personal favorite is the glitchy one.

    In a world of uniform UIs, these frivolous, unnecessary and creative ways to interact with users are a breath of fresh air! Those animations are healing my inner 14 year old.


      Steven Deobald: End of 10 Handout

      news.movim.eu / PlanetGnome • 4 days ago • 1 minute

    There was a silly little project I’d tried to encourage many folks to attempt last summer. Sri picked it up back in September and after many months, I decided to wrap it up and publish what’s there.

The intention is a simple, 2-sided A4 that folks can print and give out at repair cafes, like the End of 10 event series. Here’s the original issue, if you’d like to look at the initial thought process.

    When I hear fairly technical folks talk about Linux in 2026, I still consistently hear things like “I don’t want to use the command line.” The fact that Spotify, Discord, Slack, Zoom, and Steam all run smoothly on Linux is far removed from these folks’ conception of the Linux desktop they might have formed back in 2009. Most people won’t come to Linux because it’s free of ✨ shlop ✨ and ads — they’re accustomed to choking on that stuff. They’ll come to Linux because they can open a spreadsheet for free, play Slay The Spire 2, or install Slack even though they promised themselves they wouldn’t use their personal computer for work.

    The GNOME we all know and love is one we take for granted… and the benefits of which we assume everyone wants. But the efficiency, the privacy, the universality, the hackability, the gorgeous design, and the lack of ads? All these things are the icing on the cake. The cake, like it or not, is installing Discord so you can join the Sunday book club.

Here’s the A4. And here’s a snippet:

    An A4 snippet including "where's the start menu?", "where are my exes?", and "how do I install programs?"

    If you try this out at a local repair cafe, I’d love to know which bits work and which don’t. Good luck! ❤