

      Steven Deobald: 2025-07-12 Foundation Update

      news.movim.eu / PlanetGnome • 12 July • 4 minutes

    Gah. Every week I’m like “I’ll do a short one this week” and then I… do not.

    ## New Treasurers

    We recently announced our new treasurer, Deepa Venkatraman. We will also have a new vice-treasurer joining us in October.

    This is really exciting. It’s important that Deepa and I can see with absolute clarity what is happening with the Foundation’s finances, and in turn present our understanding to the Board so they share that clarity. She and I also need to start drafting the annual budget soon, which itself must be built on clear financial reporting. Few people I know ask the kind of incisive questions Deepa asks, and I’m really looking forward to tackling the following three issues with her:

    • solving our financial reporting problems:
      • cash flow as a “burndown chart” that most hackers will identify with
      • clearer accrual reporting so it’s obvious whether we’re growing or crashing
    • passing a budget on time that the Board really understands
    • helping the Board pass safer policies

    ## postmarketOS

    We are excited to announce that postmarketOS has joined the GNOME Advisory Board! This is particularly fun, because it breaks GNOME out of its safe shell. GNOME has had a complete desktop product for 15 years. Phones and tablets are the most common computers in the world today and the obvious next step for GNOME app developers. It’s a long hard road to win the mobile market, but we will. 🙂

    (I’m just going to keep saying that because I know some people think it’s extraordinarily silly… but I do mean it.)

    ## Sustain? Funding? Jobs?

    We’ve started work this week on the other side of the coin for donate.gnome.org. We’re not entirely sure which subdomain it will live at yet, but the process of funding contributors needs its own home. This page will celebrate the existing grant and contract work going on in GNOME right now (such as Digital Wellbeing), but it will also act as the gateway where contributors can apply for travel grants, contracts, fellowships, and other jobs.

    ## PayPal

    Thanks to Bart, donate.gnome.org now supports PayPal recurring donations, for folks who do not have credit cards.

    We hear you: EUR presentment currency is a highly-requested feature and so are yearly donations. We’re still working away at this. 🙂

    ## Hardware Pals

    We’re making some steady progress toward relationships with Framework Computer and Slimbook where GNOME developers can help them ensure their hardware always works perfectly, out of the box. Great folks at both companies and I’m excited to see all the bugs get squashed. 🙂

    ## Stuff I’m Dropping

    Oh, friends. I should really be working on the Annual Report… but other junk keeps coming up! Same goes for my GUADEC talk. And the copy for jobs.gnome.org… argh. Sorry, Sam! haha

    Thanks to everyone who’s contributed your thoughts and ideas to the Successes for Annual Report 2025 issue. GNOME development is a firehose and you’re helping me drink it. More thoughts and ideas still welcome!

    ## It’s Not 1998

    Emmanuele and I had a call this week. There was plenty of nuance and history behind that conversation that would be too difficult to repeat here. However, he and I have similar concerns surrounding communication, tone, tools, media, and moderation: we both want GNOME to be as welcoming a community as it is a computing and development platform. We also agreed the values which bind us as a community are those values directly linked to GNOME’s mission.

    This is a significant challenge. Earth is a big place, with plenty of opinions, cultures, languages, and ideas. We are all trying our best to resolve the forces in tension. Carefully, thoughtfully.

    We both had a laugh at the truism, “it’s not 1998.” There’s a lot that was fun and exciting and uplifting about the earlier internet… but there was also plenty of space for nastiness. Those of us old enough to remember it (read: me) occasionally make the mistake of speaking in the snarky, biting tones that were acceptable back then. As Official Old People, Emmanuele and I agreed we had to work even harder to set an example for the kind of dialogue we hope to see in the community.

    Part of that effort is boosting other people’s work. You don’t have to go full shouty Twitter venture capitalist about it or anything… just remember how good it felt the first time someone congratulated you on some good work you did, and pass that along. A quick DM or email can go a long way toward making someone’s week.

    Thanks Emmanuele, Brage, Bart, Sid, Sri, Alice, Michael, and all the other mods for keeping our spaces safe and inviting. It’s thankless work most of the time but we’re always grateful.

    ## Office Hours

    We tried out “office hours” today: one hour for Foundation Members to come and chat. Bring a tea or coffee, tell me about your favourite GUADEC, tell me what a bad job I’m doing, explain where the Foundation needs to spend money to make GNOME better, ask a question… anything. The URL is only published on private channels for, uh, obvious reasons. See you next week!

    Donate to GNOME


      Jussi Pakkanen: AI slop is the new smoking

      news.movim.eu / PlanetGnome • 11 July


      Ignacy Kuchciński: Digital Wellbeing Contract

      news.movim.eu / PlanetGnome • 11 July • 1 minute

    This month I have been accepted as a contractor to work on the Parental Controls frontend and integration as part of the Digital Wellbeing project. I’m very happy to take part in this cool endeavour, and very grateful to the GNOME Foundation for giving me this opportunity – special thanks to Steven Deobald and Allan Day for interviewing me and helping me connect with the team, despite our timezone incompatibility 😉

    The idea is to redesign the Parental Controls app UI to bring it on par with modern GNOME apps, and to integrate the parental controls into the GNOME Shell lock screen in collaboration with gnome-control-center. There are also new features to be added, such as Screen Time monitoring and limits, Bedtime Schedule, and Web Filtering support. The project has been going for quite some time, and a lot of great work has been put into both the designs by Sam Hewitt and the backend by Philip Withnall, who’s been really helpful teaching me about the project’s code practices and reviewing my MRs. See the designs in the app-mockups ticket and the os-mockups ticket.

    We started implementing the design mockup MVP for Parental Controls, which you can find in the app-mockups ticket. We’re trying to meet the GNOME 49 release deadlines, but as always that’s a goal rather than a certain milestone. So far we have finished the redesign of the current Parental Controls app without adding any new features: we refreshed the UI for the unlock page, reworked the user selector to be a list rather than a carousel, and changed navigation to use pages. This will be followed by adding pages for Screen Time and Web Filtering.

    (Screenshots: refreshed unlock page; reworked user selector; navigation using pages, with Screen Time and Web Filtering to be added.)

    I want to thank the team for helping me get on board and being generally just awesome to work with 😀 Until next update!


      This Week in GNOME: #208 Converting Colors

      news.movim.eu / PlanetGnome • 11 July • 3 minutes

    Update on what happened across the GNOME project in the week from July 04 to July 11.

    GNOME Core Apps and Libraries

    Calendar

    A simple calendar application.

    FineFindus reports

    GNOME Calendar now allows exporting events as .ics files, allowing them to be easily shared.


    GNOME Development Tools

    GNOME Builder

    IDE for writing GNOME-based software.

    Nokse reports

    This week GNOME Builder received some new features!

    • Inline git blame to see who last modified each line of code
    • Changes and diagnostics overview displayed directly in the scrollbar
    • Enhanced LSP markdown rendering with syntax highlighting


    GNOME Circle Apps and Libraries

    Déjà Dup Backups

    A simple backup tool.

    Michael Terry says

    Déjà Dup Backups 49.alpha is out for testing!

    It features a UI refresh and file-manager-based restores (for Restic only).

    Read the announcement for install instructions and more info.

    Any feedback is appreciated!

    Third Party Projects

    Dev Toolbox

    Dev tools at your fingertips

    Alessandro Iepure reports

    When I first started Dev Toolbox, it was just a simple tool I built for myself, a weekend project born out of curiosity and the need for something useful. I never imagined anyone else would care about it, let alone use it regularly. I figured maybe a few developers here and there would find it helpful. But then people started using it. Reporting bugs. Translating it. Opening pull requests. Writing reviews. Sharing it with friends. And suddenly, it wasn’t just my toolbox anymore. Fast forward to today: over 50k downloads on Flathub and 300 stars on GitHub. I still can’t quite believe it.

    To every contributor, translator, tester, reviewer, or curious user who gave it a shot: thank you. You turned a small idea into something real, something useful, and something I’m proud to keep building.

    Enough feelings. Let’s talk about what’s new in v1.3.0!

    • New tool: Color Converter. Convert between HEX, RGB, HSL, and other formats. (Thanks @Flachz)
    • JWT tool improvements. You can now encode payloads and verify signatures. (Thanks @Flachz)
    • Chmod tool upgrade. Added support for setuid, setgid, and the sticky bit. (Thanks @Flachz)
    • Improved search, inside and out
      • The app now includes extra keywords and metadata, making it easier to discover in app stores and desktops
      • In-app search now matches tool keywords, not just their names. (Thanks @freeducks-debug)
    • Now a GNOME search provider. You can search and launch Dev Toolbox tools straight from the Overview
    • Updated translations. Many new translatable strings were added this release. Thank you to all translators who chipped in.


    GNOME Websites

    Victoria 🏳️‍⚧️🏳️‍🌈 she/her reports

    On welcome.gnome.org, of all the listed teams, only the Translation and Documentation Teams linked to their wikis instead of their respective Welcome pages. But now this changes for the Translation Team! After several months of work we finally have our own Welcome page. Now is the best time to make GNOME speak your language!

    Miscellaneous

    Arjan reports

    PyGObject has supported async functions since 3.50. Now async functions and methods are also discoverable in the GNOME Python API documentation.


    GNOME Foundation

    steven announces

    The 2025-07-05 Foundation Update is out:

    • Grants and Fellowships Plan
    • Friends of GNOME, social media partners, shell notification
    • Annual Report… I haven’t done it yet
    • Fiscal Controls and Operational Resilience… yay?
    • Digital Wellbeing Frontend Kickoff
    • Office Hours
    • A Hacker in Need

    https://blogs.gnome.org/steven/2025/07/05/2025-07-05-foundation-update/

    Digital Wellbeing Project

    Ignacy Kuchciński (ignapk) reports

    As part of the Digital Wellbeing project, sponsored by the GNOME Foundation, there is an initiative to redesign Parental Controls to bring it on par with modern GNOME apps and implement new features such as Screen Time monitoring, Bedtime Schedule, and Web Filtering. Recently the UI for the unlock page was refreshed, the user selector was reworked to be a list rather than a carousel, and navigation was changed to use pages. There’s more to come; see https://blogs.gnome.org/ignapk/2025/07/11/digital-wellbeing-contract/ for more information.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Thibault Martin: Kubernetes is not just for Black Friday

      news.movim.eu / PlanetGnome • 9 July • 12 minutes

    I self-host services mostly for myself. My threat model is particular: the highest threats I face are my own incompetence and hardware failures. To mitigate the weight of my incompetence, I relied on podman containers to minimize the amount of things I could misconfigure. I also wrote ansible playbooks to deploy the containers on my VPS, thus making it easy to redeploy them elsewhere if my VPS failed.

    I had always ruled out Kubernetes as overly complex machinery designed for large organizations that face significant surges in traffic during specific events like Black Friday sales. I thought Kubernetes had too many moving parts and would work against my objectives.

    I was wrong. Kubernetes is not just for large organizations with scalability needs I will never have. Kubernetes makes perfect sense for a homelabber who cares about having a simple, sturdy setup. It has fewer moving parts than my podman and ansible setup, uses more standard development and deployment practices, and allows me to rely on the cumulative expertise of thousands of experts.

    I don't want to do things manually or alone

    Self-hosting services is much more difficult than just putting them online. This is a hobby for me, something I do in my free time, so I need to spend as little time on maintenance as possible. I also know I don't have peers to review my deployments. If I have to choose between using standardized methods that have been reviewed by others or doing things my own way, I will use the standardized method.

    My main threats are:

    I can and will make mistakes. I am an engineer, but my current job is not to maintain services online. In my homelab, I am also a team of one. This means I don't have colleagues to spot the mistakes I make.

    [!info] This means I need to use battle-tested and standardized software and deployment methods.

    I have limited time for it. I am not on call 24/7. I want to enjoy time off with my family. I have work to do. I can't afford to spend my life in front of a computer to figure out what's wrong.

    [!info] This means I need to have a reliable deployment. I need to be notified when something goes in the wrong direction, and when something went completely wrong.

    My hardware can fail, or be stolen. Having working backups is critical. But if my hardware failed, I would still need to restore backups somewhere.

    [!info] This means I need to be able to rebuild my infrastructure quickly and reliably, and restore backups on it.

    I was doing things too manually and alone

    Since I wanted to get standardized software, containers seemed like a good idea. podman was particularly interesting to me because it can generate systemd services that will keep the containers up and running across restarts.

    I could have deployed the containers manually on my VPS and generated the systemd services by invoking the CLI. But I would then risk making small tweaks on the spot, resulting in a deployment that is difficult to replicate elsewhere.

    Instead, I wrote an ansible playbook based on the containers.podman collection and other ansible modules. This way, ansible deploys the right containers on my VPS and copies or updates the config files for my services, and I can easily replicate this elsewhere.

    It has served me well and worked decently for years now, but I'm starting to see the limits of this approach. Indeed, in their introduction, the ansible maintainers state:

    Ansible uses simple, human-readable scripts called playbooks to automate your tasks. You declare the desired state of a local or remote system in your playbook. Ansible ensures that the system remains in that state.

    This is mostly true for ansible, but this is not really the case for the podman collection. In practice I still have to do manual steps in a specific order, like creating a pod first, then adding containers to the pod, then generating a systemd service for the pod, etc.

    To give you a very concrete example, this is what the tasks/main.yaml of my Synapse (Matrix) server deployment role looks like.

    - name: Create synapse pod
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        state: created
    
    - name: Stop synapse pod
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        state: stopped
    
    - name: Create synapse's postgresql
      containers.podman.podman_container:
        name: synapse-postgres
        image: docker.io/library/postgres:{{ synapse_container_pg_tag }}
        pod: pod-synapse
        volume:
          - synapse_pg_pdata:/var/lib/postgresql/data
          - synapse_backup:/tmp/backup
        env:
          {
            "POSTGRES_USER": "{{ synapse_pg_username }}",
            "POSTGRES_PASSWORD": "{{ synapse_pg_password }}",
            "POSTGRES_INITDB_ARGS": "--encoding=UTF-8 --lc-collate=C --lc-ctype=C",
          }
    
    - name: Copy Postgres config
      ansible.builtin.copy:
        src: postgresql.conf
        dest: /var/lib/containers/storage/volumes/synapse_pg_pdata/_data/postgresql.conf
        mode: "600"
    
    - name: Create synapse container and service
      containers.podman.podman_container:
        name: synapse
        image: docker.io/matrixdotorg/synapse:{{ synapse_container_tag }}
        pod: pod-synapse
        volume:
          - synapse_data:/data
          - synapse_backup:/tmp/backup
        labels:
          {
            "traefik.enable": "true",
            "traefik.http.routers.synapse.entrypoints": "websecure",
            "traefik.http.routers.synapse.rule": "Host(`matrix.{{ base_domain }}`)",
            "traefik.http.services.synapse.loadbalancer.server.port": "8008",
            "traefik.http.routers.synapse.tls": "true",
            "traefik.http.routers.synapse.tls.certresolver": "letls",
          }
    
    - name: Copy Synapse's homeserver configuration file
      ansible.builtin.template:
        src: homeserver.yaml.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/homeserver.yaml
        mode: "600"
    
    - name: Copy Synapse's logging configuration file
      ansible.builtin.template:
        src: log.config.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/{{ matrix_server_name }}.log.config
        mode: "600"
    
    - name: Copy Synapse's signing key
      ansible.builtin.template:
        src: signing.key.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/{{ matrix_server_name }}.signing.key
        mode: "600"
    
    - name: Generate the systemd unit for Synapse
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        generate_systemd:
          path: /etc/systemd/system
          restart_policy: always
    
    - name: Enable synapse unit
      ansible.builtin.systemd:
        name: pod-pod-synapse.service
        enabled: true
        daemon_reload: true
    
    - name: Make sure synapse is running
      ansible.builtin.systemd:
        name: pod-pod-synapse.service
        state: started
        daemon_reload: true
    
    - name: Allow traffic in monitoring firewalld zone for synapse metrics
      ansible.posix.firewalld:
        zone: internal
        port: "9000/tcp"
        permanent: true
        state: enabled
      notify: firewalld reload
    

    I'm certain I'm doing some things wrong and this file can be shortened and improved, but this is also my point: I'm writing a file specifically for my needs, that is not peer reviewed.

    Upgrades are also not necessarily trivial. While in theory it's as simple as updating the image tag in my playbook variables, in practice things get more complex when some containers depend on others.

    [!info] With Ansible, I must describe precisely the steps my server has to go through to deploy the new containers, how to check their health, and how to roll back if needed.

    Finally, discoverability of services is not great. I use traefik as a reverse proxy, and gave it access to the docker socket so it could read the labels of my other containers (like in the labels section of the yaml file above, containing the domain to use for Synapse), figure out what domain names I used, and route traffic to the correct containers. I wish a similar mechanism existed for e.g. prometheus to find new resources and scrape their metrics automatically, but I didn't find one. Configuring Prometheus to scrape my pods was brittle and required a lot of manual work.

    Working with Kubernetes and its community

    What I need is a tool that lets me write "I want to deploy version X of Keycloak." I need it to be able to figure out by itself what version is currently running, what needs to be done to deploy version X, whether the new deployment is going well, and how to roll back automatically if it can't deploy the new version.

    The good news is that this tool exists. It's called Kubernetes, and contrary to popular belief it's not just for large organizations that run services for millions of people and see surges in traffic for Black Friday sales. Kubernetes is software that runs on one or several servers, forming a cluster, and uses their resources to run the containers you asked it to.

    Kubernetes gives me more standardized deployments

    To deploy services on Kubernetes, you describe what containers to use, how many of them to deploy, how they're related to one another, etc. To do this, you write yaml manifests that you apply to your cluster. Kubernetes takes care of the low-level implementation so you can describe what you want to run, how many resources you want to allocate to it, and how to expose it.

    The Kubernetes docs give the following manifest for an example Deployment that will spin up 3 nginx containers (without exposing them outside of the cluster):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.14.2
            ports:
            - containerPort: 80
    

    From my laptop, I can apply this file to my cluster by using kubectl, the CLI to control Kubernetes clusters.

    $ kubectl apply -f nginx-deployment.yaml
    

    [!info] With Kubernetes, I can describe my infrastructure as yaml files. Kubernetes will read them, deploy the right containers, and monitor their health.

    If there is already a well-established community around a project, chances are that some people already maintain Helm charts for it. Helm charts describe how the containers, volumes, and all the other Kubernetes objects are related to one another. When a project is popular enough, people write charts for it and publish them on https://artifacthub.io/.

    Those charts are open source and can be peer reviewed. They already describe all the containers, services, and other Kubernetes objects needed to make a service run. To use them, I only have to define configuration variables, called Helm values, and I can get a service running in minutes.

    To deploy a fully fledged Keycloak instance on my cluster, I need to override default parameters in a values.yaml file, for example:

    ingress:
      enabled: true
      hostname: keycloak.ergaster.org
      tls: true
    

    This is a short example. In practice, I need to override more values to fine-tune my deployment and make it production-ready. Once that's done, I can deploy a configured Keycloak on my cluster by typing these commands on my laptop:

    $ helm repo add bitnami https://charts.bitnami.com/bitnami
    $ helm install my-keycloak -f values.yaml bitnami/keycloak --version 24.7.4
    

    In practice I don't use helm on my laptop. Instead, I write yaml files in a git repository to describe which Helm charts I want to use on my cluster and how to configure them. Then a piece of software running on my Kubernetes cluster, called Flux, detects changes in the git repository and applies them to my cluster. This makes changes even easier to track and roll back. More on that in a future blog post.
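    As a rough illustration of what lives in that git repository (the names, namespace, and versions here are examples, not necessarily my actual configuration), a Flux HelmRelease for the Keycloak chart could look like this:

```yaml
# Illustrative Flux HelmRelease. Flux watches the git repository containing
# this file and reconciles the cluster to match it; names, namespace, and
# version below are examples only.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: keycloak
  namespace: keycloak
spec:
  interval: 1h            # how often Flux re-checks the release
  chart:
    spec:
      chart: keycloak
      version: "24.7.4"
      sourceRef:
        kind: HelmRepository
        name: bitnami
  values:                  # same overrides as the values.yaml above
    ingress:
      enabled: true
      hostname: keycloak.ergaster.org
      tls: true
```

    Rolling back then becomes a `git revert` on this file rather than a manual helm invocation.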

    Of course, it would be reckless to deploy charts you don't understand, because you wouldn't be able to chase problems down as they arose. But having community-maintained, peer-reviewed charts gives you a solid base to configure and deploy services. This minimizes the room for error.

    [!info] Helm charts let me benefit from the expertise of thousands of experts. I need to understand what they did, but I have a solid foundation to build on

    But while a Kubernetes cluster is easy to use, it can be difficult to set up. Or is it?

    Kubernetes doesn't have to be complex

    A fully fledged Kubernetes cluster is a complex beast. Kubernetes, often abbreviated k8s, was initially designed to run on several machines. When setting up a cluster, you have to choose between components you know nothing about. What do I want for the network of my cluster? I don't know, I don't even know how the cluster network is supposed to work, and I don't want to know! I want a Kubernetes cluster, not a Lego toolset!

    [!info] A fully fledged cluster solves problems for large companies with public facing services, not problems for a home lab.

    Quite a few cloud providers offer managed cluster options so you don't have to worry about it, but they are expensive for an individual. In particular, they charge fees depending on the amount of outgoing traffic (egress fees). Those are difficult to predict.

    Fortunately, a team of brilliant people has created an opinionated bundle of software to deploy a Kubernetes cluster on a single server (though it can form a cluster with several nodes too). They cheekily call it k3s to advertise it as a smaller k8s.

    [!info] For a self-hosting enthusiast who wants to run Kubernetes on a single server, k3s works like k8s, but installing and maintaining the cluster itself is much simpler.

    Since I don't have High Availability needs and can afford to get my services offline occasionally, k3s on a single node is more than enough to let me play with Kubernetes without the extra complexity.

    Installing k3s can be as simple as running the one-liner they advertise on their website:

    $ curl -sfL https://get.k3s.io | sh -
    

    I've found their k3s-ansible playbook more useful, since it performs a few checks on the cluster host and copies the kubectl configuration files to your laptop automatically.

    Discoverability on Kubernetes is fantastic

    In my podman setup, I loved how traefik could read the labels of a container, figure out what domains I used, and where to route traffic for this domain.

    Not only is the same thing true for Kubernetes, it goes further. cert-manager, the standard way to retrieve certificates in a Kubernetes cluster, will read Ingress properties to figure out what domains I use and retrieve certificates for them. If you're not familiar with Kubernetes, an Ingress is a Kubernetes object telling your cluster "I want to expose these containers to the web, and this is how they can be reached."
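    For illustration, an Ingress annotated for cert-manager might look like the sketch below; the issuer name, host, backend service, and port are assumptions, not a real deployment:

```yaml
# Illustrative Ingress. cert-manager sees the annotation, requests a
# certificate for the listed host from the named issuer, and stores it
# in the secret referenced under tls.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # hypothetical issuer name
spec:
  rules:
    - host: keycloak.ergaster.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak   # hypothetical service
                port:
                  number: 8080
  tls:
    - hosts:
        - keycloak.ergaster.org
      secretName: keycloak-tls   # cert-manager fills this secret
```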

    Kubernetes also has an Operator pattern. When I install a service on my cluster, it often comes with a specific Kubernetes object that the Prometheus Operator can read to know how to scrape the service.
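    A sketch of such an object, assuming the Prometheus Operator's ServiceMonitor kind; the names, labels, and port are illustrative:

```yaml
# Illustrative ServiceMonitor. The Prometheus Operator discovers this
# object and configures Prometheus to scrape the matching service,
# with no manual edits to the Prometheus config.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: synapse
  labels:
    release: prometheus   # label the Operator is assumed to select on
spec:
  selector:
    matchLabels:
      app: synapse        # matches the service exposing the metrics port
  endpoints:
    - port: metrics
      interval: 30s
```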

    All of this happens by adding an annotation or two in yaml files. I don't have to fiddle with networks. I don't have to configure things manually. Kubernetes handles all that complexity for me.

    Conclusion

    It was a surprise for me to realize that Kubernetes is not the complex beast I thought it was. Kubernetes internals can be difficult to grasp, and there is a steeper learning curve than with docker-compose. But it's well worth the effort.

    I got into Kubernetes by deploying k3s on my Raspberry Pi 4. I then moved to a beefier mini PC, not because Kubernetes added too much overhead, but because the CPU of the Raspberry Pi 4 is too weak to handle my encrypted backups.

    With Kubernetes and Helm, I have more standardized deployments. The open source services I deploy have been crafted and reviewed by a community of enthusiasts and professionals. Kubernetes handles a lot of the complexity for me, so I don't have to. My k3s cluster runs on a single node. My volumes live on my disk (via Rancher's local path provisioner). And I still don't do Black Friday sales!


      Andy Wingo: guile lab notebook: on the move!

      news.movim.eu / PlanetGnome • 8 July • 5 minutes

    Hey, a quick update, then a little story. The big news is that I got Guile wired to a moving garbage collector!

    Specifically, this is the mostly-moving collector with conservative stack scanning. Most collections will mark objects in place. When the collector wants to compact, it will scan ambiguous roots at the beginning of the collection cycle, marking objects referenced by such roots in place. Then the collector will select some blocks for evacuation, and when visiting an object in those blocks, it will try to copy the object to one of the evacuation target blocks that are held in reserve. If the collector runs out of space in the evacuation reserve, it falls back to marking in place.

    Given that the collector has to cope with failed evacuations, it is easy to give it the ability to pin any object in place. This proved useful when making the needed modifications to Guile: for example, when we copy a stack slice containing ambiguous references to a heap-allocated continuation, we eagerly traverse that stack to pin the referents of those ambiguous edges. Also, whenever the address of an object is taken and exposed to Scheme, we pin that object. This happens frequently for identity hashes (hashq).

    Anyway, the bulk of the work here was a pile of refactors to Guile to allow a centralized scm_trace_object function to be written, exposing some object representation details to the internal object-tracing function definition while not exposing them to the user in the form of API or ABI.

    bugs

    I found quite a few bugs. Not many of them were in Whippet, but some were, and a few are still there; Guile exercises a GC more than my test workbench is able to. Today I’d like to write about a funny one that I haven’t fixed yet.

    So, small objects in this garbage collector are managed by a Nofl space. During a collection, each pointer-containing reachable object is traced by a global user-supplied tracing procedure. That tracing procedure should call a collector-supplied inline function on each of the object's fields. Obviously the procedure needs a way to distinguish between different kinds of objects, to trace them appropriately; in Guile, we use the low bits of the initial word of heap objects for this purpose.

    Object marks are stored in a side table in associated 4-MB aligned slabs, with one mark byte per granule (16 bytes). 4 MB is 0x400000, so for an object at address A, its slab base is at A & ~0x3fffff, and the mark byte is offset by (A & 0x3fffff) >> 4. When the tracer sees an edge into a block scheduled for evacuation, it first checks the mark byte to see if the object is already marked in place; in that case there's nothing to do. Otherwise it will try to evacuate the object, which proceeds as follows...
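    The address arithmetic above can be sketched in C; the function names here are mine, not Whippet's actual API:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the mark-byte lookup described above. Slabs are 4 MB
 * aligned, with one mark byte per 16-byte granule. */
#define SLAB_SIZE 0x400000UL   /* 4 MB */
#define GRANULE_SIZE 16UL

/* Slab base for an object at addr: clear the low 22 bits (A & ~0x3fffff). */
static uintptr_t slab_base(uintptr_t addr) {
  return addr & ~(SLAB_SIZE - 1);
}

/* Offset of the object's mark byte within the slab's side table:
 * (A & 0x3fffff) >> 4, i.e. the granule index within the slab. */
static size_t mark_byte_offset(uintptr_t addr) {
  return (addr & (SLAB_SIZE - 1)) >> 4;
}
```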

    But before you read, consider that there are a number of threads which all try to make progress on the worklist of outstanding objects needing tracing (the grey objects). The mutator threads are paused; though we will probably add concurrent tracing at some point, we are unlikely to implement concurrent evacuation. But it could be that two GC threads try to process two different edges to the same evacuatable object at the same time, and we need to do so correctly!

    With that caveat out of the way, the implementation is here . The user has to supply an annoyingly-large state machine to manage the storage for the forwarding word; Guile’s is here . Basically, a thread will try to claim the object by swapping in a busy value (-1) for the initial word. If that worked, it will try to allocate space for the object. If allocation failed, it marks the object in place, then restores the first word. Otherwise it installs a forwarding pointer in the first word of the object’s old location, which has a specific tag in its low 3 bits allowing forwarded objects to be distinguished from other kinds of object.
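To make the state machine's shape easier to follow, here is a single-threaded Python model of the claim/evacuate protocol described above. All names (`claim_and_evacuate`, `BUSY`, the `("forwarded", …)` tuple) are invented for illustration; the real collector uses an atomic swap on the object's first word and a low-bit pointer tag, not Python assignment.

```python
BUSY = -1  # stands in for the busy value swapped into the first word

def claim_and_evacuate(obj, allocate, mark_in_place):
    """Model: obj is a list whose first element is the header word.
    Returns the object's tospace copy, or obj itself if marked in place."""
    header = obj[0]
    if isinstance(header, tuple) and header[0] == "forwarded":
        return header[1]              # already evacuated: follow the pointer
    obj[0] = BUSY                     # claim (an atomic swap in the real code)
    copy = allocate(obj)              # try to get tospace storage
    if copy is None:                  # allocation failed:
        mark_in_place(obj)            #   mark the object where it is,
        obj[0] = header               #   then restore the original first word
        return obj
    copy[:] = obj                     # copy the payload...
    copy[0] = header                  # ...with the real header, not BUSY
    obj[0] = ("forwarded", copy)      # install the tagged forwarding pointer
    return copy
```

This mirrors the three outcomes in the prose: follow an existing forwarding pointer, mark in place on allocation failure, or evacuate and forward.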

    I don’t know how to prove this kind of operation correct, and probably I should learn how to do so. I think it’s right, though, in the sense that either the object gets marked in place or evacuated, all edges get updated to the tospace locations, and the thread that shades the object grey (and no other thread) will enqueue the object for further tracing (via its new location if it was evacuated).

    But there is an invisible bug, and one that is the reason for me writing these words :) Whichever thread manages to shade the object from white to grey will enqueue it on its grey worklist. Let’s say the object is on a block to be evacuated, but evacuation fails, and the object gets marked in place. But concurrently, another thread goes to do the same; it turns out there is a timeline in which thread A has marked the object and published it to a worklist for tracing, but thread B has briefly swapped out the object’s first word for the busy value before realizing the object was marked. The object might then be traced with its initial word stompled, which is totally invalid.

    What’s the fix? I do not know. Probably I need to manage the state machine within the side array of mark bytes, and not split between the two places (mark byte and in-object). Anyway, I thought that readers of this web log might enjoy a look in the window of this clown car.

    next?

    The obvious question is, how does it perform? Basically I don’t know yet; I haven’t done enough testing, and some of the heuristics need tweaking. As it is, it appears to be a net improvement over the non-moving configuration and a marginal improvement over BDW, though currently with more variance. I am deliberately imprecise here because I have been more focused on correctness than performance; measuring properly takes time, and as you can see from the story above, there are still a couple of correctness issues. I will be sure to let folks know when I have something. Until then, happy hacking!


      Nancy Nyambura: Outreachy Update: Two Weeks of Configs, Word Lists, and GResource Scripting

      news.movim.eu / PlanetGnome • 7 July • 1 minute

    It has been a busy two weeks of learning as I continued to develop the GNOME Crosswords project. I have been mainly engaged in improving how word lists are managed and included using configuration files.

    I started by writing documentation for how to add a new word list to the project by using .conf files. The configuration files define properties like display name, language, and origin of the word list so that contributors can simply add new vocabulary datasets. Each word list can optionally pull in definitions from Wiktionary and parse them, converting them into resource files for use by the game.

    In addition, I wrote a script that takes config file contents and turns them into GResource XML files. This isn’t the bulk of the project, but a useful tool that automates part of the setup and ensures consistency between different word list entries. It takes in a .conf file and outputs a corresponding .gresource.xml.in file, mapping the necessary resources to suitable aliases. This was a good chance for me to learn more about Python’s argparse and configparser modules.
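A minimal sketch of such a conversion, assuming a hypothetical `[Word List]` section with `DisplayName` and `Filename` keys (the actual key names and `.gresource.xml.in` layout in GNOME Crosswords may well differ):

```python
import configparser
from xml.sax.saxutils import escape

def conf_to_gresource(conf_text, prefix="/org/gnome/Crosswords/wordlists"):
    """Turn word-list .conf contents into .gresource.xml.in text.
    Section and key names here are assumptions for illustration."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    wl = cp["Word List"]                       # assumed section name
    # Derive a resource alias from the display name, e.g. "british-english"
    alias = wl.get("DisplayName", "wordlist").lower().replace(" ", "-")
    return "\n".join([
        '<?xml version="1.0" encoding="UTF-8"?>',
        "<gresources>",
        f'  <gresource prefix="{escape(prefix)}">',
        f'    <file alias="{escape(alias)}.dict">{escape(wl["Filename"])}</file>',
        "  </gresource>",
        "</gresources>",
    ])
```

A real version would add an argparse front end for the input and output paths, which is where those two modules meet.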

    Beyond scripting, I’ve been in regular communication with my mentor, seeking feedback and guidance to improve both my technical and collaborative skills. One key takeaway has been the importance of sharing smaller, incremental commits rather than submitting a large block of work all at once, a practice that not only helps with clarity but also encourages consistent progress tracking. I was also advised to avoid relying on AI-generated code and instead focus on writing clear, simple, and understandable solutions, which I’ve consciously applied to both my code and documentation.

    Next, I’ll be looking into how definitions are extracted and how importer modules work. Lots more to discover, especially about the innards of the Wiktionary extractor tool.

    Looking forward to sharing more updates as I get deeper into the project.


      Bradley M. Kuhn: Copyleft-next Relaunched!

      news.movim.eu / PlanetGnome • 6 July

    I am excited that Richard Fontana and I have announced the relaunch of copyleft-next .

    The copyleft-next project seeks to create a copyleft license for the next generation that is designed in public, by the community, using standard processes for FOSS development.

    If this interests you, please join the mailing list and follow the project on the fediverse (on its Mastodon instance) .

    I also wanted to note that as part of this launch, I moved my personal fediverse presence from floss.social to bkuhn@copyleft.org .


      Jussi Pakkanen: Deoptimizing a red-black tree

      news.movim.eu / PlanetGnome • 5 July • 1 minute

    An ordered map is typically slower than a hash map, but it is needed every now and then. Thus I implemented one in Pystd . This implementation does not use individually allocated nodes, but instead stores all data in a single contiguous array.

    Implementing the basics was not particularly difficult. Debugging it to actually work took ages of staring at the debugger, drawing trees by hand on paper, printfing things out in Graphviz format and copypasting the output to a visualiser. But eventually I got it working. Performance measurements showed that my implementation is faster than std::map but slower than std::unordered_map .

    So far so good.

    The test application creates a tree with a million random integers. This means that the nodes are most likely in a random order in the backing store and searching through them causes a lot of cache misses. Having all nodes in an array means we can rearrange them for better memory access patterns.

    I wrote some code to reorganize the nodes so that the root is at the first spot and the remaining nodes are stored layer by layer. In this way the query always processes memory "left to right", making things easier for the branch predictor.
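The layer-by-layer reordering amounts to a breadth-first renumbering. A sketch in Python, assuming a hypothetical array-backed representation where each node is a `(key, left, right)` tuple with child indices (-1 for none):

```python
from collections import deque

def reorder_bfs(nodes, root):
    """Return a new node array laid out level by level, root at index 0."""
    order = []                       # old indices in breadth-first order
    queue = deque([root])
    while queue:
        i = queue.popleft()
        if i == -1:
            continue                 # skip missing children
        order.append(i)
        _, left, right = nodes[i]
        queue.append(left)
        queue.append(right)
    remap = {old: new for new, old in enumerate(order)}
    # Rebuild nodes in the new order, translating child indices as we go.
    return [(nodes[o][0],
             remap.get(nodes[o][1], -1),
             remap.get(nodes[o][2], -1)) for o in order]
```

The real Pystd implementation is in C++ and would also have to carry colors and parent links along; this only shows the renumbering idea.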

    Or that's the theory anyway. In practice this made things slower. And not even a bit slower, the "optimized version" was more than ten percent slower. Why? I don't know. Back to the drawing board.

    Maybe interleaving both the left and right child node next to each other is the problem? That places two mutually exclusive pieces of data on the same cache line. An alternative would be to place the entire left subtree in one memory area and the right one in a separate one. After thinking about this for a while, I realized this can be accomplished by storing the nodes in in-order traversal order, i.e. in numerical order.
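A sketch of that alternative layout, assuming a hypothetical `(key, left, right)` array representation with -1 for a missing child: an in-order walk emits each left subtree contiguously before its parent, so the nodes end up in key order.

```python
def reorder_inorder(nodes, root):
    """Return (new node array in in-order layout, new root index)."""
    order = []                       # old indices in in-order (sorted) order
    def walk(i):
        if i == -1:
            return
        _, left, right = nodes[i]
        walk(left)                   # entire left subtree first...
        order.append(i)              # ...then the node itself...
        walk(right)                  # ...then the right subtree
    walk(root)
    remap = {old: new for new, old in enumerate(order)}
    out = [(nodes[o][0],
            remap.get(nodes[o][1], -1),
            remap.get(nodes[o][2], -1)) for o in order]
    return out, remap[root]
```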

    I did that. It was also slower than a random layout. Why? Again: no idea.

    Time to focus on something else, I guess.