
    •

      Jussi Pakkanen: AI slop is the new smoking

      news.movim.eu / PlanetGnome • 18:40

    •

      Thibault Martin: Kubernetes is not just for Black Friday

      news.movim.eu / PlanetGnome • 2 days ago - 10:00 • 12 minutes

    I self-host services mostly for myself. My threat model is particular: the highest threats I face are my own incompetence and hardware failures. To mitigate the weight of my incompetence, I relied on podman containers to minimize the amount of things I could misconfigure. I also wrote ansible playbooks to deploy the containers on my VPS, thus making it easy to redeploy them elsewhere if my VPS failed.

    I've always ruled out Kubernetes as too complex machinery designed for large organizations who face significant surges in traffic during specific events like Black Friday sales. I thought Kubernetes had too many moving parts and would work against my objectives.

    I was wrong. Kubernetes is not just for large organizations with scalability needs I will never have. Kubernetes makes perfect sense for a homelabber who cares about having a simple, sturdy setup. It has fewer moving parts than my podman and ansible setup, more standard development and deployment practices, and it allows me to rely on the cumulative expertise of thousands of experts.

    I don't want to do things manually or alone

    Self-hosting services is much more difficult than just putting them online. This is a hobby for me, something I do in my free time, so I need to spend as little time on maintenance as possible. I also know I don't have peers to review my deployments. If I have to choose between using standardized methods that have been reviewed by others or doing things my own way, I will use the standardized method.

    My main threats are:

    I can and will make mistakes. I am an engineer, but my current job is not to maintain services online. In my homelab, I am also a team of one. This means I don't have colleagues to spot the mistakes I make.

    [!info] This means I need to use battle-tested and standardized software and deployment methods.

    I have limited time for it. I am not on call 24/7. I want to enjoy time off with my family. I have work to do. I can't afford to spend my life in front of a computer to figure out what's wrong.

    [!info] This means I need to have a reliable deployment. I need to be notified when something goes in the wrong direction, and when something has gone completely wrong.

    My hardware can fail, or be stolen. Having working backups is critical. But if my hardware failed, I would still need to restore backups somewhere.

    [!info] This means I need to be able to rebuild my infrastructure quickly and reliably, and restore backups on it.

    I was doing things too manually and alone

    Since I wanted to get standardized software, containers seemed like a good idea. podman was particularly interesting to me because it can generate systemd services that will keep the containers up and running across restarts.

    I could have deployed the containers manually on my VPS and generated the systemd services by invoking the CLI. But I would then risk making small tweaks on the spot, resulting in a deployment that is difficult to replicate elsewhere.

    Instead, I wrote an ansible playbook based on the containers.podman collection and other ansible modules. This way, ansible deploys the right containers on my VPS, copies or updates the config files for my services, and I can easily replicate this elsewhere.

    It has served me well and worked decently for years now, but I'm starting to see the limits of this approach. Indeed, in their introduction, the ansible maintainers state:

    Ansible uses simple, human-readable scripts called playbooks to automate your tasks. You declare the desired state of a local or remote system in your playbook. Ansible ensures that the system remains in that state.

    This is mostly true for ansible, but this is not really the case for the podman collection. In practice I still have to do manual steps in a specific order, like creating a pod first, then adding containers to the pod, then generating a systemd service for the pod, etc.

    To give you a very concrete example, this is what the tasks/main.yaml of my Synapse (Matrix) server deployment role looks like.

    - name: Create synapse pod
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        state: created
    
    - name: Stop synapse pod
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        state: stopped
    
    - name: Create synapse's postgresql
      containers.podman.podman_container:
        name: synapse-postgres
        image: docker.io/library/postgres:{{ synapse_container_pg_tag }}
        pod: pod-synapse
        volume:
          - synapse_pg_pdata:/var/lib/postgresql/data
          - synapse_backup:/tmp/backup
        env:
          {
            "POSTGRES_USER": "{{ synapse_pg_username }}",
            "POSTGRES_PASSWORD": "{{ synapse_pg_password }}",
            "POSTGRES_INITDB_ARGS": "--encoding=UTF-8 --lc-collate=C --lc-ctype=C",
          }
    
    - name: Copy Postgres config
      ansible.builtin.copy:
        src: postgresql.conf
        dest: /var/lib/containers/storage/volumes/synapse_pg_pdata/_data/postgresql.conf
        mode: "600"
    
    - name: Create synapse container and service
      containers.podman.podman_container:
        name: synapse
        image: docker.io/matrixdotorg/synapse:{{ synapse_container_tag }}
        pod: pod-synapse
        volume:
          - synapse_data:/data
          - synapse_backup:/tmp/backup
        labels:
          {
            "traefik.enable": "true",
            "traefik.http.routers.synapse.entrypoints": "websecure",
            "traefik.http.routers.synapse.rule": "Host(`matrix.{{ base_domain }}`)",
            "traefik.http.services.synapse.loadbalancer.server.port": "8008",
            "traefik.http.routers.synapse.tls": "true",
            "traefik.http.routers.synapse.tls.certresolver": "letls",
          }
    
    - name: Copy Synapse's homeserver configuration file
      ansible.builtin.template:
        src: homeserver.yaml.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/homeserver.yaml
        mode: "600"
    
    - name: Copy Synapse's logging configuration file
      ansible.builtin.template:
        src: log.config.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/{{ matrix_server_name }}.log.config
        mode: "600"
    
    - name: Copy Synapse's signing key
      ansible.builtin.template:
        src: signing.key.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/{{ matrix_server_name }}.signing.key
        mode: "600"
    
    - name: Generate the systemd unit for Synapse
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        generate_systemd:
          path: /etc/systemd/system
          restart_policy: always
    
    - name: Enable synapse unit
      ansible.builtin.systemd:
        name: pod-pod-synapse.service
        enabled: true
        daemon_reload: true
    
    - name: Make sure synapse is running
      ansible.builtin.systemd:
        name: pod-pod-synapse.service
        state: started
        daemon_reload: true
    
    - name: Allow traffic in monitoring firewalld zone for synapse metrics
      ansible.posix.firewalld:
        zone: internal
        port: "9000/tcp"
        permanent: true
        state: enabled
      notify: firewalld reload
    

    I'm certain I'm doing some things wrong and this file can be shortened and improved, but this is also my point: I'm writing a file specifically for my needs, that is not peer reviewed.

    Upgrades are also not necessarily trivial. While in theory it's as simple as updating the image tag in my playbook variables, in practice things get more complex when some containers depend on others.

    [!info] With Ansible, I must describe precisely the steps my server has to go through to deploy the new containers, how to check their health, and how to roll back if needed.

    Finally, discoverability of services is not great. I used traefik as a reverse proxy, and gave it access to the docker socket so it could read the labels of my other containers (like in the labels section of the yaml file above, containing the domain to use for Synapse), figure out what domain names I used, and route traffic to the correct containers. I wish a similar mechanism existed for e.g. Prometheus to find new resources and scrape their metrics automatically, but I didn't find one. Configuring Prometheus to scrape my pods was brittle and required a lot of manual work.

    Working with Kubernetes and its community

    What I need is a tool that lets me write "I want to deploy the version X of Keycloak." I need it to be able to figure out by itself what version is currently running, what needs to be done to deploy version X, whether the new deployment goes well, and how to roll back automatically if it can't deploy the new version.

    The good news is that this tool exists. It's called Kubernetes, and contrary to popular belief it's not just for large organizations that run services for millions of people and see surges in traffic for Black Friday sales. Kubernetes is software that runs on one or several servers, forming a cluster, and uses their resources to run the containers you asked it to.

    Kubernetes gives me more standardized deployments

    To deploy services on Kubernetes, you have to describe what containers to use, how many of them will be deployed, how they're related to one another, etc. To describe these, you use yaml manifests that you apply to your cluster. Kubernetes takes care of the low-level implementation so you can describe what you want to run, how many resources you want to allocate to it, and how to expose it.

    The Kubernetes docs give the following manifest for an example Deployment that will spin up 3 nginx containers (without exposing them outside of the cluster):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.14.2
            ports:
            - containerPort: 80
    

    From my laptop, I can apply this file to my cluster by using kubectl, the CLI to control Kubernetes clusters.

    $ kubectl apply -f nginx-deployment.yaml
    

    [!info] With Kubernetes, I can describe my infrastructure as yaml files. Kubernetes will read them, deploy the right containers, and monitor their health.

    If there is already a well-established community around the project, chances are that some people already maintain Helm charts for it. Helm charts describe how the containers, volumes and all the other Kubernetes objects are related to one another. When a project is popular enough, people will write charts for it and publish them on https://artifacthub.io/.

    Those charts are open source, and can be peer reviewed. They already describe all the containers, services, and other Kubernetes objects that need to be deployed to make a service run. To use them, I only have to define configuration variables, called Helm values, and I can get a service running in minutes.

    To deploy a fully fledged Keycloak instance on my cluster, I need to override default parameters in a values.yaml file, for example:

    ingress:
      enabled: true
      hostname: keycloak.ergaster.org
      tls: true
    

    This is a short example. In practice, I need to override more values to fine-tune my deployment and make it production-ready. Once that's done, I can deploy a configured Keycloak on my cluster by typing these commands on my laptop:

    $ helm repo add bitnami https://charts.bitnami.com/bitnami
    $ helm install my-keycloak -f values.yaml bitnami/keycloak --version 24.7.4
    

    In practice I don't use helm on my laptop. Instead, I write yaml files in a git repository to describe which Helm charts I want to use on my cluster and how to configure them. Then Flux, a software suite running on my Kubernetes cluster, detects changes in the git repository and applies them to my cluster. It makes changes even easier to track and roll back. More on that in a future blog post.
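    To give a flavor of this, here is a minimal sketch of what such a declaration can look like with Flux's HelmRelease object. The names, namespaces, and interval are illustrative, and it assumes a HelmRepository object pointing at the Bitnami repo already exists in the cluster:

```yaml
# Sketch of a Flux HelmRelease for the same Bitnami Keycloak chart.
# Assumes a HelmRepository named "bitnami" exists in flux-system.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-keycloak
  namespace: keycloak
spec:
  interval: 1h           # how often Flux reconciles the release
  chart:
    spec:
      chart: keycloak
      version: "24.7.4"  # pinned, so upgrades are explicit git changes
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
  values:                # same overrides as the values.yaml above
    ingress:
      enabled: true
      hostname: keycloak.ergaster.org
      tls: true
```

    Committing a change to the pinned version is then all it takes to trigger an upgrade, and reverting the commit rolls it back.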

    Of course, it would be reckless to deploy charts you don't understand, because you wouldn't be able to chase problems down as they arose. But having community-maintained, peer-reviewed charts gives you a solid base to configure and deploy services. This minimizes the room for error.

    [!info] Helm charts let me benefit from the expertise of thousands of experts. I need to understand what they did, but I have a solid foundation to build on

    But while a Kubernetes cluster is easy to use, it can be difficult to set up. Or is it?

    Kubernetes doesn't have to be complex

    A fully fledged Kubernetes cluster is a complex beast. Kubernetes, often abbreviated k8s, was initially designed to run on several machines. When setting up a cluster, you have to choose between components you know nothing about. What do I want for the network of my cluster? I don't know, I don't even know how the cluster network is supposed to work, and I don't want to know! I want a Kubernetes cluster, not a Lego toolset!

    [!info] A fully fledged cluster solves problems for large companies with public facing services, not problems for a home lab.

    Quite a few cloud providers offer managed cluster options so you don't have to worry about it, but they are expensive for an individual. In particular, they charge fees depending on the amount of outgoing traffic (egress fees). Those are difficult to predict.

    Fortunately, a team of brilliant people has created an opinionated bundle of software to deploy a Kubernetes cluster on a single server (though it can form a cluster with several nodes too). They cheekily call it k3s to advertise it as a small k8s.

    [!info] For a self-hosting enthusiast who wants to run Kubernetes on a single server, k3s works like k8s, but installing and maintaining the cluster itself is much simpler.

    Since I don't have High Availability needs and can afford to have my services go offline occasionally, k3s on a single node is more than enough to let me play with Kubernetes without the extra complexity.

    Installing k3s can be as simple as running the one-liner they advertise on their website

    $ curl -sfL https://get.k3s.io | sh -
    

    I've found their k3s-ansible playbook more useful, since it performs a few checks on the cluster host and copies the kubectl configuration files to your laptop automatically.

    Discoverability on Kubernetes is fantastic

    In my podman setup, I loved how traefik could read the labels of a container, figure out what domains I used, and where to route traffic for this domain.

    Not only is the same thing true for Kubernetes, it goes further. cert-manager, the standard way to retrieve certs in a Kubernetes cluster, will read Ingress properties to figure out what domains I use and retrieve certificates for them. If you're not familiar with Kubernetes, an Ingress is a Kubernetes object telling your cluster "I want to expose those containers to the web, and this is how they can be reached."
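    As an illustration, this is roughly what such an Ingress can look like with a cert-manager annotation. The host, Secret name, and ClusterIssuer name here are made up for the example:

```yaml
# Sketch: cert-manager reads the annotation and the tls section,
# requests a certificate for the host, and stores it in the Secret.
# Assumes a ClusterIssuer named "letsencrypt" exists.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: synapse
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  rules:
    - host: matrix.example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: synapse
                port:
                  number: 8008
  tls:
    - hosts:
        - matrix.example.org
      secretName: synapse-tls
```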

    Kubernetes also has an Operator pattern. When I install a service on my Kubernetes cluster, it often comes with a specific Kubernetes object that the Prometheus Operator can read to know how to scrape the service.
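    That object is typically a ServiceMonitor. A minimal sketch (the labels and port name are illustrative): the Operator finds it, matches the labeled Service, and configures Prometheus to scrape it, with no hand-written scrape config.

```yaml
# Sketch of a Prometheus Operator ServiceMonitor.
# Assumes a Service labeled app=synapse exposes a port named "metrics".
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: synapse
spec:
  selector:
    matchLabels:
      app: synapse
  endpoints:
    - port: metrics
      interval: 30s
```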

    All of this happens by adding an annotation or two in yaml files. I don't have to fiddle with networks. I don't have to configure things manually. Kubernetes handles all that complexity for me.

    Conclusion

    It was a surprise for me to realize that Kubernetes is not the complex beast I thought it was. Kubernetes internals can be difficult to grasp, and there is a steeper learning curve than with docker-compose. But it's well worth the effort.

    I got into Kubernetes by deploying k3s on my Raspberry Pi 4. I then moved to a beefier mini PC, not because Kubernetes added too much overhead, but because the CPU of the Raspberry Pi 4 is too weak to handle my encrypted backups.

    With Kubernetes and Helm, I have more standardized deployments. The open source services I deploy have been crafted and reviewed by a community of enthusiasts and professionals. Kubernetes handles a lot of the complexity for me, so I don't have to. My k3s cluster runs on a single node. My volumes live on my disk (via Rancher's local path provisioner). And I still don't do Black Friday sales!

    •

      This post is public

      ergaster.org /posts/2025/07/09-kubernetes-black-friday/

    •

      Andy Wingo: guile lab notebook: on the move!

      news.movim.eu / PlanetGnome • 3 days ago - 14:28 • 5 minutes

    Hey, a quick update, then a little story. The big news is that I got Guile wired to a moving garbage collector!

    Specifically, this is the mostly-moving collector with conservative stack scanning. Most collections will be marked in place. When the collector wants to compact, it will scan ambiguous roots in the beginning of the collection cycle, marking objects referenced by such roots in place. Then the collector will select some blocks for evacuation, and when visiting an object in those blocks, it will try to copy the object to one of the evacuation target blocks that are held in reserve. If the collector runs out of space in the evacuation reserve, it falls back to marking in place.

    Given that the collector has to cope with failed evacuations, it is easy to give it the ability to pin any object in place. This proved useful when making the needed modifications to Guile: for example, when we copy a stack slice containing ambiguous references to a heap-allocated continuation, we eagerly traverse that stack to pin the referents of those ambiguous edges. Also, whenever the address of an object is taken and exposed to Scheme, we pin that object. This happens frequently for identity hashes ( hashq ).

    Anyway, the bulk of the work here was a pile of refactors to Guile to allow a centralized scm_trace_object function to be written, exposing some object representation details to the internal object-tracing function definition while not exposing them to the user in the form of API or ABI.

    bugs

    I found quite a few bugs. Not many of them were in Whippet, but some were, and a few are still there; Guile exercises a GC more than my test workbench is able to. Today I’d like to write about a funny one that I haven’t fixed yet.

    So, small objects in this garbage collector are managed by a Nofl space. During a collection, each pointer-containing reachable object is traced by a global user-supplied tracing procedure. That tracing procedure should call a collector-supplied inline function on each of the object’s fields. Obviously the procedure needs a way to distinguish between different kinds of objects, to trace them appropriately; in Guile, we use the low bits of the initial word of heap objects for this purpose.

    Object marks are stored in a side table in associated 4-MB aligned slabs, with one mark byte per granule (16 bytes). 4 MB is 0x400000, so for an object at address A, its slab base is at A & ~0x3fffff , and the mark byte is offset by (A & 0x3fffff) >> 4 . When the tracer sees an edge into a block scheduled for evacuation, it first checks the mark byte to see if it’s already marked in place; in that case there’s nothing to do. Otherwise it will try to evacuate the object, which proceeds as follows...
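    The address arithmetic above can be sketched in a few lines (illustrative Python with a made-up address, not collector code):

```python
# Slabs are 4 MB aligned; one mark byte per 16-byte granule.
SLAB_SIZE = 0x400000  # 4 MB

def slab_base(addr):
    # Clear the low 22 bits: A & ~0x3fffff
    return addr & ~(SLAB_SIZE - 1)

def mark_byte_offset(addr):
    # Offset within the slab, one byte per 16-byte granule: (A & 0x3fffff) >> 4
    return (addr & (SLAB_SIZE - 1)) >> 4

addr = 0x7f3a12345678
print(hex(slab_base(addr)))        # -> 0x7f3a12000000
print(hex(mark_byte_offset(addr)))  # -> 0x34567
```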

    But before you read, consider that there are a number of threads which all try to make progress on the worklist of outstanding objects needing tracing (the grey objects). The mutator threads are paused; though we will probably add concurrent tracing at some point, we are unlikely to implement concurrent evacuation. But it could be that two GC threads try to process two different edges to the same evacuatable object at the same time, and we need to do so correctly!

    With that caveat out of the way, the implementation is here. The user has to supply an annoyingly-large state machine to manage the storage for the forwarding word; Guile’s is here. Basically, a thread will try to claim the object by swapping in a busy value (-1) for the initial word. If that worked, it will allocate space for the object. If that failed, it first marks the object in place, then restores the first word. Otherwise it installs a forwarding pointer in the first word of the object’s old location, which has a specific tag in its low 3 bits allowing forwarded objects to be distinguished from other kinds of object.

    I don’t know how to prove this kind of operation correct, and probably I should learn how to do so. I think it’s right, though, in the sense that either the object gets marked in place or evacuated, all edges get updated to the tospace locations, and the thread that shades the object grey (and no other thread) will enqueue the object for further tracing (via its new location if it was evacuated).

    But there is an invisible bug, and one that is the reason for me writing these words :) Whichever thread manages to shade the object from white to grey will enqueue it on its grey worklist. Let’s say the object is on a block to be evacuated, but evacuation fails, and the object gets marked in place. But concurrently, another thread goes to do the same; it turns out there is a timeline in which thread A has marked the object and published it to a worklist for tracing, but thread B has briefly swapped out the object’s first word with the busy value before realizing the object was marked. The object might then be traced with its initial word stompled, which is totally invalid.

    What’s the fix? I do not know. Probably I need to manage the state machine within the side array of mark bytes, and not split between the two places (mark byte and in-object). Anyway, I thought that readers of this web log might enjoy a look in the window of this clown car.

    next?

    The obvious question is, how does it perform? Basically I don’t know yet; I haven’t done enough testing, and some of the heuristics need tweaking. As it is, it appears to be a net improvement over the non-moving configuration and a marginal improvement over BDW, though one with more variance for now. I am deliberately imprecise here because I have been more focused on correctness than performance; measuring properly takes time, and as you can see from the story above, there are still a couple of correctness issues. I will be sure to let folks know when I have something. Until then, happy hacking!

    •

      This post is public

      wingolog.org /archives/2025/07/08/guile-lab-notebook-on-the-move

    •

      Nancy Nyambura: Outreachy Update: Two Weeks of Configs, Word Lists, and GResource Scripting

      news.movim.eu / PlanetGnome • 4 days ago - 08:41 • 1 minute

    It has been a busy two weeks of learning as I continued to develop the GNOME Crosswords project. I have been mainly engaged in improving how word lists are managed and included using configuration files.

    I started by writing documentation for how to add a new word list to the project by using .conf files. The configuration files define properties like display name, language, and origin of the word list so that contributors can simply add new vocabulary datasets. Each word list can optionally pull in definitions from Wiktionary and parse them, converting them into resource files for use by the game.

    As an addition to this, I also wrote a script that takes config file contents and turns them into GResource XML files. This isn’t the bulk of the project, but a useful tool that automates part of the setup and ensures consistency between different word list entries. It takes in a .conf file and outputs a corresponding .gresource.xml.in file, mapping the necessary resources to suitable aliases. This was a good chance for me to learn more about Python’s argparse and configparser modules.
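    To illustrate the general shape of that kind of transformation: a minimal sketch using configparser. The .conf keys, section name, and GResource prefix here are invented for the example and are not the actual Crosswords format.

```python
import configparser

# Hypothetical .conf layout; the real GNOME Crosswords format differs.
CONF = """
[Word List]
DisplayName = English (UK)
Language = en_GB
Filename = words-en.dict
"""

def conf_to_gresource(conf_text):
    """Read a word-list .conf and emit a minimal .gresource.xml.in body."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    section = parser["Word List"]
    # Map the dictionary file to an alias derived from its language code.
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        "<gresources>\n"
        '  <gresource prefix="/org/example/wordlists">\n'
        f'    <file alias="{section["Language"]}.dict">{section["Filename"]}</file>\n'
        "  </gresource>\n"
        "</gresources>\n"
    )

print(conf_to_gresource(CONF))
```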

    Beyond scripting, I’ve been in regular communication with my mentor, seeking feedback and guidance to improve both my technical and collaborative skills. One key takeaway has been the importance of sharing smaller, incremental commits rather than submitting a large block of work all at once, a practice that not only helps with clarity but also encourages consistent progress tracking. I was also advised to avoid relying on AI-generated code and instead focus on writing clear, simple, and understandable solutions, which I’ve consciously applied to both my code and documentation.

    Next, I’ll be looking into how definitions are extracted and how importer modules work. Lots more to discover, especially about the innards of the Wiktionary extractor tool.

    Looking forward to sharing more updates as I get deeper into the project.

    •

      This post is public

      nanciedev.wordpress.com /2025/07/07/outreachy-update-two-weeks-of-configs-word-lists-and-gresource-scripting/

    •

      Bradley M. Kuhn: Copyleft-next Relaunched!

      news.movim.eu / PlanetGnome • 5 days ago - 09:30

    I am excited that Richard Fontana and I have announced the relaunch of copyleft-next.

    The copyleft-next project seeks to create a copyleft license for the next generation that is designed in public, by the community, using standard processes for FOSS development.

    If this interests you, please join the mailing list and follow the project on the fediverse (on its Mastodon instance).

    I also wanted to note that as part of this launch, I moved my personal fediverse presence from floss.social to bkuhn@copyleft.org .

    •

      This post is public

      ebb.org /bkuhn/blog/2025/07/06/copyleft-next-relaunch.html

    •

      Jussi Pakkanen: Deoptimizing a red-black tree

      news.movim.eu / PlanetGnome • 6 days ago - 13:12 • 1 minute

    An ordered map is typically slower than a hash map, but it is needed every now and then. Thus I implemented one in Pystd. This implementation does not use individually allocated nodes, but instead stores all data in a single contiguous array.

    Implementing the basics was not particularly difficult. Debugging it to actually work took ages of staring at the debugger, drawing trees by hand on paper, printfing things out in Graphviz format and copypasting the output to a visualiser. But eventually I got it working. Performance measurements showed that my implementation is faster than std::map but slower than std::unordered_map.

    So far so good.

    The test application creates a tree with a million random integers. This means that the nodes are most likely in a random order in the backing store and searching through them causes a lot of cache misses. Having all nodes in an array means we can rearrange them for better memory access patterns.

    I wrote some code to reorganize the nodes so that the root is at the first spot and the remaining nodes are stored layer by layer. In this way the query always processes memory "left to right" making things easier for the branch predictor.
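    That layer-by-layer relocation is essentially a breadth-first renumbering of the node array. A toy Python sketch of the idea (not the actual Pystd code, which is C++ and stores red-black metadata too):

```python
from collections import deque

def level_order_layout(nodes, root):
    """Rewrite a (key, left, right) node array so the root sits at index 0
    and each tree layer is stored contiguously, remapping child indices."""
    order = []                 # old indices visited in breadth-first order
    queue = deque([root])
    while queue:
        i = queue.popleft()
        if i is None:
            continue
        order.append(i)
        _, left, right = nodes[i]
        queue.append(left)
        queue.append(right)
    remap = {old: new for new, old in enumerate(order)}
    return [
        (nodes[old][0], remap.get(nodes[old][1]), remap.get(nodes[old][2]))
        for old in order
    ]

# A tiny tree stored in arbitrary order: the root (key 5) is at index 2.
nodes = [(1, None, None), (9, None, None), (5, 0, 1)]
print(level_order_layout(nodes, 2))
# -> [(5, 1, 2), (1, None, None), (9, None, None)]
```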

    Or that's the theory anyway. In practice this made things slower. And not even a bit slower, the "optimized version" was more than ten percent slower. Why? I don't know. Back to the drawing board.

    Maybe interleaving both the left and right child node next to each other is the problem? That places two mutually exclusive pieces of data on the same cache line. An alternative would be to place the entire left subtree in one memory area and the right one in a separate one. Thinking about this for a while, this can be accomplished by storing the nodes in tree traversal order, i.e. in numerical order.

    I did that. It was also slower than a random layout. Why? Again: no idea.

    Time to focus on something else, I guess.

    •

      Steven Deobald: 2025-07-05 Foundation Update

      news.movim.eu / PlanetGnome • 6 days ago - 06:31 • 6 minutes

    ## The Cat’s Out Of The Bag

    Since some of you are bound to see this Reddit comment, and my reply, it’s probably useful for me to address it in a more public forum, even if it violates my “No Promises” rule.

    No, this wasn’t a shoot-from-the-hip reply. This has been the plan since I proposed a fundraising strategy to the Board. It is my intention to direct more of the Foundation’s resources toward GNOME development, once the Foundation’s basic expenses are taken care of. (Currently they are not.) The GNOME Foundation won’t stop running infrastructure, planning GUADEC, providing travel grants, or any of the other good things we do. But rather than the Foundation contributing to GNOME’s development exclusively through inbound/restricted grants, we will start to produce grants and fellowships ourselves.

    This will take time and it will demand more of the GNOME project. The project needs clear governance and management or we won’t know where to spend money, even if we have it. The Foundation won’t become a kingmaker, nor will we run lotteries — it’s up to the project to make recommendations and help us guide the deployment of capital toward our mission.

    ## Friends of GNOME

    So far, we have a cute little start to our fundraising campaign: I count 172 public Friends of GNOME over on https://donate.gnome.org/ … to everyone who contributes to GNOME and to everyone who donates to GNOME: thank you. Every contribution makes a huge difference and it’s been really heartwarming to see all this early support.

    We’ve taken the first step out of our cozy f/oss spaces: Reddit. One user even set up a “show me your donation!” thread. It’s really cute. 🙂 It’s hard to express just how important it is that we go out and meet our users for this exercise. We need them to know what an exciting time it is for GNOME: Windows 10 is dying, MacOS gets worse with every release, and they’re going to run GNOME on a phone soon. We also need them to know that GNOME needs their help.

    Big thanks to Sri for pushing this and to him and Brage for moderating /r/gnome . It matters a lot to find a shared space with users and if, as a contributor, you’ve been feeling like you need a little boost lately, I encourage you to head over to those Reddit threads. People love what you build, and it shows.

    ## Friends of GNOME: Partners

    The next big thing we need to do is to find partners who are willing to help us push a big message out across a lot of channels. We don’t even know who our users are, so it’s pretty hard to reach them. The more people see that GNOME needs their help, the more help we’ll get.

    Everyone I know who runs GNOME (but doesn’t pay much attention to the project) said the same thing when I asked what they wanted in return for a donation: “Nothing really… I just need you to ask me. I didn’t know GNOME needed donations!”

    If you know of someone with a large following or an organization with a lot of reach (or, heck, even a little reach), please email me and introduce me. I’m happy to get them involved to boost us.

    ## Friends of GNOME: Shell Notification

    KDE, Thunderbird, and Blender have had runaway success with their small donation notification. I’m not sure whether we can do this for GNOME 49, but I’d love to try. I’ve opened an issue here:

    https://gitlab.gnome.org/Teams/Design/os-mockups/-/issues/274

    We may not know who our users are. But our software knows who our users are. 😉

    ## Annual Report

    I should really get on this but it’s been a busy week with other things. Thanks everyone who’s contributed their thoughts to the “Successes for 2025” issue so far. If you don’t see your name and you still want to contribute something, please go ahead!

    ## Fiscal Controls

    One of the aforementioned “other things” is Fiscal Controls.

    This concept goes by many names. “Fiscal Controls”, “Internal Controls”, “Internal Policies and Procedures”, etc. But they all refer to the same thing: how to manage financial risk. We’re taking a three-pronged approach to start with:

    1. Reduce spend and tighten up policies. We have put the travel policy on pause (barring GUADEC, which was already approved) and we intend to tighten up all our policies.
    2. Clarity on capital shortages. We need to know exactly what our P&L looks like in any given month, and what our 3-month, 6-month, and annual projections look like based on yesterday’s weather. Our bookkeepers, Ops team, and new treasurers are helping with this.
    3. Clarity in reporting. A 501(c)(3) is … kind of a weird shape. Not everyone on the Board is familiar with running a business and most certainly aren’t familiar with running a non-profit. So we need to make it painfully straightforward for everyone on the Board to understand the details of our financial position, without getting into the weeds: How much money are we responsible for, as a fiscal host? How much money is restricted? How much core money do we have? Accounting is more art than science, and the nuances of reporting accurately (but without forcing everyone to read a balance sheet) are a large part of why that’s the case. Again, we have a lot of help from our bookkeepers, Ops team, and new treasurers.

    There’s a lot of work to do here and we’ll keep iterating, but these feel like strong starts.

    ## Organizational Resilience

    The other aforementioned “other thing” is resilience. We have a few things happening here.

    First, we need broader ownership, control, and access to bank accounts. This is, of course, related to, but different from, fiscal controls — our controls ensure no one person can sign themselves a cheque for $50,000. Multiple signatories ensure that such responsibility doesn’t rest with a single individual. Everyone at the GNOME Foundation has impeccable moral standing, but people do die, and we need to add resilience to that inevitability. More realistically (and immediately), we will be audited soon and the auditors will not care how trustworthy we believe one another to be.

    Second, we have our baseline processes: filing 990s, renewing our registration, renewing insurance, etc. All of these processes should be accessible to (and, preferably, executable by) multiple people.

    Third, we’re finally starting to make good use of Vaultwarden. Thanks again, Bart, for setting this up for us.

    Fourth, we need to ensure we have at least 3 administrators on each of our online accounts. Or, at worst, 2 administrators. Online accounts with an account owner should lean on an organizational account owner (not an individual) that multiple people control together. Thanks Rosanna for helping sort this out.

    Last, we need at least 2 folks with root-level access to all our self-hosted services. This is of course true in the most literal sense, but we also need our SREs to have accounts with each service.

    ## Digital Wellbeing Kickoff

    I’m pleased to announce that the Digital Wellbeing contract has kicked off! The developer who was awarded the contract is Ignacy Kuchciński and he has begun working with Philip and Sam as of Tuesday.

    ## Office Hours

    I had a couple pleasant conversations with hackers this week: Jordan Petridis and Sophie Harold. I asked Sophie what she thought about the idea of “office hours” as I feel like I’ve gotten increasingly disconnected from the community after my first few weeks. Her response was something to the effect of “you can only try.” 🙂

    So let’s do that. I’ll invite maintainers and if you’d like to join, please reach out to a maintainer to find out the BigBlueButton URL for next Friday.

    ## A Hacker In Need Of Help

    We have a hacker in the southwest United States who is currently in an unsafe living situation. This person has given me permission to ask for help on their behalf. If you or someone you know could provide a safe temporary living situation within the continental United States, please get in touch with me. They just want to hack in peace.

      blogs.gnome.org/steven/2025/07/05/2025-07-05-foundation-update/

      Hans de Goede: Recovering a FP2 which gives "flash write failure" errors

      news.movim.eu / PlanetGnome • 7 days ago - 16:14

    This blog post describes my successful OS re-install on a Fairphone 2 which was giving "flash write failure" errors when flashing it with fastboot, using the flash_FP2_factory.sh script. I'm writing down my recovery steps in case they are useful for anyone else.

    I believe that this is caused by the bootloader code which implements fastboot not having the ability to retry recoverable eMMC errors. It is still possible to write the eMMC from Linux which can retry these errors.

    So we can recover by directly fastboot-ing a recovery.img and then flashing things over adb.

    ( See step by step instructions... )

      Richard Littauer: A handful of EDs

      news.movim.eu / PlanetGnome • 2 July

    I had the great privilege of going to UN Open Source Week at the UN, in New York City, last month. At one point, standing on the upper deck and looking out over the East River, I realized that there were more than a few former and current GNOME executive directors present. So, we got a photo.

    Six people in front of a river on a building

    Stormy, Karen, Jeff, me, Steven, and Michael – not an ED, but the host of the event and the former board treasurer – all lined up.

    Fun.

      blogs.gnome.org/richardlitt/2025/07/02/a-handful-of-eds/