

      Sam Thursfield: AI predictions for 2026

      news.movim.eu / PlanetGnome • Yesterday - 20:32 • 7 minutes

    It’s a crazy time to be part of the tech world. I’m happy to be sat on the fringes here, but I want to try and capture a bit of the madness, so in a few years we can look back on this blogpost and think “Oh yes, shit was wild in 2026”.

    (insert some AI slop image here of a raccoon driving a racing car or something)

    I have read the blog of Geoffrey Huntley for about 5 years, since he famously right-clicked all the NFTs. Smart & interesting guy. I’ve also known the name Steve Yegge for a while; he has done enough notable things to get the honour of an entry in Wikipedia. Recently they’ve both written a lot about generating code with LLMs. I mean, I hope in 2026 we’ve all had some fun feeding freeform text and code into LLMs and playing with the results, they are a fascinating tool. But these two dudes are going into what looks like a sort of AI psychosis, where you feed so many LLMs into each other that you can see into the future, and in the process give most of your money to Anthropic.

    It’s worth reading some of their articles if you haven’t, there are interesting ideas in there, but I always pick up some bad energy. They’re big on the hook that, if you don’t study their techniques now, you’ll be out of a job by summer 2026. (Mark Zuckerborg promised this would happen by summer 2025, but somehow I still have to show up for work five days every week). The more I hear this, the more it feels like a sort of alpha-male flex, except online and in the context of the software industry. The alpha tech-bro is here, and he will Vibe Code the fuck out of you. The strong will reign, and the weak will wither. Is that how these guys see the world? Is that the only thing they think we can do with these here computers, is compete with each other in Silicon Valley’s Hunger Games?

    I felt a bit dizzy when I saw Geoffrey’s recent post about how he was now funded by cryptocurrency gamblers (“two AI researchers are now funded by Solana”) who are betting on his project and gifting him the fees. I didn’t manage to understand what the gamblers would win. It seemed for a second like an interesting way to fund open research, although “Patreon but it’s also a casino” is definitely a turn for the weird. Steve Yegge jumped on the bandwagon the same week (“BAGS and the Creator Economy”) and, without breaking any laws, gave us the faintest hint that something big is happening over there.

    Well…

    You’ll be surprised to know that both of them bailed on it within a week. I’m not sure why — I suspect maybe the gamblers got too annoying to deal with — but it seems some people lost some money. Although that’s really the only possible outcome from gambling. I’m sure the casino owners did OK out of it. Maybe it’s still wise to be wary of people who message you out of the blue wanting to sell you cryptocurrency.

    The excellent David Gerard had a write-up immediately on Pivot To AI: “Steve Yegge’s Gas Town: Vibe coding goes crypto scam”. (David is not a crypto scammer and has a good old fashioned Patreon where you can support his journalism). He talks about addiction to AI, which I’m sure you know is a real thing.

    Addictive software was perfected back in the 2010s by social media giants. The same people who had been iterating on gambling machines for decades moved to California and gifted us infinite scroll. OpenAI and Anthropic are based in San Francisco. There’s something inherently addictive about a machine that takes your input, waits a second or two, and gives you back something that’s either interesting or not. Next time you use ChatGPT, look at how the interface leans into that!

    (Pivot To AI also have a great writeup of this: “Generative AI runs on gambling addiction — just one more prompt, bro!”)

    So, here we are in January 2026. There’s something very special about this post, “Stevey’s Birthday Blog”. Happy birthday, Steve, and I’m glad you’re having fun. That said, I do wonder if we’ll look back in years to come on this post as something of an inflection point in the AI bubble.

    All through December I had weird sleeping patterns while I was building Gas Town. I’d work late at night, and then have to take deep naps in the middle of the day. I’d just be working along and boom, I’d drop. I have a pillow and blanket on the floor next to my workstation. I’ll just dive in and be knocked out for 90 minutes, once or often twice a day. At lunch, they surprised me by telling me that vibe coding at scale has messed up their sleep. They get blasted by the nap-strike almost daily, and are looking into installing nap pods in their shared workspace.

    Being addicted to something such that it fucks with your sleeping patterns isn’t a new invention. Ask around the punks in your local area. Humans can do amazing things. That story starts way before computers were invented. Scientists in the 16th century were absolute nutters who would like… drink mercury in the name of discovery. Isaac Newton came up with his theory of optics by skewering himself in the eye. (If you like science history, have a read of Neal Stephenson’s Baroque Cycle 🙂) Coding is fun and making computers do cool stuff can be very addictive. That story starts long before 2026 as well. Have you heard of the demoscene?

    Part of what makes Geoffrey Huntley and Steve Yegge’s writing compelling is that they are telling very interesting stories. They are leaning on existing cultural work to do that, of course. Every time I think about Geoffrey’s 5 line bash loop that feeds an LLM’s output back into its input, the name reminds me of my favourite TV show when I was 12.

    Ralph Wiggum with his head glued to his shoulder. "Miss Hoover? I glued my head to my shoulder."

    Which is certainly better than the “human centipede” metaphor I might have gone with. I wasn’t built for this stuff.

    The Gas Town blog posts are similarly filled with steampunk metaphors, and Steve Yegge’s blog posts are interspersed with generated images that, at first glance, look really cool. “Gas Town” looks like a point and click adventure, at first glance. In fact it’s a CLI that gives kooky names to otherwise dry concepts… but look at the pictures! You can imagine gold coins spewing out of a factory into its moat while you use it.

    All the AI images in his posts look really cool at first glance. The beauty of real art is often in the details, so let’s take a look.

    What is that tower on the right? There’s an owl wearing goggles about to land on a tower… which is also wearing goggles?

    What’s that tiny train on the left that has indistinct creatures about the size of a fox’s fist? I don’t know who on earth is on that bridge on the right, some horrific chimera of weasel and badger. The panda is stoically ignoring the horrors of his creation like a good industrialist.

    What is the time on the clock tower? Where is the other half of the fox? Is the clock powered by …. oh no.

    Gas Town here is a huge factory with 37 chimneys all emitting good old sulphur and carbon dioxide, as God intended. But one question: if you had a factory that could produce large quantities of gold nuggets, would you store them on the outside ?

    Good engineering involves knowing when to look into the details, and when not to. Translating English to code with an LLM is fun and you can get some interesting results. But if you never look at the details, somewhere in your code is a horrific weasel badger chimera, a clock with crooked hands telling a time that doesn’t exist, and half a fox. Your program could make money… or it could spew gold coins all around town where everyone can grab them.

    So… my AI predictions for 2026. Let’s not worry too much about code. People and communities and friendships are the thing.

    The human world is 8 billion people. Many of us make a modest living growing and selling vegetables or fixing cars or teaching children to read and write. The tech industry is a big bubble that’s about to burst. Computers aren’t going anywhere, and our open source communities and foundations aren’t going anywhere. People and communities and friendships are the main thing. Helping out in small ways with some of the bad shit going on in the world. You don’t have to solve everything. Just one small step to help someone is more than many people do.

    Pay attention to what you’re doing. Take care of the details. Do your best to get a good night’s sleep.

    AI in 2026 is going to go about like this:


      Allan Day: GNOME Foundation Update, 2026-01-23

      news.movim.eu / PlanetGnome • 2 days ago - 17:07 • 1 minute

    It’s Friday so it’s time for another GNOME Foundation update. Much of this week has been a continuation of items from last week’s update, so I’m going to keep it fairly short and sweet.

    With FOSDEM happening next week (31st January to 1st February), preparation for the conference was the main standout item this week. There’s a lot happening around the conference for GNOME, including:

    • Three hackfests (GNOME OS, GTK, Board)
    • The Advisory Board meeting
    • A GNOME stand with merchandise
    • A social event on the Saturday
    • Plenty of GNOME-related talks on the schedule

    We’ve created a pad to keep track of everything. Feel free to edit it if anything is missing or incorrect.

    Other activities this week included:

    • Last week I reported that our Digital Wellbeing development program has completed its work. Ignacy provided a great writeup this week, with screenshots and a screencast of the new parental controls features. I’d like to take this opportunity to thank Endless for funding this important work which will make GNOME more accessible to young people and their carers.
    • On the infrastructure side, Bart landed a donate.gnome.org rewrite, which will make the site more maintainable. The rewrite also makes it possible to use the site’s core functionality to run other fundraisers, such as for Flathub or GIMP.
    • GUADEC 2026 planning continues, with a focus on securing arrangements for the venue and accommodation, as well as starting the sponsorship drive.
    • Accounting and systems work also continues in the run up to the audit. We are currently working through another application round to unlock features in the new payments processing platform. There’s also some work happening to phase out some financial services that are no longer used, and we are also working on some end of calendar year tax reports.

    That’s it for this update; I hope you found it interesting! Next week I will be busy at FOSDEM so there won’t be a regular weekly update, but hopefully the following week will contain a trip report from Brussels!


      Luis Villa: two questions on software “sovereignty”

      news.movim.eu / PlanetGnome • 2 days ago - 01:46 • 1 minute

    The EU looks to be getting more serious about software independence, often under the branding of “sovereignty”. India has been taking this path for a while. (A Wikipedia article on that needs a lot of love.) I don’t have coherent thoughts on this yet, but prompted by some recent discussions, two big questions:

    First: does software sovereignty for a geopolitical entity mean:

    1. we wrote the software from the bottom up
    2. we can change the software as necessary (not just hypothetically, but concretely: the technical skills and organizational capacity exist and are experienced)
    3. we sysadmin it (again, concretely: real skills, not just the legal license to download it)
    4. we can download it

    My understanding is that India increasingly demands level 1 for important software systems, though apparently both their national desktop and mobile OSes are based on Ubuntu and Android, respectively, which would be more level 2. (FOSS only guarantees #4; it legally permits 2 and 3, but as I’ve said before, being legally permitted to do a thing is not the same as having the real capability to do the thing.)

    As the EU tries to set open source policy it will be interesting to see whether they can coherently ask this question, much less answer it.

    Second, and related: what would a Manhattan Project to make the EU reasonably independent in core operating system technologies (mobile, desktop, cloud) look like?

    It feels like, if well-managed, such a project could have incredible spillovers for the EU. Besides no longer being held hostage when a US administration goes rogue, students would upskill; project management chops would be honed; new businesses would form. And (in the current moment) it could provide a real rationale and focus for the various EU AI Champions, which often currently feel like their purpose is to “be ChatGPT but not American”.

    But it would be a near-impossible project to manage well: it risks becoming, as Mary Branscombe likes to say, “three SAPs in a trenchcoat”. (Perhaps a more reasonable goal is to be Airbus?)


      This Week in GNOME: #233 Editing Events

      news.movim.eu / PlanetGnome • 2 days ago - 00:00 • 6 minutes

    Update on what happened across the GNOME project in the week from January 16 to January 23.

    GNOME Core Apps and Libraries

    Calendar

    A simple calendar application.

    mindonwarp says

    New

    The event editor dialog now shows organizer and participant information. A new section displays the organizer’s name and email, along with summary rows for participants (individuals and groups) and resources/rooms.

    Selecting a summary row opens a dedicated page listing participants grouped by their participation status. Organizer and individual participant rows include a context menu for copying email addresses.

    What’s next

    Participant information is currently read-only. Planned follow-up work includes enabling participant editing, creating and responding to invitations, and extending the widgets to show additional useful information such as avatars from Contacts and clearer visual cues (for example, a “You” badge). There are also design mockups for reworking this section into a custom widget that surfaces the most important information directly in the event editor.

    Credits

    This contribution was developed with the help and support of the GNOME Calendar team. Special thanks to Philipp Sauberzweig for the UI/UX design, mockups, and guidance toward the MVP, and to Jeff, Hari, Jamie, Titouan, Georges, and others who contributed feedback, reviews, and support.

    (Screenshots: participant details and participants section, light theme)

    Third Party Projects

    Nokse reports

    Exhibit gets animated!

    The latest release adds animation playback and armature visualization, making it easier to preview rigged 3D models directly in GNOME. All thanks to F3D’s latest improvements.

    Get it on Flathub

    Check out F3D

    (Screenshot: Exhibit)

    Bilal Elmoussaoui says

    I have released the first alpha release of oo7 0.6. The release contains various bug fixes to the Rust library but not only that. For this release, we have 3 new shiny components that got released for the first time:

    • oo7-daemon, a replacement for gnome-keyring-daemon or kwallet. It supports both the KDE and GNOME prompting mechanisms, used to ask the user to type their password to unlock their keyring. Note that this is an alpha release, so bugs are expected.
    • oo7-python, Python bindings for oo7, making it possible to use the library from Python.
    • oo7-macros, which provides schema support to oo7.

    francescocaracciolo says

    Newelle 1.2 has been released!

    • ⚡️ Add llama.cpp, with options to recompile it with any backend
    • 📖 Implement a new model library for ollama / llama.cpp
    • 🔎 Implement hybrid search, improving document reading
    • 💻 Add command execution tool
    • 🗂 Add tool groups
    • 🔗 Improve MCP server adding, also supporting STDIO for non-Flatpak installs
    • 📝 Add semantic memory handler
    • 📤 Add ability to import/export chats
    • 📁 Add custom folders to the RAG index
    • ℹ️ Improved message information menu, showing the token count and token speed

    Download it on Flathub

    (Screenshots: Newelle 1.2)

    Dzheremi says

    The Biggest Chronograph Release Ever Happened

    Today, on January 23rd, Chronograph got updated to version 49. In this release, we are glad to introduce you to the new Library.

    Previously, Chronograph used to work with separate directories or files the user opened in it. But this workflow started to fall short the moment we needed Chronograph to support more lyric formats. So the new Library uses databases to store media files and the lyrics assigned to them. Moreover, Chronograph now uses its own lyric format called Chronie. The biggest benefit of switching to Chronie is that it is a universal format that supports all the tags and data other formats consist of. In Chronie, lyrics have separate lines with start/end timestamps, and each line has words, each with its own start/end timestamps. This makes Chronie very universal: once lyrics are stored in it, they can be exported to any other format Chronograph supports now or may support in the future.

    On top of that, Chronograph now consumes WAY less memory than it did before, all thanks to moving from Gtk.FlowBox to Gtk.GridView. With the new Library, memory consumption barely grows even with a large number of files, which is an incredible update, I guess. Previously, opening a library with 1k+ tracks took more than 6 GiB of memory. Now that is a thing of the past!

    Future updates will offer a quick-sync applet like before: simple LRC sync with export to a file, without any Library overhead. And of course, new lyric formats, so stay tuned!

    Sync lyrics of your loved songs 🕒

    (Screenshot: Chronograph 49)

    Vladimir Romanov reports

    ReadySet

    A brief overview of my new app made with Adw/GTK, Vala and libpeas: ReadySet.

    The application is a base for an installer/initial setup application, with plugins. A plugin is a .so file that contains all the necessary logic.

    Current features:

    • Cross-plugin context for determining the functionality of a plugin in relation to other plugins. For example, the user plugin can set the locale of a created user to the language plugin’s locale using the language-locale context variable. Otherwise the en_US.UTF-8 locale is used.
    • Configuration files for launching the application in different distributions, configured by the vendor. They are located in /usr/share/ready-set/ or /etc/ready-set/, or passed via the --conf-file option. A configuration file includes application options as well as context variables.
    • To unify work with any display manager, the polkit rules (or rather, the user in them) are written as a template; a generator substitutes the desired user (--user option) and installs the result into /etc/polkit-1/rules.d from /usr/share/ready-set/rules.d.
    • The ability to determine the availability of a page/plugin in real time and, if necessary, hide a setting that is not needed. Example: when installing ALT Atomic, the image may contain an initial setup wizard, so it is not necessary to create a user during the installation phase.

    Plugins are applied in the order of their placement in steps. Within a plugin, the plugin itself is applied first, then its pages (yes, pages and plugins can have different apply functions).

    It is also possible to specify which plugins will not be used so that they act as data collectors. You can apply them later, for example, with a special plugin without pages.

    Current plugins: language, keyboard, user-{passwdqc,pwquality}. Only phrog is supported for now.

    If you want to test it with phrog, the project repository has an ALT Atomic image, altlinux.space/alt-gnome/ready-set-test-atomic:latest, which can be installed on a virtual machine using a universal installer (you can create any user).

    Most of the plugins’ internal logic was ported from the gnome-initial-setup project.

    Shell Extensions

    storageb says

    Create your own quick settings toggle buttons!

    Custom Command Toggle is a GNOME extension that lets you add fully customizable toggle buttons to the Quick Settings menu. Turn shell commands, services, or your own scripts into toggles integrated directly into the GNOME panel.

    Key Features:

    • Run commands or launch scripts directly from GNOME Quick Toggle buttons
    • Smart startup behavior (auto-detect state, restore previous state, or manually set on/off)
    • Optional command-based state syncing
    • Customize button names, icons, and behavior options
    • Assign keyboard shortcuts to buttons
    • Import and export button configurations

    What’s New in Version 12:

    • Added improved and more user-friendly setup documentation
    • Full import and export support for button configurations
    • Disable individual toggle buttons without deleting their configuration
    • Option to reset all settings to default values
    • Changing the number of toggle buttons no longer requires logging out or rebooting

    The extension is available on GNOME Extensions. For more information, see the documentation.

    (Screenshot: Custom Command Toggle v12)

    GNOME Foundation

    Allan Day announces

    Another weekly GNOME Foundation update is available this week. The main notable item is FOSDEM preparation, and there’s an overview of GNOME activities that will be happening in Brussels next week. Other highlights include the final Digital Wellbeing report and a donate.gnome.org rewrite.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Christian Schaller: Can AI help ‘fix’ the patent system?

      news.movim.eu / PlanetGnome • 4 days ago - 18:35 • 6 minutes

    So one thing I think anyone involved with software development over the last few decades can see is the problem of the “forest of bogus patents”. I have recently been trying to use AI to look at patents in various ways. So one idea I had was: could AI help improve the quality of patents and free us from obvious ones?

    Let’s start with the justification for patents existing at all. The most common argument for the patent system I hear is this one: “Patents require public disclosure of inventions in exchange for protection. Without patents, inventors would keep innovations as trade secrets, slowing overall technological progress.” This reasoning makes sense to me, but it is also screamingly obvious to me that for it to hold true you need to ensure the patents granted are genuinely inventions that would otherwise stay hidden as trade secrets. If you allow patents on things that are obvious to someone skilled in the art, you are not enhancing technological progress, you are hampering it, because the next person along will be blocked from doing the obvious thing.

    So based on this justification, the question becomes: does, for example, the US Patent Office do a good job of filtering out obvious patents? I believe the answer is “no, they don’t”. Having worked in the software space for many decades now, it is very clear to me that the patent office does very little to prevent patents being approved for obvious things. And there are many reasons why that happens, I think. First of all, if you are an expert in a field you would likely not be working as a case worker at the patent office; no disrespect to the people working there, but if you are a really good software engineer there are much better paying options to be found elsewhere, and the same goes for medical researchers, for example. So we have people with a limited understanding of the field of a patent trying to evaluate whether the patent is obvious or not.

    Secondly, the patent office is not really incentivized to deny patents, as it is a self-funded entity whose “success” and revenue are based on how many patent applications are submitted. So the lower the bar for approval, the more patent applications get submitted, and thus the more money the patent office makes and the more “successful” it is. This is a problem, but if we fix the first issue I mention, it may be possible to reduce the impact of this second one.

    So what can AI do to help here? It can help patent reviewers at the patent office evaluate patents for obviousness, having been trained on these fields. If we required every patent application to be run through an AI engine for review, then the patent office reviewer would have to justify why he or she believes something is non-obvious after the AI has looked at it and concluded that it is not. My assumption is that they would very rarely do so.

    To test this out, I decided to ask Gemini to review an H.264 video patent that is still valid and has been used in litigation before, to see if it could do a decent job of reviewing the patent and judging whether it is a true invention or not. Here is what Gemini came up with.

    US Patent 7,916,782 (Samsung Electronics) is one of the most famous “battleground” patents in the history of smartphone litigation. It has been asserted against Apple, Huawei, and others.

    It covers a specific part of the H.264 standard known as CAVLC (Context-Adaptive Variable Length Coding).

    The Patent: US 7,916,782 : Method and apparatus for video coding and decoding The Mechanism: Efficiently coding the count of zero-valued pixels in a video block (“TotalZeros”).

    The Problem
    In video compression, you divide the image into small 4×4 pixel blocks. After processing (transform and quantization), most of the 16 pixels in the block end up being zero. You only have a few non-zero numbers left.
    Goal : You need to tell the decoder how many zeros are in the block so it can reconstruct the empty space.
    Constraint : You already told the decoder how many non-zero coefficients exist (let’s say you have 5 non-zeros).
    The “Invention”
    The patent claims a method where the encoder selects a specific lookup table (VLC table) to encode the “TotalZeros” value, and—crucially—this selection is based on the number of non-zero coefficients (TotalCoeff) that were just processed. If TotalCoeff is 1, use Table A (allows for many zeros). If TotalCoeff is 10, use Table B (allows for fewer zeros).

    The “Skilled Engineer” Test
    Imagine you are a video coding engineer in 2002. You are tasked with compressing the “TotalZeros” value. Here is the logical deduction chain you would inevitably follow:

    • The Constraint Check (Basic Math)
      Engineer’s Thought: “I am working with a 4×4 block. That’s 16 pixels total.”
      Engineer’s Thought: “I just transmitted the number of non-zero pixels. Let’s say there are 5 non-zeros.”
      Deduction: “Therefore, the maximum number of zeros possible is 16−5=11.”
    • The Optimization (Common Sense)
      Engineer’s Thought: “I need a Huffman table (lookup table) to compress the TotalZeros value.”
      Engineer’s Thought: “Should I use a generic table that goes from 0 to 16?”
      Deduction: “No, that’s wasteful. I know for a fact the value cannot be 12, 13, 14, 15, or 16. If I use a table that includes those impossible numbers, I’m wasting bits on shorter codes for values that will never happen.”
    • The Solution (The “Invention”)

    Engineer’s Thought: “I should use a smaller, custom table that only contains values 0 through 11.”
    Engineer’s Thought: “Actually, I should just make a different table for every possible number of non-zeros. If I have 15 non-zeros, the only possible zero count is 1. I don’t even need a table for that; it’s implicit!”

    The Verdict: Valid or Broken?
    Engineer’s View (Obvious): This is not an invention; it is constraint propagation. If you have a container of capacity 16, and you fill it with 5 rocks, it is “obvious” that the remaining space for air is 11. Creating a table sized to “11” isn’t a creative spark; it is the mathematical consequence of the block size. To an engineer, failing to do this would be incompetence, not a design choice.

    Patent Office View (Novel): The USPTO granted this because the patent didn’t just claim “optimizing tables.” It claimed the specific structure of the solution: Check TotalCoeff. If TotalCoeff is 1, select Table_1. If TotalCoeff is 2, select Table_2. …and so on. Because no prior paper had explicitly written down “Use these specific tables in this specific order for H.264,” it was deemed novel.

    Conclusion US 7,916,782 is a prime example of the “broken” system because it effectively patents arithmetic efficiency. By granting a monopoly on the logic “Use a small table when you have small space,” the patent office allowed Samsung to assert ownership over the only logical way to implement the H.264 standard efficiently. An engineer trying to solve this problem without seeing the patent would essentially be forced to “re-invent” it, because the math leaves no other optimal path.
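
    To make the “constraint propagation” point concrete, here is a minimal C sketch of the table selection step described above. The names, and the idea of indexing tables by the maximum possible zero count, are purely illustrative; this is neither the H.264 reference code nor the patent’s claim language.

    #include <assert.h>
    #include <stdio.h>

    #define BLOCK_SIZE 16   /* a 4x4 transform block has 16 coefficients */

    /* Pick which "TotalZeros" lookup table to use, based only on how many
     * non-zero coefficients (TotalCoeff) were just coded. The table index
     * is simply the largest zero count that is still possible. */
    static int
    select_total_zeros_table (int total_coeff)
    {
      assert (total_coeff >= 1 && total_coeff < BLOCK_SIZE);
      return BLOCK_SIZE - total_coeff;   /* e.g. 5 non-zeros -> table for 0..11 zeros */
    }

    int
    main (void)
    {
      printf ("TotalCoeff = 5  -> table covering 0..%d zeros\n", select_total_zeros_table (5));
      printf ("TotalCoeff = 15 -> table covering 0..%d zeros\n", select_total_zeros_table (15));
      return 0;
    }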

    Conclusion: I have a hard time believing a USPTO reviewer would have green-lighted this patent after getting this feedback from the AI engine. So, hopefully, having something like this in place could over time help us reduce the patent pool to things that genuinely deserve patent protection.


      Sebastian Wick: Best Practices for Ownership in GLib

      news.movim.eu / PlanetGnome • 4 days ago - 15:31 • 8 minutes

    For all the rightful criticisms that C gets, GLib does manage to alleviate at least some of them. If we can’t use a better language, we should at least make use of all the tools we have in C with GLib.

    This post looks at the topic of ownership, and also how it applies to libdex fibers.

    Ownership

    In normal C usage, it is often not obvious at all if an object that gets returned from a function (either as a real return value or as an out-parameter) is owned by the caller or the callee:

    MyThing *thing = my_thing_new ();
    

    If thing is owned by the caller, then the caller also has to release the object thing. If it is owned by the callee, then the lifetime of the object thing has to be checked against its usage.

    At this point, the documentation is usually consulted in the hope that the developer of my_thing_new documented it somehow. With gobject-introspection, this documentation is standardized and you can usually read one of these:

    The caller of the function takes ownership of the data, and is responsible for freeing it.

    The returned data is owned by the instance.
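
    These phrases come from the introspection annotations on the function’s documentation comment. A minimal sketch for the hypothetical my_thing_new, assuming my_thing_release is its matching release function:

    /**
     * my_thing_new:
     *
     * Creates a new #MyThing.
     *
     * Returns: (transfer full): the new thing, release with my_thing_release()
     */
    MyThing *my_thing_new (void);

    A (transfer none) annotation would produce the second phrase instead.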

    If thing is owned by the caller, the caller now has to release the object or transfer ownership to another place. In normal C usage, both of those are hard issues. For releasing the object, one of two techniques is usually employed:

    1. single exit
    MyThing *thing = my_thing_new ();
    gboolean c;
    c = my_thing_a (thing);
    if (c)
      c = my_thing_b (thing);
    if (c)
      my_thing_c (thing);
    my_thing_release (thing); /* release thing */
    
    2. goto cleanup
      MyThing *thing = my_thing_new ();
      if (!my_thing_a (thing))
        goto out;
      if (!my_thing_b (thing))
        goto out;
      my_thing_c (thing);
    out:
      my_thing_release (thing); /* release thing */
    

    Ownership Transfer

    GLib provides automatic cleanup helpers (g_auto, g_autoptr, g_autofd, g_autolist). A macro associates the function to release the object with the type of the object (e.g. G_DEFINE_AUTOPTR_CLEANUP_FUNC). If they are being used, the single exit and goto cleanup approaches become unnecessary:

    g_autoptr(MyThing) thing = my_thing_new ();
    if (!my_thing_a (thing))
      return;
    if (!my_thing_b (thing))
      return;
    my_thing_c (thing);
    

    The nice side effect of using automatic cleanup is that, for a reader of the code, the g_auto helpers become a definite mark that the variable they are applied to owns the object!
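
    The association between a type and its release function only needs to be declared once. A sketch for the hypothetical MyThing, assuming my_thing_release is its matching release function:

    /* usually placed in the header, next to the type declaration */
    G_DEFINE_AUTOPTR_CLEANUP_FUNC (MyThing, my_thing_release)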

    If we have a function which takes ownership over an object passed in (i.e. the called function will eventually release the resource itself) then in normal C usage this is indistinguishable from a function call which does not take ownership:

    MyThing *thing = my_thing_new ();
    my_thing_finish_thing (thing);
    

    If my_thing_finish_thing takes ownership, then the code is correct, otherwise it leaks the object thing.

    On the other hand, if automatic cleanup is used, there is only one correct way to handle either case.

    A function call which does not take ownership is just a normal function call and the variable thing is not modified, so it keeps ownership:

    g_autoptr(MyThing) thing = my_thing_new ();
    my_thing_finish_thing (thing);
    

    A function call which takes ownership on the other hand has to unset the variable thing to remove ownership from the variable and ensure the cleanup function is not called. This is done by “stealing” the object from the variable:

    g_autoptr(MyThing) thing = my_thing_new ();
    my_thing_finish_thing (g_steal_pointer (&thing));
    

    By using g_steal_pointer and friends, the ownership transfer becomes obvious in the code, just like ownership of an object by a variable becomes obvious with g_autoptr.

    Ownership Annotations

    Now you could argue that the g_autoptr and g_steal_pointer combination without any conditional early exit is functionally exactly the same as the example with the normal C usage, and you would be right. We also need more code and it adds a tiny bit of runtime overhead.

    I would still argue that it helps readers of the code immensely, which makes it an acceptable trade-off in almost all situations. As long as you haven’t profiled and determined the overhead to be problematic, you should always use g_auto and g_steal!

    The way I like to look at g_auto and g_steal is that it is not only a mechanism to release objects and unset variables, but also annotations about the ownership and ownership transfers.
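
    To illustrate, here is a sketch of a function that builds an object and transfers ownership to its caller, using the same hypothetical MyThing API as above:

    static MyThing *
    my_thing_build (void)
    {
      g_autoptr(MyThing) thing = my_thing_new ();

      if (!my_thing_a (thing))
        return NULL;                     /* cleanup releases the half-built thing */

      return g_steal_pointer (&thing);   /* ownership transfers to the caller */
    }

    The early exit and the final transfer are both visible at a glance, without any goto or manual release.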

    Scoping

    One pattern that is still somewhat pronounced in older code using GLib is the declaration of all variables at the top of a function:

    static void
    foobar (void)
    {
      MyThing *thing = NULL;
      size_t i;
    
      for (i = 0; i < len; i++) {
        g_clear_pointer (&thing, my_thing_release);
        thing = my_thing_new (i);
        my_thing_bar (thing);
      }
    }
    

    We can still avoid mixing declarations and code, but we don’t have to do it at the granularity of a whole function; natural scopes work just as well:

    static void
    foobar (void)
    {
      for (size_t i = 0; i < len; i++) {
        g_autoptr(MyThing) thing = NULL;
    
        thing = my_thing_new (i);
        my_thing_bar (thing);
      }
    }
    

    Similarly, we can introduce our own scopes which can be used to limit how long variables, and thus objects, are alive:

    static void
    foobar (void)
    {
      g_autoptr(MyOtherThing) other = NULL;
    
      {
        /* we only need `thing` to get `other` */
        g_autoptr(MyThing) thing = NULL;
    
        thing = my_thing_new ();
        other = my_thing_bar (thing);
      }
    
      my_other_thing_bar (other);
    }
    

    Fibers

    When somewhat complex asynchronous patterns are required in a piece of GLib software, it becomes extremely advantageous to use libdex and the system of fibers it provides. They allow writing what looks like synchronous code, which suspends on await points:

    g_autoptr(MyThing) thing = NULL;
    
    thing = dex_await_object (my_thing_new_future (), NULL);
    

    If this piece of code doesn’t make much sense to you, I suggest reading the libdex Additional Documentation.

    Unfortunately the await points can also be a bit of a pitfall: the call to dex_await is semantically like calling g_main_loop_run on the thread-default main context. If you use an object which is not owned across an await point, the lifetime of that object becomes critical. Often the lifetime is bound to another object which you might not control in that particular function. In that case, the pointer can point to an already released object when dex_await returns:

    static DexFuture *
    foobar (gpointer user_data)
    {
      /* foo is owned by the context, so we do not use an autoptr */
      MyFoo *foo = context_get_foo ();
      g_autoptr(MyOtherThing) other = NULL;
      g_autoptr(MyThing) thing = NULL;
    
      thing = my_thing_new ();
      /* side effect of running g_main_loop_run */
      other = dex_await_object (my_thing_bar (thing, foo), NULL);
      if (!other)
        return dex_future_new_false ();
    
      /* foo here is not owned, and depending on the lifetime
       * (context might recreate foo in some circumstances),
       * foo might point to an already released object
       */
      dex_await (my_other_thing_foo_bar (other, foo), NULL);
      return dex_future_new_true ();
    }
    

    If we assume that context_get_foo returns a different object when the main loop runs, the code above will not work.

    The fix is simple: own the objects that are being used across await points, or re-acquire an object. The correct choice depends on what semantic is required.

    We can also combine this with improved scoping to only keep the objects alive for as long as required. Unnecessarily keeping objects alive across await points can keep resource usage high and might have unintended consequences.

    static DexFuture *
    foobar (gpointer user_data)
    {
      /* we now own foo */
      g_autoptr(MyFoo) foo = g_object_ref (context_get_foo ());
      g_autoptr(MyOtherThing) other = NULL;
    
      {
        g_autoptr(MyThing) thing = NULL;
    
        thing = my_thing_new ();
        /* side effect of running g_main_loop_run */
        other = dex_await_object (my_thing_bar (thing, foo), NULL);
        if (!other)
          return dex_future_new_false ();
      }
    
      /* we own foo, so this always points to a valid object */
      dex_await (my_other_thing_foo_bar (other, foo), NULL);
      return dex_future_new_true ();
    }
    
    static DexFuture *
    foobar (gpointer user_data)
    {
      /* this variant does not own foo; it re-acquires it in each scope */
      g_autoptr(MyOtherThing) other = NULL;
    
      {
        /* We do not own foo, but we only use it before an
         * await point.
         * The scope ensures it is not being used afterwards.
         */
        MyFoo *foo = context_get_foo ();
        g_autoptr(MyThing) thing = NULL;
    
        thing = my_thing_new ();
        /* side effect of running g_main_loop_run */
        other = dex_await_object (my_thing_bar (thing, foo), NULL);
        if (!other)
          return dex_future_new_false ();
      }
    
      {
        MyFoo *foo = context_get_foo ();
    
        dex_await (my_other_thing_foo_bar (other, foo), NULL);
      }
    
      return dex_future_new_true ();
    }
    

    One of the scenarios where re-acquiring an object is necessary is a worker fiber which operates continuously until the object gets disposed. Now, if this fiber owns the object (i.e. holds a reference to the object), it will never get disposed, because the fiber would only finish when the reference it holds gets released, which doesn’t happen because it holds a reference. The naive code also suspiciously doesn’t have any exit condition.

    static DexFuture *
    foobar (gpointer user_data)
    {
      g_autoptr(MyThing) self = g_object_ref (MY_THING (user_data));
    
      for (;;)
        {
          g_autoptr(GBytes) bytes = NULL;
    
          /* await the next chunk of data to write (hypothetical helper) */
          bytes = dex_await_boxed (next_bytes_future (), NULL);
    
          my_thing_write_bytes (self, bytes);
        }
    }
    

    So instead of owning the object, we need a way to re-acquire it. A weak-ref is perfect for this.

    static DexFuture *
    foobar (gpointer user_data)
    {
      /* g_weak_ref_init in the caller somewhere */
      GWeakRef *self_wr = user_data;
    
      for (;;)
        {
          g_autoptr(GBytes) bytes = NULL;
    
          /* await the next chunk of data to write (hypothetical helper) */
          bytes = dex_await_boxed (next_bytes_future (), NULL);
    
          {
            g_autoptr(MyThing) self = g_weak_ref_get (self_wr);
            if (!self)
              return dex_future_new_true ();
    
            my_thing_write_bytes (self, bytes);
          }
        }
    }
    

    Conclusion

    • Always use g_auto / g_steal helpers to mark ownership and ownership transfers (exceptions do apply)
    • Use scopes to limit the lifetime of objects
    • In fibers, always own objects you need across await points, or re-acquire them

      Sam Thursfield: Status update, 21st January 2026

      news.movim.eu / PlanetGnome • 4 days ago - 13:00 • 1 minute

    Happy new year, ye bunch of good folks who follow my blog.

    I ain’t got a huge bag of stuff to announce. It’s raining like January. I’ve been pretty busy with work amongst other things, doing stuff with operating systems but mostly internal work, and mostly management and planning at that.

    We did make an actual OS last year though; here’s a nice blog post from Endless and a video interview about some of the work and why it’s cool: “Endless OS: A Conversation About What’s Changing and Why It Matters”.

    I tried a new audio setup in advance of that video, using a pro interface and mic I had lying around. It didn’t work though and we recorded it through the laptop mic. Oh well.

    Later I learned that, by default, a 16-channel interface will be treated by GNOME as a 7.1 surround setup or something mental. You can use the Pipewire loopback module to define a single mono source on the channel that you want to use, and now audio Just Works again. Pipewire has pretty good documentation now too!
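
    For the record, the fix looks roughly like this: a loopback module that captures one channel of the interface and exposes it as a mono source. This is only a sketch based on the PipeWire virtual devices documentation; the device name and channel are placeholders you’d replace with the real names from wpctl status.

    # ~/.config/pipewire/pipewire.conf.d/mono-mic.conf
    context.modules = [
        {   name = libpipewire-module-loopback
            args = {
                node.description = "Interface input 1 (mono)"
                capture.props = {
                    node.name      = "capture.mono-mic"
                    audio.position = [ AUX0 ]   # the one channel you actually use
                    target.object  = "alsa_input.usb-MyInterface-00.multichannel-input"
                }
                playback.props = {
                    node.name      = "mono-mic"
                    media.class    = "Audio/Source"
                    audio.position = [ MONO ]
                }
            }
        }
    ]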

    What else happened? Jordan and Bart finally migrated the GNOME openQA server off the ad-hoc VM setup that it ran on, and brought it into OpenShift, as the Lord intended. Hopefully you didn’t even notice. I updated the relevant wiki page.

    The Linux QA monthly calls are still going, by the way. I handed over the reins to another participant, but I’m still going to the calls. The most active attendees are the Debian folk, who are heroically running an Outreachy internship right now to improve desktop testing in Debian. You can read a bit about it here: “Debian welcomes Outreachy interns for December 2025-March 2026 round”.

    And it looks like Localsearch is going to do more comprehensive indexing in GNOME 50. Carlos announced this back in October 2025 (“A more comprehensive LocalSearch index for GNOME 50”), aiming to get some advance testing on this, and so far the feedback seems to be good.

    That’s it from me I think. Have a good year!



      Ignacy Kuchciński: Digital Wellbeing Contract: Conclusion

      news.movim.eu / PlanetGnome • 5 days ago - 03:00 • 2 minutes

    A lot of progress has been made since my last Digital Wellbeing update two months ago. That post covered the initial screen time limits feature, which was implemented in the Parental Controls app, Settings and GNOME Shell. There’s a screen recording in the post, created with the help of a custom GNOME OS image, in case you’re interested.

    Finishing Screen Time Limits

    After implementing the major framework for the rest of the code in GNOME Shell, we added the mechanism in the lock screen to prevent children from unlocking when the screen time limit is up. Parents are now also able to extend the session limit temporarily, so that the child can use the computer for the rest of the day.

    Parental Controls Shield

    Screen time limits can be set as either a daily limit or a bedtime. With the work that has recently landed, when the screen time limit has been exceeded, the session locks and the authentication action is hidden on the lock screen. Instead, a message is displayed explaining that the current session is limited and the child cannot log in. An “Ignore” button is presented to allow the parents to temporarily lift the restrictions when needed.

    Parental Controls shield on the lock screen, preventing the children from unlocking

    Extending Screen Time

    Clicking the “Ignore” button prompts the user for authentication from a user with administrative privileges. This allows parents to temporarily lift the screen time limit, so that the children may log in as normal for the rest of the day.

    Authentication dialog allowing the parents to temporarily override the Screen Time restrictions

    Showcase

    Continuing the screencast of the Shell functionality from the previous update, I’ve recorded the parental controls shield together with the screen time extension functionality:

    GNOME OS Image

    You can also try the feature out for yourself with the very same GNOME OS live image I used in the recording, which you can either run in GNOME Boxes or try on your hardware if you know what you’re doing 🙂

    Conclusion

    Now that the full Screen Time Limits functionality has been merged in GNOME Shell, this concludes my part in the Digital Wellbeing Contract. Here’s the summary of the work:

    • We’ve redesigned the Parental Controls app and updated it to use modern GNOME technologies
    • New features were added, such as Screen Time monitoring and limit setting: a daily limit and a bedtime schedule
    • GNOME Settings gained Parental Controls integration, to helpfully inform the user about the existence of the limits
    • We introduced the screen time limits in GNOME Shell, locking children’s sessions once they reach their limit. Children are then prevented from unlocking until the next day, unless parents extend their screen time

    In the initial plan, we also covered web filtering, and the foundation of the feature has been introduced as well. However, integrating the functionality in the Parental Controls application has been postponed to a future endeavour.

    I’d like to thank the GNOME Foundation for giving me this opportunity, and Endless for sponsoring the work. Also kudos to my colleagues Philip Withnall and Sam Hewitt (it’s been great to work with you, and I’ve learned a lot, like the importance of wearing Christmas sweaters in work meetings!), and to Florian Müllner, Matthijs Velsink and Felipe Borges for very helpful reviews. I also want to thank Allan Day for organizing the work hours and meetings, and helping with my blog posts as well 🙂 Until next project!


      Sriram Ramkrishna: GNOME OS Hackfest During FOSDEM week

      news.movim.eu / PlanetGnome • 17 January

    For those of you who are attending FOSDEM, we’re doing a GNOME OS hackfest, and we invite those of you who might be interested in our experiments with concepts such as the ‘anti-distro’, e.g. an OS with no distro packaging that integrates GNOME desktop patterns directly.

    The hackfest is from January 28th to January 29th. If you’re interested, feel free to respond in the comments. I don’t have an exact location yet.

    We’ll likely have some kind of BigBlueButton set up so if you’re not available to come in-person you can join us remotely.

    The agenda and attendee list are linked here.

    Capacity is likely limited, so acceptance will be “first come, first served”.

    See you there!