

      This Week in GNOME: #227 Circle Benefits

      news.movim.eu / PlanetGnome • Yesterday - 00:00 • 3 minutes

    Update on what happened across the GNOME project in the week from November 22 to November 29.

    GNOME Circle Apps and Libraries

    Sophie (she/her) says

    The benefits for GNOME Circle projects now explicitly include participation in internship programs, as well as inclusion on the help.gnome.org, welcome.gnome.org, apps.gnome.org, and developer.gnome.org pages.

    The circle.gnome.org page has been redesigned to link to the respective pages instead of having its own app and component list.

    NewsFlash feed reader

    Follow your favorite blogs & news sites.

    Jan Lukas reports

    Last week Newsflash 4.2 was released with the usual amount of small improvements and fixes. What makes it worthy of the minor version bump, in my opinion, is the new popover right above the article containing all the relevant font and spacing options for more direct interaction, a slightly different layout for tablets (specifically the PineNote), and the combination of multiple image enclosures into a carousel.

    newsflash-twig.png

    Third Party Projects

    Alain announces

    Planify 4.16.1 is now available!

    A small but polished update focused on improving the overall experience:

    • UX improvements and minor bug fixes
    • Shift+Enter is back for quick “keep adding”
    • Zoom links now supported in calendar events
    • Updated Donate page
    • The Planify website has been refreshed with a cleaner design and several improvements → https://www.useplanify.com

    Thanks for following the journey of Planify 💙

    Dzheremi announces

    Mass Lyrics Downloading with Chronograph 5.3

    Chronograph got an update with, as promised before, mass lyrics downloading support. Users can now query LRClib for lyrics for all the tracks in their current library. Chronograph will try to find the closest lyrics among the results for each track. This feature expands the usefulness of the app for users who don’t want to sync lyrics themselves, but just want to download them.
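    The “closest lyrics among results” step can be sketched in a few lines. This is an illustrative guess at the matching logic, not Chronograph’s actual implementation, and the result-dict keys here are invented for the example:

    ```python
    from difflib import SequenceMatcher

    def closest_lyrics(track_title, artist, results):
        """Pick the search result whose title/artist best matches the track.

        `results` is a list of dicts with "trackName" and "artistName" keys
        (an illustrative shape, not the actual LRClib response schema).
        """
        def score(result):
            # Combine fuzzy similarity of title and artist, case-insensitively.
            title_sim = SequenceMatcher(
                None, track_title.lower(), result["trackName"].lower()).ratio()
            artist_sim = SequenceMatcher(
                None, artist.lower(), result["artistName"].lower()).ratio()
            return title_sim + artist_sim

        return max(results, key=score) if results else None
    ```

    A real implementation would likely also weigh track duration, but string similarity alone already ranks exact matches above live or cover variants.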

    Sync lyrics of your loved songs 🕒

    chronograph-mass-lyrics.png

    Parabolic

    Download web video and audio.

    Nick announces

    Parabolic V2025.11.1 is here! This release contains many bug fixes for issues users were experiencing.

    A larger new feature update is in the works to address many long-standing feature requests.

    Here’s the full changelog:

    • Fixed the sleep interval for multiple subtitle downloads
    • Fixed an issue where low-resolution media was being downloaded on Windows
    • Fixed an issue where aria2c couldn’t download media from certain sites
    • Fixed an issue where Remove Source Data was not clearing all identifiable metadata fields

    parabolic.png

    Shell Extensions

    boerdereinar announces

    Hey everyone, this week I’ve released my clipboard manager extension Copyous with the following features:

    • Supports text, code, images, files, links, characters and colors.
    • Can be opened at mouse pointer or text cursor
    • Pin favorite items
    • Group items with 9 colored tags
    • Customizable clipboard actions
    • Highly customizable

    copyous-screenshot.png

    PakoVM announces

    This week I published Tinted Shell, an extension that simply adds a splash of color to the GNOME Shell theme based on the user’s current accent color while respecting the original look.

    You can get it from GNOME Extensions and contribute to it on GitHub.

    tinted-shell-preview.png

    Miscellaneous

    Krafting - Vincent says

    Hey everyone, this week I pushed updates to use the GNOME 49 runtime and revamped the keyboard shortcuts pages across all my apps available on Flathub:

    In addition, SemantiK got some version bumps, and PedantiK got a lot of bug fixes (including fixing the broken Wikipedia API).

    Also, work has started on a PedantiK English pack, to allow languages other than French.

    GNOME Foundation

    Allan Day announces

    A new GNOME Foundation update is available, covering what has happened at the GNOME Foundation over the past two weeks. It covers a fairly long list of topics, including the recent budget report, funding for Outreachy, banking and finance changes, Flathub progress, and more.

    Digital Wellbeing Project

    Philip Withnall reports

    This week in parental controls, Ignacy is working on changing the Shell lock screen to show when a child’s screen time limit has been reached, and I’ve spent a bit of time writing up the technical details of how web filtering will work at https://tecnocode.co.uk/2025/11/27/parental-controls-web-filtering-backend/ (the backend is written; the UI integration is future work).

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Christian Hergert: Libdex Futures from asyncio

      news.movim.eu / PlanetGnome • Yesterday - 21:18

    One of my original hopes for Libdex was to help us structure complex asynchronous code in the GNOME platform. If my work creating Foundry and Sysprof is any indicator, it has done leaps and bounds for my productivity and quality in that regard.

    Always in the back of my mind I hoped we could make those futures integrate with language runtimes.

    This morning I finally got around to learning enough of Python’s asyncio module to write the extremely minimal glue code. Here is a commit that implements integration as a PyGObject introspection override. It will automatically load when you from gi.repository import Dex.

    This only integrates with the asyncio side of things, so your application is still responsible for integrating asyncio and GMainContext, or else nothing will be pumping the DexScheduler. But that is application and toolkit specific, so I must leave that to you.
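    The essence of such glue is adapting a callback-style future into an asyncio one. As a minimal sketch of the general pattern (none of these names are the real libdex or PyGObject API; CallbackFuture is a stand-in for illustration):

    ```python
    import asyncio

    # A stand-in for a callback-style future (NOT the real libdex API):
    # it invokes registered callbacks with a value when it completes.
    class CallbackFuture:
        def __init__(self):
            self._callbacks = []

        def add_done_callback(self, cb):
            self._callbacks.append(cb)

        def resolve(self, value):
            for cb in self._callbacks:
                cb(value)

    def wrap_in_asyncio(cb_future):
        """Adapt a callback-style future so it can be awaited from asyncio."""
        loop = asyncio.get_running_loop()
        aio_future = loop.create_future()
        # In a real integration the callback may fire on another thread,
        # so hand the result over with call_soon_threadsafe.
        cb_future.add_done_callback(
            lambda value: loop.call_soon_threadsafe(aio_future.set_result, value))
        return aio_future

    async def main():
        cb = CallbackFuture()
        # Simulate the underlying operation completing shortly after we wait.
        asyncio.get_running_loop().call_later(0.01, cb.resolve, 42)
        return await wrap_in_asyncio(cb)
    ```

    The real override does the equivalent adaptation for Dex futures, with the caveat from the paragraph above: something must still iterate the GMainContext for the Dex side to make progress.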

    I would absolutely love it if someone could work on the same level of integration for GJS as I have even less experience with that platform.


      Philip Withnall: Parental controls web filtering backend

      news.movim.eu / PlanetGnome • 2 days ago - 13:25 • 5 minutes

    In my previous post I gave an overview of the backend for the screen time limits feature of parental controls in GNOME. In this post, I’ll try and do the same for the web filtering feature.

    We haven’t said much about web filtering so far, because the user interface for it isn’t finished yet. The backend is, though, and it will get plumbed up eventually. Currently we don’t have a GNOME release targeted for it yet.

    When is web filtering? What is web filtering?

    (Apologies to Radio 4 Friday Night Comedy .)

    Firstly, what is the aim of web filtering? As with screen time limits, we’ve written a design document which (hopefully) covers everything. But the summary is that it should allow parents to filter out age-inappropriate content on the web when it’s accessed by child accounts, while not breaking the web (for example, by breaking TLS for websites) and not requiring us (as a project) to become curators of filter lists. It needs to work for all apps on the system (lots of apps other than web browsers can show web content), and needs to be able to filter things differently for different users (two different children of different ages might use the same computer, as well as the parents themselves).

    After looking at various different possible ways of implementing it, the best solution seemed to be to write an NSS module to respond to name resolution (i.e. DNS) requests and potentially block them according to a per-user filter list.

    A brief introduction to NSS

    NSS (Name Service Switch) is a standardised name lookup API in libc. It’s used for hostname resolution, but also for user accounts and various other things. Names are resolved by various modules which are dlopen() ed into your process by libc and queried in the order given in /etc/nsswitch.conf . So for hostname resolution, a typical configuration in nsswitch.conf would cause libc to query the module which looks at /etc/hosts first, then the module which checks your machine’s hostname, then the mDNS module, then systemd-resolved.

    So, we can insert our NSS module into /etc/nsswitch.conf , have it run somewhere before systemd-resolved (which in this example does the actual DNS resolution), and have it return a sinkhole address for blocked domains. Because /etc/nsswitch.conf is read by libc within your process, this means that the configuration needs to be modified for containers (flatpak) as well as on the host system.
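    For illustration, a hosts line in /etc/nsswitch.conf with such a filtering module inserted could look something like this (the module name “malcontent” is a guess for the sake of the example, not necessarily what ships):

    ```
    # /etc/nsswitch.conf (illustrative sketch, not a shipped configuration)
    # Blocked domains get a sinkhole answer from the filtering module
    # before resolution ever reaches systemd-resolved or plain DNS.
    hosts: files malcontent myhostname resolve [!UNAVAIL=return] dns
    ```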

    Because the filter module is loaded into the name lookup layer, content filtering (as opposed to domain name filtering) is not possible with this approach. That’s fine — content filtering is hard, I’m not sure it gives better results overall than domain name filtering, and doing it would mean we couldn’t rely on existing domain name filter lists, which are well maintained and regularly updated. We’re not planning on adding content filtering.

    It also means that DNS-over-HTTPS/-TLS can be supported, as long as the app doesn’t implement it natively (i.e. by talking HTTPS over a socket itself). Some browsers do that, so the module needs to set a canary to tell them to disable it. DNS-over-HTTPS/-TLS can still be used if it’s implemented by one of the NSS modules, like systemd-resolved.

    Nothing here stops apps from deliberately bypassing the filtering if they want, perhaps by talking DNS over UDP directly, or by calling secret internal glibc functions to override nsswitch.conf . In the future, we’d have to implement per-app network sandboxing to prevent bypasses. But for the moment, trusting the apps to cooperate with parental controls is fine.

    Filter update daemon

    So we have a way of blocking things; but how does it know what to block? There are a lot of filter lists out there on the internet, targeted at existing web filtering software. Basically, a filter list is a list of domain names to block. Some filter lists allow wildcards and regexps, others just allow plain strings. For simplicity, we’ve gone with plain strings.

    We allow the parent to choose zero or more filter lists to build a web filtering policy for a child. Typically, these filter lists will correspond to categories of content, so the parent could choose a filter list for advertising, and another for violent content, for example. The web filtering policy is basically the set of these filter lists, plus some options like “do you want to enforce safe search”. This policy is, like all other parental controls policies, stored against the child user in accounts-service .

    Combine these filter lists, and you have the filter list to give to NSS in the child’s session, right? Not quite — because the internet unfortunately keeps changing, filter lists need to be updated regularly. So actually what we need is a system daemon which can regularly check the filter lists for updates, combine them, and make them available as a compiled file to the child’s NSS module — for each user on the system.
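    Since the lists are plain strings, the “combine” step is essentially building one deduplicated set and doing exact-match lookups against it. A minimal sketch (illustrative only; malcontent-webd’s real file formats and compiled output will differ):

    ```python
    def compile_filter(*filter_lists):
        """Merge several plain-string filter lists into one blocklist set.

        Each list is an iterable of domain names; blank lines and
        '#'-prefixed comments are skipped.
        """
        blocked = set()
        for filter_list in filter_lists:
            for line in filter_list:
                domain = line.strip().lower()
                if domain and not domain.startswith("#"):
                    blocked.add(domain)
        return blocked

    def is_blocked(domain, blocked):
        """Exact-match lookup, mirroring the plain-string (no wildcard) choice."""
        return domain.lower() in blocked
    ```

    Exact matching keeps the NSS module’s hot path a single set (or compiled hash table) lookup per resolution, which matters since it runs inside every process doing name resolution.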

    This daemon is malcontent-webd . It has a D-Bus interface to allow the parent to trigger compiling the filter for a child when changing the parental controls policy for that child in the UI, and to get detailed feedback on any errors. Since the filter lists come from third parties on the internet, there are various ways they could have an error.

    It also has a timer unit trigger, malcontent-webd-update , which is what triggers it to periodically check the filter lists for all users for updates.
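    Periodic refresh like this is typically wired up with a systemd timer unit. A sketch of what the malcontent-webd-update timer could look like (the intervals and directives here are illustrative assumptions, not the shipped unit):

    ```
    # malcontent-webd-update.timer (illustrative sketch)
    [Unit]
    Description=Periodically refresh parental controls web filter lists

    [Timer]
    OnBootSec=15min
    OnUnitActiveSec=1d
    Persistent=true

    [Install]
    WantedBy=timers.target
    ```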

    High-level diagram of the web filtering system, showing the major daemons and processes, files, and IPC calls. If it’s not clear, the awful squiggled line in the bottom left is meant to be a cloud. Maybe this representation is apt.

    And that’s it! Hopefully it’ll be available in a GNOME release once we’ve implemented the user interface for it and done some more end-to-end testing, but the screen time limits work is taking priority over it.


      Sam Thursfield: Bollocks to Github

      news.movim.eu / PlanetGnome • 2 days ago - 00:27 • 3 minutes

    I am spending the evening deleting my Github.com account.

    There are plenty of reasons you might want to delete your Github account. I’d love to say that this is a coherently orchestrated boycott on my part, in sympathy with the No Azure for Apartheid movement. Microsoft, owner of Github, is a big pile of cash happy to do business with an apartheid state. That’s a great reason to delete your Github.com account.

    I will be honest with you though, the thing that pushed me over the edge was a spam email they sent entitled “GitHub Copilot: What’s in your free plan 🤖 ”. I was in a petty mood this morning.

    Offering free LLM access is a money loser. The long play is this: Microsoft would like to create a generation of computer users hooked on GitHub Copilot. And, I have to hand it to them, they have an excellent track record in monopolising how we interact with our PCs.

    Deleting my Github.com account isn’t going to solve any of that. But it feels good to be leaving, anyway. The one billionth Github repository was created recently and it has a single line README containing the word “shit” . I think that summarizes the situation more poetically than I could.

    I had 145 repositories in the ssssam/ namespace to delete. The oldest was Iris; forked in 2011.

    Quite a story to that one. A fork of a project by legendary GNOME hacker Christian Hergert. In early 2011, I’d finished university, had no desire to work in the software industry, and was hacking on a GTK based music player app from time to time. A rite of passage that every developer has to go through, I suppose. At some point I decided to overcomplicate some aspect of the app and ended up integrating libiris, a library to manage concurrent tasks. Then I started working professionally and abandoned the thing.

    It’s fun to look at that with 13 years of perspective. I have since learned, largely thanks to Rust, that I cannot possibly ever write correct concurrent thread-based C code. (All my libiris changes had weird race conditions.) I met Christian various times. Christian created libdex, which does the same thing, but much better. I revived the music player app as a playlist generation toolkit. We all lived happily ever after.

    Except for the Github fork, which is gone.

    What else?

    This guy was an early attempt at creating a sort of GObject mapping for SPARQL data as exposed by Localsearch (then Tracker). Also created around 2011. Years later, I did a much better implementation in TrackerResource which we still use today.

    The Sam of 2011 would be surprised to hear that we organized GUADEC in Manchester only 6 years later. Back in those days we for some reason maintained our own registration system written in Node.js. I spent the first few weeks of 2017 hacking in support for accommodation bookings.

    I discovered another 10 year old gem called “Software Integration Ontology.” Nowadays we’d call that an SBOM. Did that term exist 10 years ago? I have spent too much time working on software integration.

    Various other artifacts of research into software integration and complexity. A vestigial “Software Dependency Visualizer” project. (Which took on a life of its own; many years later the idea is alive in KDE Codevis.) A fork of Aboriginal Linux, which we unwittingly brought to an end back in 2016. Bits of Baserock, which never went very far but also led to the beginning of BuildStream.

    A fork of xdg-app, which is the original name of Flatpak. A library binding GLib to the MS RPC API on Windows, from the 2nd professional project I ever did. These things are now dust.

    I had over 100 repositories on github.com. I sponsored one person, who I can’t sponsor any more as they only accept GitHub money. (I sponsor plenty of people on other platforms.)

    Anyway, lots of nice people use Github, you can keep using Github. I will probably have to create the occasional burner account to push PRs where projects haven’t migrated away. Some of my projects are now in Gitlab.com and in GNOME’s Gitlab. Others are gone!

    It’s a weird thing but in 2025 I’m actually happy knowing that there’s a little bit less code in the world.


      Michael Meeks: a new & beautiful Collabora Office

      news.movim.eu / PlanetGnome • 3 days ago - 13:23 • 3 minutes

    Just a short personal note to say how super excited I am to get our very first release of a new Collabora Office out that brings Collabora Online's lovely UX - created by the whole team - to the desktop. You can read all about it in the press release. Please note - this is a first release - we expect all manner of unforeseen problems, but still - it edits documents nicely.

    The heroes behind the scenes

    There has been a huge amount of work behind the scenes, and people to say thank-you to. Let me try to get some of them:

    • First off - thanks to Jan 'Kendy' Holesovsky and Tor Lillqvist (who came out of retirement) for creating yet another foundational heavy-lift for FLOSS Office. I can't say how grateful we are for your hard work here on Mac and Windows respectively.
    • Then after the allotropia merger we had Thorsten Behrens to lead the project, and Sarper Akdemir to drive the Linux front-end.
    • Towards the end of the project we were thrilled to expand things to include a dream-team of FLOSS engineers to fix bugs and add features ...
    • Thanks to Rashesh Padia for the lovely first-start WebGL slideshow presentation added to Richard Brock 's content skills.
    • Thanks to Vivek Javiya for building a new file creation UI with Pedro Silva 's design skills here and elsewhere.
    • Lots of bug fix and polishing work from Parth Raiyani , and Jeremy Whiting (who also did multi-tabbed interface on Mac), and to Stephan Bergmann for digging out and clobbering the most hard-core races and horror bugs that we had hidden, Caolán McNamara too who made multiple documents work, and fixed crashes and multi-screen bits.
    • With Hubert Figuière making the flatpak beautiful, and of course the indomitable Andras Timar doing so much amazing work getting all of the CI, release-engineering, app-store, translation pieces and also bug-fixing done and completed in time.
    • Thanks too to our marketing team: from Chris Thornett getting the press briefing into a good state and multiplexing quotes left and right, to Richard Brock creating beautiful blog output, to Asja Čandić socializing it all, with Naomi Obbard leading the charge.
    • Thanks to all of our supporters who say nice things about us, and of course to so many translators who contribute to making Collabora Online great - hopefully, now that the strings are all public, it should be easy to expand coverage.

    This is an outstanding result from so many - thank you!

    What is next technically?

    There are lots of things we plan to do next, but there is so much that can be done. First - merging the work into our main product branches - and at the same time sharing much more of the code across platforms. We have some features in the pipeline already - starting to take more advantage of platform APIs for much improved slideshow / multi-screen presentation pieces that need merging and releasing, and ultimately better printing APIs, and better copy/paste. Then we need to make sure that all of the new features are present on all platforms - multi-tabbed UI, the new file creation UI, and of course much more polish and bug fixing - as well as better automated testing.

    It's exciting to have a big new release - but in general - we work really quite hard to avoid big-bang deadlines in favour of a more steady development cadence. We will be trying to get the feature conveyor belt working alongside the Collabora Online development process - reasonably quickly - but now with another three platforms.

    Then over the next months - there are various fairly obvious directions to take the code in. One amusing feature was the ability to apparently collaborate with yourself when loading the same document on the same machine; that can be extended.

    Conclusion

    It has been really exciting to get feedback from many partners, customers and community members about the need for this. Again - this is a very first release - we plan to do lots of iteration and improvement around it. But, thank you again to the whole team and community for making Collabora Online something that people really want us to bring to the desktop. If you'd like to get involved (or just see pretty artwork of hard working animals) - why not head to the community website, or our forum or code.

    Rock on =)


      Sam Thursfield: Status update — 23rd November 2025

      news.movim.eu / PlanetGnome • 4 days ago - 00:03 • 8 minutes

    Bo día.

    I am writing this from a high speed train heading towards Madrid, en route to Manchester. I have a mild hangover, and a two hundred page printout of “The STPA Handbook”… so I will have no problem sleeping through the journey. I think the only thing keeping me awake is the stunning view.

    Sadly I haven’t got time to go all the way by train; in Madrid I will transfer to EasyJet. It is indeed easy compared to trying to get from Madrid into France by train. Apparently this is mainly the fault of France’s SNCF.

    On the Spain side, fair play. The ministro de fomento (I think this translates as “Guy in charge of trains?”) just announced major works in Barcelona, including a new station in La Sagrera with space for more trains than they have now, and more direct access from Madrid, and a speed boost via some new type of railway sleeper, which would make the top speed 350km/h instead of 300km/h as it is now. And some changes in Madrid, which would reduce the transfer time when arriving from the west and heading out further east. You can argue with many things about the trains in Spain… perhaps it would be useful if the regional trains here ran more than once per day … but you can’t argue with the commitment to fast inter-city travel.

    If only we had similar investment to fix the cross border links between Spain and France, which are something of a joke. Engineers around the world will know this story. The problem is social: two different organizations, who speak different languages, have to agree on something. There is already a perfectly usable modern train line across the border. How many trains per day? Uh… two. Hope you planned your trip in advance because they’re fully booked next week.

    Anyway, this isn’t meant to be a post on the status of the railways of western Europe.

    Digital Resilience Forum

    Last month I hopped on another Madrid-bound train to attend the Digital Resilience Forum . It’s a one day conference organized by Bitergia who you might know as world leaders in open source community analysis.

    I have mixed feelings about “community metrics” projects. As Nick Wellnhofer said regarding libxml, when you participate as a volunteer in a project that is being monitored, it’s easy to feel like you’re being somehow manipulated by the corporations who sponsor these things. How come you guys will spend time and money analyzing my project’s development processes and Git history, but you won’t spend time actually fixing bugs and making improvements upstream? As the ffmpeg developers said: How come you will pay top-calibre security researchers to read our code and find very specific exploits, but then wait for volunteers to fix them?

    The Bitergia team are great people who genuinely care about open source, and I really enjoyed the conference. The main themes were: digital sovereignty, geopolitics, the rise of open source, and that XKCD where all our digital infrastructure depends on a single unpaid volunteer in Nebraska ( https://xkcd.com/2347/ ). (Coincidentally, one of the Bitergia guys actually does live in Nebraska.)

    It was a day in a world where I am not used to participating: less engineering, more politics and campaigning. Yes, the Sovereign Tech Agency were around. We played a cool role-play game simulating various hypothetical software crises that might happen in the year 2027 (spoiler: in most cases a vendor-neutral, state-funded organization focused on open source was able to save the day :-). It is amazing what they’ve done so far with a relatively small investment, but it is a small organization, and they maintain that citizens of every country should be campaigning and organizing to set up an equivalent. Let’s not tie the health of open source infrastructure too closely to German politics.

    Also present were various campaign groups with “Open” at the start of their name: OpenForum Europe, OpenUK, OpenIreland, OpenRail. When I think about the future of Free Software platforms, such as our beloved GNOME, my mind always goes to funding contributors. There’s very little money here, while Apple and Microsoft have nearly all of the money, and I feel like GNOME still succeeds largely thanks to the evenings and weekends of a small core of dedicated hackers, including some whose day job involves working on some other part of GNOME. It’s a bit depressing sometimes to see things this way, because the global economy gets more unequal every day, and how do you convince people who are already squeezed for cash to pay for something that’s freely available online? How do you get students facing a super competitive job market to hack on GTK instead of studying for university exams?

    There’s another side which I talk about less, and that’s education. There are more desktop Linux users than ever — apparently 5% of all desktop users or something — but there’s still very little agreement or understanding of what “open source” is. Most computer users couldn’t tell you what an “operating system” does, and don’t know why “source code” can be an interesting thing to share and modify.

    I don’t like to espouse any dogmatic rule that the right way to solve any problem is to release software under the GPLv3. I think the problems society has today with technology come from over-complexity and under-study. (See also my rant from last month.) To tackle that, it is important to have software infrastructure like drivers and compilers available under free software licenses. The Free Software movement has spent the last 40 years doing a pretty amazing job of that, and I think it’s surprising how widely software engineers accept that as normal and fight to maintain it. Things could easily be worse. But this effort is one part of a larger problem: helping those people who think of themselves as “non-technical” to understand the fundamentals of computing and not see it as a magic box. Most people alive today have learned to read and write one or more languages, to do mathematics, to operate a car, to build spreadsheets, and to operate a smartphone. Most people I know under 45 have learned to prompt a large language model in the last few years.

    With a basic grounding in how a computer operates, you can understand what an operating system does. And then you can see that whoever controls your OS has complete control over your digital life. And you will start to think twice about leaving that control to Apple, Google and Microsoft — big piles of cash where the concept of “ethics” barely has a name.

    Reading was once a special skill reserved largely for monks. And it was difficult: we only started putting spaces between the words later on. Now everyone knows what a capital letter is. We need to teach how computers work, we need to stop making them so complicated, and then the idea of open development will come into focus for everyone.

    (And yes, I realize this sounds a bit like the permacomputing manifesto.)

    Codethink work

    This is a long rant, isn’t it? My train only just left Zamora and I haven’t fallen asleep yet, so there’s more to come.

    I had a nice few months hacking on Endless OS 7, which has progressed from an experiment to a working system, bootable on bare metal, albeit with various open issues that would block a stable release for now. The overview docs in the repo tell you how to play with it.

    This is now fully in the hands of the team at Endless, and my next move is going to be into some internal research that has been ongoing for a number of years. Not much of it is secret; in fact quite a lot is being developed in the open, and it relates in part to regulatory compliance and safety-critical software systems.

    Codethink dedicates more to open source than most companies its size. We never have trouble getting sponsorship for events like GUADEC. But I do wish I could spend more time maintaining open infrastructure that I use every day, like, you know, GNOME.

    This project isn’t going to solve that tomorrow, but it does occupy an interesting space at the intersection between industry and open source. The education gap I talked about above is very much present in some industries where we work. Back in February a guy in a German car firm told me, “Nobody here wants open source. What they want is somebody to blame when the thing goes wrong.”

    Open source software comes with a big disclaimer that says, roughly, that if it breaks you get to keep both pieces. You get to blame yourself.

    And that’s a good thing! The people who understand a final, integrated system are the only people who can really define “correct behaviour”. If you’ve worked in the same industries I have you might recognise a common anti-pattern: teams who spend all their time arguing about ownership of a particular bug, and team A are convinced it’s a misbehaviour of component B and team B will try to prove the exact opposite. Meanwhile nobody actually spends the 15 minutes it would take to actually fix the bug. Another anti-pattern: team A would love to fix the bug in component B, but team B won’t let them even look at the source code. This happens muuuuuuuch more than you might think.

    So we’re not trying to teach the world how computers work, on this project, but we are trying to increase adoption and understanding at least in the software industry. There are some interesting ideas: looking at software systems from new angles. This is where STPA comes in, by the way — it’s a way of breaking a system down not into components but rather into one or more control loops. It’s going to take a while to make sense of everything in this new space… but you can expect some more 1500-word blog posts on the topic.


      Christian Hergert: Status Week 47

      news.movim.eu / PlanetGnome • 5 days ago - 19:17 • 5 minutes

    Ptyxis

    • Issue filed about highlight colors. Seems like a fine addition but again, I can’t write features for everyone, so it will need an MR.

    • Walk a user through changing shortcuts to get the behavior they want w/ ctrl+shift+page up/down. Find a bug along the way.

    • Copy GNOME Terminal behavior for placement of open/copy links from the context menu.

    • Add a ptyxis_tab_grab_focus() helper and use that everywhere instead of gtk_widget_grab_focus() on the tab (which then defers to the VteTerminal ). This may improve some state tracking of keyboard shift state, which while odd, is a fine enough workaround.

    • Issue about adding waypipe to the list of “network” command detection. Have to nack for now just because it is so useful in situations without SSH.

    • Issue about growing size of terminal window (well shrinking back) after changing foreground. Triaged enough to know this is still a GDK Wayland issue with too-aggressively caching toplevel sizes and then re-applying them even after first applying a different size. Punted to GTK for now since I don’t have the cycles for it.

    Foundry

    • Make the progress icon look more like AdwSpinner so we can transition between the two paintables for progress based on whether we get any sort of real fractional value from the operation.

    • Paintable support for FoundryTreeExpander which is getting used all over Foundry/Builder at this point.

    • Now that we have working progress from LSPs we need to display it somewhere. I copied the style of Nautilus progress operations into a new FoundryOperationBay which sits beneath the sidebar content. It’s not a perfect replica but it is close enough for now to move on to other things.

      Sadly, we don’t have a way with the Language Server Protocol to tie together what operation caused a progress object to be created so cancellation of progress is rather meaningless there.

    • Bring over the whole “snapshot rewriting” we do in Ptyxis into the FoundryTerminal subclass of VTE to make it easy to use in the various applications getting built on Foundry.

    • Fix FoundryJsonrpcDriver to better handle incoming method calls. Specifically those lacking a params field. Also update things to fix reading from LF/Nil style streams where the underlying helper failed to skip bytes after read_upto() .

    • It can’t be used for much but there is a foundry mcp server now that will expose available tools. As part of this, make a new Resources type that allows providing context in an automated fashion. Resources can be listed and read, and clients can track changes to the list, subscribe to changes in a resource, etc.

      There are helpers for JSON style resources.

      For example, this makes the build diagnostics available at a diagnostics:// URI.

    • Make PluginFlatpakSdk explicitly skip using flatpak build until the flatpak build-init command in the pipeline has run.

    • Allow locating PluginDevhelpBook by URI so that clicking on the header link in documentation can properly update path navigators.

    • Make FoundryPtyDiagnostics auto-register themselves with the FoundryDiagnosticManager so that things like foundry_diagnostic_manager_list_all() will include extracted warnings that came from the build pipeline automatically rather than just files with diagnostic providers attached.

    • Support for “linked pipelines” which allows you to insert a stage into your build pipeline which then executes another build pipeline from another project.

      For example, this can allow you to run a build pipeline for GTK or GtkSourceView in the early phase of your project’s build. The goal here is to simplify the effort many of us go through working on multiple projects at the same time.

      Currently there are limitations here in that the pipeline you want to link needs to be set up correctly to land in the same place as your application. For jhbuild or any sort of --prefix= install prefix this is pretty simple and works.

      For the flatpak case we need more work so that we can install into a separate DESTDIR which is the staging directory of the app we’re building. I prefer this over setting up incremental builds just for this project manually as we have no configuration information to key off.

    • Add new FoundryVcsDiffHunk and FoundryVcsDiffLine objects and the API necessary to access them via FoundryVcsDelta . Implement the git plugin version of these too.

    • Add a new FoundryGitCommitBuilder high-level helper class to make writing commit tooling easier. Provides access to the delta, hunk, and lines as well as commit details. Still needs more work though to finish off the process of staging/unstaging individual lines and try to make the state easy for the UI side.

      Add a print-project-diff test program to test that this stuff works reasonably well.

    Builder

    • Lots of little things getting brought over for the new editor implementation. Go to line, position tracking, etc.

    • Searchable diagnostics page based on build output. Also fix up how we track PTY diagnostics using the DiagnosticManager now instead of only what we could see from the PTY diagnostics.

    Manuals

    • Merge support for back/forward mouse buttons from @pjungkamp

    • Merge support for updating pathbar elements when navigating via the WebKit back/forward list, also from @pjungkamp.

    Systemd

    • Libdex-inspired fiber support for systemd, following a core design principle: fibers are just futures like any other.

      https://github.com/systemd/systemd/pull/39771

    CentOS

    • Merge vte291/libadwaita updates for 10.2

    GTK

    • Lots of digging through GdkWayland/GtkWindow to see what we can do to improve the situation around resizing windows. There is some bit of state getting saved that is breaking our intent to have a window recalculate its preferred size.

    GNOME Settings Daemon

    • Took a look over a few things we can do to reduce periodic IO requests in the housekeeping plugin.

      Submitted a few merge requests around this.

    GLib

    • Submitted a merge request adding some more file-systems to those that are considered “system file system types”. We were missing a few that resulted in extraneous checking in gsd-housekeeping.

      While researching this, using a rather large /proc/mounts I have on hand, and modifying my GLib to let me parse it rather than the system mounts, I verified that we can spend double-digit milliseconds parsing this file. Pretty bad if you have a system where the mounts can change regularly and thus all users re-parse them.

      Looking at a Sysprof recording I made, we spend a huge amount of time in things like strcmp() , g_str_hash() , and in hashtable operations like g_hash_table_resize() .

      This is ripe for a better data structure than repeated strcmp() calls.

      I first tried to use gperf to generate perfect hashes for the things we care about and that was a 10% savings. But sometimes low-tech is even better.

      In the end I went with bsearch() which is within spitting distance of the faster solutions I came up with but a much more minimal change to the existing code-base at the simple cost of keeping lists sorted.

      There is likely still more that can be done on this, with diminishing returns. Honestly, I was surprised a single change would help even this much.

    Red Hat

    This week also had me spending a significant amount of time on Red Hat related things.


      Jussi Pakkanen: 3D models in PDF documents

      news.movim.eu / PlanetGnome • 5 days ago - 18:13

    PDF can do a lot of things. One of them is embedding 3D models in the file and displaying them. The user can orient them freely in 3D space and even choose how they should be rendered (wireframe, solid, etc). The main use case for this is engineering applications .

    Supporting 3D annotations is, as expected, unexpectedly difficult because:

    1. No open source PDF viewer seems to support 3D models.
    2. Even though the format specification is available , no open source software seems to support generating files in this format (by which I mean Blender does not do it by default).

    But, again, given sufficient effort and submitting data to not-at-all-sketchy-looking 3D model conversion web sites, you can get 3D annotations to work. Almost.

    As you can probably tell, the picture above is not a screenshot. I had to take it with a cell phone camera, because while Acrobat Reader can open the file and display the result, it hard crashes before you can open Windows' screenshot tool.


      Philip Withnall: Parental controls screen time limits backend

      news.movim.eu / PlanetGnome • 19 November • 4 minutes

    Ignacy blogged recently about all the parts of the user interface for screen time limits in parental controls in GNOME. He’s been doing great work pulling that all together, while I have been working on the backend side of things. We’re aiming for this screen time limits feature to appear in GNOME 50.

    There’s a design document which is the canonical reference for the design of the backend, but to summarise it at a high level: there’s a stateless daemon, malcontent-timerd , which receives logs of the child user’s time usage of the computer from gnome-shell in the child’s session. For example, when the child stops using the computer, gnome-shell will send the start and end times of the most recent period of usage. The daemon deduplicates/merges and stores them. The parent has set a screen time policy for the child, which says how much time they’re allowed on the computer per day (for example, 4h at most; or only allowed to use the computer between 15:00 and 17:00). The policy is stored against the child user in accounts-service .

    malcontent-timerd applies this policy to the child’s usage information to calculate an ‘estimated end time’ for the child’s current session, assuming that they continue to use the computer without taking a break. If they stop or take a break, their usage – and hence the estimated end time – is updated.

    The child’s gnome-shell is notified of changes to the estimated end time and, once it’s reached, locks the child’s session (with appropriate advance warning).

    Meanwhile, the parent can query the child’s computer usage via a separate API to malcontent-timerd . This returns the child’s total screen time usage per day, which allows the usage chart to be shown to the parent in the parental controls user interface ( malcontent-control ). The daemon imposes access controls on which users can query for usage information. Because the daemon can be accessed by the child and by the parent, and needs to be write-only for the child and read-only for the parent, it has to be a system daemon.

    There’s a third API flow which allows the child to request an extension to their screen time for the day, but that’s perhaps a topic for a separate post.

    IPC diagram of screen time limits support in malcontent. Screen time limit extensions are shown in dashed arrows.

    So, at its core, malcontent-timerd is a time range store with some policy and a couple of D-Bus interfaces built on top.

    Currently it only supports time limits for login sessions, but it is built in such a way that adding support for time limits for specific apps to malcontent-timerd would be straightforward in future. The main work required for that would be in gnome-shell — recording usage on a per-app basis (for apps which have limits applied), and enforcing those limits by freezing or blocking access to apps once the time runs out. There are some interesting user experience questions to think about there before anyone can implement it — how do you prevent a user from continuing to use an app without risking data loss (for example, by killing it)? How do you unambiguously remind the user they’re running out of time for a specific app? Can we reliably find all the windows associated with a certain app? Can we reliably instruct apps to save their state when they run out of time, to reduce the risk of data loss? There are a number of bits of architecture we’d need to get in place before per-app limits could happen.

    As it stands though, the grant funding for parental controls is coming to an end. Ignacy will be continuing to work on the UI for some more weeks, but my time on it is basically up. With the funding, we’ve managed to implement digital wellbeing (screen time limits and break reminders for adults) including a whole UI for it in gnome-control-center and a fairly complex state machine for tracking your usage in gnome-shell; a refreshed UI for parental controls; parental controls screen time limits as described above; the backend for web filtering (but more on that in a future post); and everything is structured so that the extra features we want in future should bolt on nicely.

    While the features may be simple to describe, the implementation spans four projects, two buses, contains three new system daemons, two new system data stores, and three fairly unique new widgets. It’s tackled all sorts of interesting user design questions (and continues to do so). It’s fully documented, has some unit tests (but not as many as I’d like), and can be integration tested using sysexts . The new widgets are localisable, accessible, and work in dark and light mode. There are even man pages. I’m quite pleased with how it’s all come together.

    It’s been a team effort from a lot of people! Code, design, input and review (in no particular order): Ignacy , Allan , Sam , Florian , Sebastian , Matthijs , Felipe , Rob . Thank you Endless for the grant and the original work on parental controls. Administratively, thank you to everyone at the GNOME Foundation for handling the grant and paperwork; and thank you to the freedesktop.org admins for providing project hosting for malcontent!