

      Michael Meeks: 2025-07-14 Monday

      news.movim.eu / PlanetGnome • 14 July

    • A day of 1:1s, marketing content review, topped off by minuting a PCC meeting.

      Toluwaleke Ogundipe: Profiling Crosswords’ Rendering Pipeline

      news.movim.eu / PlanetGnome • 14 July • 10 minutes

    For the sake of formality, if you haven't yet read the [brief] introduction to my GSoC project, here you go.

    Rendering Puzzles in GNOME Crosswords

    GNOME Crosswords currently renders puzzles in two layers. The first is a grid of what we call the layout items: the grid cells, borders between cells and around the grid, and the intersections between borders. This is as illustrated below:

    Layout items

    The game and editor implement this item grid using a set of custom Gtk widgets: PlayCell for the cells and PlayBorder for the borders and intersections, all contained and aligned in a grid layout within a PlayGrid widget. For instance, the simple.ipuz puzzle, with just its item grid, looks like this:

    Rendered puzzle with layout item grid only

    Then it renders another layer, of what we call the layout overlays, above the grid. Overlays are elements of a puzzle which do not exactly fit into the grid, such as barred borders, enumerations (for word breaks, hyphens, etc.), cell dividers and arrows (for arrowword puzzles), amongst others. The fact that overlays do not fit into the grid layout makes it practically impossible to render them using widgets. Hence the need for another layer, and the term “overlay”.

    Overlays are currently implemented in the game and editor by generating an SVG and rendering it onto a GtkSnapshot of the PlayGrid using librsvg. The overlays for the same puzzle, the item grid of which is shown above, look like this:

    Rendered layout overlays

    When laid over the item grid, they together look like this:

    Rendered puzzle with layout item grid and overlays

    All these elements (items and overlays) and their various properties and styles are stored in a GridLayout instance, which encapsulates the appearance of a puzzle in a given state. Instances of this class can then be rendered into widgets, SVG, or any other kind of output.

    The project’s main source includes svg.c, a source file containing code to generate an SVG string from a GridLayout. It provides a function to render overlays only, another for the entire layout, and a function to create an RsvgHandle from the generated SVG string.

    Crosswords, the game, uses the SVG code to display thumbnails of puzzles (though currently only for the Cats and Dogs puzzle set), and Crossword Editor displays thumbnails of puzzle templates in its greeter for users to create new puzzles from.

    Crosswords’ puzzle picker grid
    Crossword Editor’s greeter

    Other than the game and editor, puzzles are also rendered by the crosswords-thumbnailer utility. This renders an entire puzzle layout by generating an SVG string containing both the layout items and overlays, and rendering/writing it to a PNG or SVG file. There is also my GSoC project, which ultimately aims to add support for printing puzzles in the game and editor.

    The Problem

    Whenever a sizeable yet practical number of puzzles is displayed at the same time, or in quick succession, there is a noticeable lag in the user interface resulting from the rendering of thumbnails. In other words, thumbnails take too long to render!

    There are also ongoing efforts to add thumbnails to the list view in the game, for every puzzle, but the current rendering facility just can’t cut it. Printing, on the other hand, is not really affected, since it isn’t a particularly performance-critical operation.

    The Task

    My task was to profile the puzzle rendering pipeline to determine where the bottlenecks were. We would later use this information to decide the way forward, whether that be optimising the slow stages of the pipeline, replacing them, or eliminating them altogether.

    The Rendering Pipeline

    The following is an outline of the main/top-level stages involved in rendering a puzzle from an IPUZ file to an image (say, an in-memory image buffer):

    1. ipuz_puzzle_from_file(): parses a file in the IPUZ format and returns an IpuzPuzzle object representing the puzzle.
    2. grid_state_new(): creates a GridState instance, which represents the state of a puzzle grid at a particular instant and contains a reference to the IpuzPuzzle object. This class is at the core of Crosswords and is pretty quick to instantiate.
    3. grid_layout_new(): creates a GridLayout instance (as described earlier) from the GridState.
    4. svg_from_layout(): generates an SVG string from the GridLayout.
    5. svg_handle_from_string(): creates an RsvgHandle (from librsvg) from the generated SVG string and sets a stylesheet on the handle, defining the colours of layout items and overlays.
    6. rsvg_handle_render_document(): renders the generated SVG onto a Cairo surface (say, an image surface, which is essentially an image buffer), via a Cairo context.
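    Purely as an illustration of the dataflow, the six stages chain together roughly as below. The types and functions here are stand-in stubs of my own making, not the real libipuz, Crosswords or librsvg APIs:

    ```c
    #include <stdio.h>

    /* Stand-in stubs only: the real types are IpuzPuzzle, GridState,
     * GridLayout and RsvgHandle from libipuz, Crosswords and librsvg. */
    typedef struct { const char *path; } Puzzle;   /* stage 1 result */
    typedef struct { Puzzle puzzle; } State;       /* stage 2 result */
    typedef struct { State state; } Layout;        /* stage 3 result */

    static Puzzle puzzle_from_file(const char *path) { Puzzle p = { path }; return p; }
    static State  state_new(Puzzle p)                { State s = { p }; return s; }
    static Layout layout_new(State s)                { Layout l = { s }; return l; }

    /* Stage 4: in the real pipeline, svg_from_layout() walks the layout's
     * items and overlays and emits SVG markup for each of them. */
    static const char *svg_from_layout_stub(Layout l)
    {
        (void)l;
        return "<svg><!-- items + overlays --></svg>";
    }

    int main(void)
    {
        Layout layout = layout_new(state_new(puzzle_from_file("simple.ipuz")));
        const char *svg = svg_from_layout_stub(layout);

        /* Stages 5 and 6 would hand this string to librsvg
         * (svg_handle_from_string, rsvg_handle_render_document)
         * to draw it onto a Cairo image surface. */
        printf("%s\n", svg);
        return 0;
    }
    ```

    The point to notice is that the puzzle is serialised to an intermediate SVG string and parsed back before any pixels are produced, which is exactly where the profiling below focuses.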

    Profiling

    To profile the rendering pipeline specifically, I wrote up a little program to fit my purposes, which can be found here (puzzle-render-profiler.c).

    The Attempt with Sysprof

    Initially, I used Sysprof, executing the profiler program under it. Unfortunately, because Sysprof is a sampling profiler, and probably also due to its system-wide nature, the results weren’t satisfactory. The functions of interest aren’t long-running, and each runs only once per execution, so the results weren’t accurate enough and were somewhat incomplete (many nested calls were missed).

    Don’t get me wrong, Sysprof has its strengths and stands strong amongst profilers of its kind. I tried a couple of others, and Sysprof is the only one even worthy of mention here. Most importantly, I’m glad I got to use and explore Sysprof. The next time I use it won’t be my first!

    Using Callgrind

    Callgrind + QCachegrind is sincerely such a godsend!

    Callgrind is a profiling tool that records the call history among functions in a program’s run as a call-graph. By default, the collected data consists of the number of instructions executed, their relationship to source lines, the caller/callee relationship between functions, and the numbers of such calls.

    QCachegrind is a GUI to visualise profiling data. It’s mainly used as a visualisation frontend for data measured by Cachegrind/Callgrind tools from the Valgrind package.

    After a short while of reading through Callgrind’s manual, I pieced together the combination of options (not many at all) that I needed for the task, and the rest is a story of exploration, learning and excitement. With the following command line, I was all set.

    valgrind --tool=callgrind --toggle-collect=render_puzzle ./profiler puzzles/puzzle-sets/cats-and-dogs/doghouse.ipuz

    where:

    • valgrind is the Valgrind executable.
    • --tool=callgrind selects the Callgrind tool to be run.
    • --toggle-collect=render_puzzle sets render_puzzle, the core function in the profiler program, as the target for Callgrind’s data collection.
    • ./profiler is a symlink to the executable of the profiler program.
    • puzzles/puzzle-sets/cats-and-dogs/doghouse.ipuz is a considerably large puzzle.

    I visualised Callgrind’s output in QCachegrind, which is pretty intuitive to use. My focus was the Call Graph feature, which helps to graphically and interactively analyse the profile data. The graph can also be exported as an image or a DOT file (a graph description format from Graphviz). The following is a top-level view of the result of the profile run.

    Top-level profile of the rendering pipeline

    Note that Callgrind measures the number of instructions executed, not execution time exactly, but the former typically translates proportionally to the latter. The percentages shown in the graph are instruction ratios, i.e. the ratio of instructions executed within each function (and its callees) to the total number of instructions executed (within the portions of the program where data was collected).

    This graph shows that loading the generated SVG (svg_handle_from_string) takes up the highest percentage of time, followed by rendering the SVG (rsvg_handle_render_document). Note that the SVG is simply being rendered to an image buffer, so no encoding, compression, or I/O is taking place. The call with the hex number, instead of a name, simply calls g_object_unref, under which dropping the rsvg::document::Document (owned by the RsvgHandle) takes the highest percentage. Probing further into svg_handle_from_string and rsvg_handle_render_document:

    Profile of svg_handle_from_string
    Profile of rsvg_handle_render_document

    The performance of rsvg_handle_render_document improves very slightly after its first call within a program, due to some one-time initialisation in the first call, but the difference is almost insignificant. From these results, it can be seen that the root of the problem is beyond the scope of Crosswords: it is something to fix in librsvg. My mentors and I thought it would be quicker and easier to replace or eliminate these stages, but before making a final decision, we needed to see real timings first. For anyone who cares to see a colourful, humongous call graph, be my guest:

    Full (sort of) profile of the rendering pipeline

    Getting Real Timings

    The next step was to get real timings for puzzles of various kinds and sizes to make sense of Callgrind’s percentages, determine the relationship between puzzle sizes and render times, and know the maximum amount of time we can save by replacing or eliminating the SVG stages of the pipeline.

    To achieve this, I ran the profiler program over a considerably large collection of puzzles available to the GNOME Crosswords project and piped the output (it’s in CSV format) to a file, the sorted version of which can be found here (puzzle_render_profile-sorted_by_path.csv). Then, I wrote a Python script (visualize_results.py), which can also be found at the same URL, to plot a couple of graphs from the results.
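    The real profiler is linked above; as a minimal sketch of the measure-and-emit-CSV idea (the stage names and the dummy workload below are mine, standing in for the actual pipeline calls), the core of such a program looks like this:

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Monotonic wall-clock timestamp in milliseconds. */
    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    /* Dummy stand-in for one pipeline stage (parse, layout, SVG, render). */
    static void dummy_stage(void)
    {
        volatile unsigned long sink = 0;
        for (unsigned long i = 0; i < 100000; i++)
            sink += i;
    }

    int main(void)
    {
        double times[4];

        for (int i = 0; i < 4; i++) {
            double t0 = now_ms();
            dummy_stage();           /* the real profiler calls the stage here */
            times[i] = now_ms() - t0;
        }

        /* One CSV row per puzzle: path, then per-stage timings. */
        printf("puzzle,parse_ms,layout_ms,svg_ms,render_ms\n");
        printf("simple.ipuz,%.3f,%.3f,%.3f,%.3f\n",
               times[0], times[1], times[2], times[3]);
        return 0;
    }
    ```

    Redirecting this output to a file yields exactly the kind of CSV that a plotting script can consume.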

    Time vs Puzzle size (width * height) (lower is better)
    Time vs Number of layout elements (items + overlays) (lower is better)

    Both show a very similar profile from which I made the following deductions:

    • Total render times increase almost linearly with puzzle size, and so do the times of the component stages of the rendering pipeline. The exceptions (those steep valleys in the graphs) are puzzles with significant numbers of NULL cells, whose corresponding layout items are omitted during SVG generation since NULL cells are not to be rendered.
    • The stages of loading the generated SVG and rendering it take most of the time, significantly more than the other stages, just as the Callgrind graphs above show.

    Below is a stacked bar chart showing the cumulative render times for all puzzles. Note that these timings are only valid relative to one another; comparison with timings from another machine or even the same machine under different conditions would be invalid.

    Cumulative render times for all puzzles (lower is better)

    The Way Forward

    Thankfully, the maintainer of librsvg, Federico Mena Quintero, happens to be one of my mentors. He looked into the profiling results and ultimately recommended that we cut out the SVG stages (svg_from_layout, svg_handle_from_string and rsvg_handle_render_document) entirely and render using Cairo directly. For context, librsvg renders SVGs using Cairo. He also pointed out some specific sources of the librsvg bottlenecks and intends to fix them. The initial work in librsvg is in !1178 and the MR linked from there.

    This is actually no small feat, but it is already in the works (in fact, more complete than the SVG rendering pipeline at the time of writing) and is showing great promise! I’ll be writing about this soon, but here’s a little sneak peek to whet your appetite.

    Comparison of the new and old render pipeline timings with puzzles of varying sizes (lower is better)
    Before/after renders of doghouse.ipuz and catatonic.ipuz

    I’ll leave you to figure it out in the meantime.

    Conclusion

    Every aspect of the task was such a great learning experience. This happened to be my first time profiling C code and using tools like Valgrind and Sysprof; the majority of C code I had written in the past was for bare-metal embedded systems. Now, I’ve got these under my belt, and nothing is taking them away.

    That said, I will greatly appreciate your comments, tips, questions, and what have you; be it about profiling, or anything else discussed herein, or even blog writing (this is only my second time ever, so I need all the help I can get).

    Finally, but by no means least important, a wise man once told me:

    The good thing about profiling is that it can overturn your expectations :-)

    — H.P. Jansson

    Thanks

    Very big thank-yous to my mentors, Jonathan Blandford and Federico Mena Quintero, for their guidance throughout this task and my internship in general. I’ve learnt a great deal from them so far.

    Also, thank you (yes, you reading) so much for your time. Till next time… Anticipate!!!


      Sebastian Wick: Blender HDR and the reference white issue

      news.movim.eu / PlanetGnome • 13 July • 6 minutes

    The latest alpha of the upcoming Blender 5.0 release comes with High Dynamic Range (HDR) support for Linux on Wayland which will, if everything works out, make it into the final Blender 5.0 release on October 1, 2025. The post on the developer forum comes with instructions on how to enable the experimental support and how to test it.

    If you are using Fedora Workstation 42, which ships GNOME version 48, everything is already included to run Blender with HDR. All that is required is an HDR-compatible display and graphics driver, and turning on HDR in the Display Settings.

    It’s been a lot of personal blood, sweat and tears across the Linux graphics stack over the last few years, paid for by Red Hat, to enable applications like Blender to add HDR support. From kernel work, like helping to get the HDR mode working on Intel laptops and improving the Colorspace and HDR_OUTPUT_METADATA KMS properties, to creating a new library for EDID and DisplayID parsing, and helping with wiring things up in Vulkan.

    I designed the active color management paradigm for Wayland compositors, figured out how to properly support HDR, created two wayland protocols to let clients and compositors communicate the necessary information for active color management, and created documentation around all things color in FOSS graphics. This would have also been impossible without Pekka Paalanen from Collabora and all the other people I can’t possibly list exhaustively.

    For GNOME I implemented the new API design in mutter (the GNOME Shell compositor), and helped my colleagues to support HDR in GTK.

    Now that everything is shipping, applications are starting to make use of the new functionality. To see why Blender targeted Linux on Wayland, we will dig a bit into some of the details of HDR!

    HDR, Vulkan and the reference white level

    Blender’s HDR implementation relies on Vulkan’s VkColorSpaceKHR, which allows applications to specify the color space of their swap chain, enabling proper HDR rendering pipeline integration. The key color space in question is VK_COLOR_SPACE_HDR10_ST2084_EXT, which corresponds to the HDR10 standard using the ST.2084 (PQ) transfer function.

    However, there’s a critical challenge with this Vulkan color space definition: it has an undefined reference white level.

    Reference white indicates the luminance or signal level at which a diffuse white object (such as a sheet of paper, or the white parts of a UI) appears in an image. If images with different reference white levels end up at different signal levels in a composited image, the result is that the “white” in one of the images is still perceived as white, while the “white” from the other image is now perceived as gray. If you have ever scrolled through Instagram on an iPhone or played an HDR game on Windows, you will probably have noticed this effect.

    The solution to this issue is called anchoring: the reference white levels of all images need to be normalized so that “white” ends up at the same signal level in the composited image.

    Another issue with the reference white level, specific to PQ, is the prevalent myth that the absolute luminance of a PQ signal must be replicated on the actual display a user is viewing the content on. PQ is a bit of a weird transfer characteristic because any given signal level corresponds to an absolute luminance with the unit cd/m² (also known as nit). However, the absolute luminance is only meaningful in the reference viewing environment! If an image is viewed in the reference viewing environment of ITU-R BT.2100 (essentially a dark room) and an image signal of 203 nits is shown at 203 nits on the display, the image appears as the artist intended. The same is not true when the same image is viewed on a phone with the summer sun blasting on the screen from behind.

    PQ is no different from other transfer characteristics in that the reference white level needs to be anchored, and that the anchoring point does not have to correspond to the luminance values that the image encodes.

    Coming back to the definition of the Vulkan color space VK_COLOR_SPACE_HDR10_ST2084_EXT: “HDR10 (BT2020) color space, encoded according to SMPTE ST2084 Perceptual Quantizer (PQ) specification”. Neither ITU-R BT.2020 (primary chromaticities), nor ST.2084 (transfer characteristics), nor the closely related ITU-R BT.2100 defines the reference white level. In practice, the reference level of 203 cd/m² from ITU-R BT.2408 (“Suggested guidance for operational practices in high dynamic range television production”) is used. Notably, however, this is not specified in the Vulkan definition of VK_COLOR_SPACE_HDR10_ST2084_EXT.
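    To make the numbers concrete, here is a small self-contained sketch of the ST.2084 (PQ) encode/decode functions, using the constants from the SMPTE ST 2084 specification, showing where the BT.2408 reference white of 203 cd/m² lands on the signal scale. The anchoring example at the end (the 400 cd/m² output white is an arbitrary value I picked) is an illustration, not production color-management code:

    ```c
    #include <math.h>
    #include <stdio.h>

    /* ST.2084 (PQ) constants. */
    static const double M1 = 2610.0 / 16384.0;
    static const double M2 = 2523.0 / 4096.0 * 128.0;
    static const double C1 = 3424.0 / 4096.0;
    static const double C2 = 2413.0 / 4096.0 * 32.0;
    static const double C3 = 2392.0 / 4096.0 * 32.0;

    /* Absolute luminance in cd/m² -> nonlinear PQ signal in [0, 1]. */
    static double pq_encode(double nits)
    {
        double y = pow(nits / 10000.0, M1);
        return pow((C1 + C2 * y) / (1.0 + C3 * y), M2);
    }

    /* Nonlinear PQ signal in [0, 1] -> absolute luminance in cd/m². */
    static double pq_decode(double signal)
    {
        double e = pow(signal, 1.0 / M2);
        double num = fmax(e - C1, 0.0);
        return 10000.0 * pow(num / (C2 - C3 * e), 1.0 / M1);
    }

    int main(void)
    {
        /* The BT.2408 reference white sits at roughly 58% of the PQ signal
         * range, even though 203 cd/m² is only ~2% of the 10000 cd/m² peak. */
        printf("PQ signal of 203 cd/m2 reference white: %.4f\n", pq_encode(203.0));

        /* Anchoring: rescale in linear light so the content's reference
         * white maps onto the output's reference white. */
        double content_white = 203.0, output_white = 400.0;
        double pixel = pq_decode(pq_encode(100.0));  /* some pixel, in nits */
        double anchored = pixel * (output_white / content_white);
        printf("100 nits anchored to a 400 nit reference: %.1f nits\n", anchored);
        return 0;
    }
    ```

    Note that the luminance values here are only absolute in the reference viewing environment; the whole point of anchoring is that the compositor may rescale them.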

    The consequences of this? On almost all platforms, VK_COLOR_SPACE_HDR10_ST2084_EXT implicitly means that the image the application submits to the presentation engine (what we call the compositor in the Linux world) is assumed to have a reference white level of 203 cd/m², and the presentation engine adjusts the signal in such a way that the reference white level of the composited image ends up at a signal value that is appropriate for the actual viewing environment of the user. On GNOME, the way to control this is currently the “HDR Brightness” slider in the Display Settings, but it will become the regular screen brightness slider in the Quick Settings menu.

    On Windows, the misunderstanding that a PQ signal value must be replicated one-to-one on the actual display has been immortalized in the APIs. It was only when support for HDR was added to laptops that this decision was revisited, but changing the existing APIs was already impossible at that point. Their solution was to expose the reference white level in the Win32 API and task applications with continuously querying the level and adjusting the image to match it. Few applications actually do this, with most games providing a built-in slider instead.

    The reference white level of VK_COLOR_SPACE_HDR10_ST2084_EXT on Windows is essentially a continuously changing value that needs to be queried from Windows APIs outside of Vulkan.

    This has two implications:

    • It is impossible to write a cross-platform HDR application in Vulkan (if Windows is one of the targets)
    • On Wayland, a “standard” HDR signal can just be passed on to the compositor, while on Windows, more work is required

    While the cross-platform issue is solvable, and something we’re working on, the way Windows works also means that the cross-platform API might become harder to use because we cannot change the underlying Windows mechanisms.

    No Windows Support

    The result is that Blender currently does not support HDR on Windows.


    Jeroen-Bakker explaining the lack of Windows support

    The design of the Wayland color-management protocol, and the resulting active color-management paradigm of Wayland compositors, was a good choice, making it easy for developers to do the right thing, while also giving them more control if they so choose.

    Looking forward

    We have managed to transition the compositor model from a dumb blitter to a component which takes an active part in color management, we have image viewers and video players with HDR support and now we have tools for producing HDR content! While it is extremely exciting for me that we have managed to do this properly, we also have a lot of work ahead of us, some of which I will hopefully tell you about in a future blog post!


      Philip Withnall: GUADEC handbook

      news.movim.eu / PlanetGnome • 12 July

    I was reminded today that I put together some notes last year with people’s feedback about what worked well at the last GUADEC. The idea was that this could be built on, and eventually become another part of the GNOME handbook, so that we have a good checklist to organise events from each year.

    I’m not organising GUADEC, so this is about as far as I can push this proto-handbook, but if anyone wants to fork it and build on it then please feel free.

    Most of the notes so far relate to A/V things, remote participation, and some climate considerations. Obviously a lot more about on-the-ground organisation would have to be added to make it a full handbook, but it’s a start.


      Steven Deobald: 2025-07-12 Foundation Update

      news.movim.eu / PlanetGnome • 12 July • 4 minutes

    Gah. Every week I’m like “I’ll do a short one this week” and then I… do not.

    ## New Treasurers

    We recently announced our new treasurer, Deepa Venkatraman. We will also have a new vice-treasurer joining us in October.

    This is really exciting. It’s important that Deepa and I can see with absolute clarity what is happening with the Foundation’s finances, and in turn present our understanding to the Board so they share that clarity. She and I also need to start drafting the annual budget soon, which itself must be built on clear financial reporting. Few people I know ask the kind of incisive questions Deepa asks and I’m really looking forward to tackling the following three issues with her:

    • solving our financial reporting problems:
      • cash flow as a “burndown chart” that most hackers will identify with
      • clearer accrual reporting, so it’s obvious whether we’re growing or crashing
    • passing a budget on time that the Board really understands
    • helping the Board pass safer policies

    ## postmarketOS

    We are excited to announce that postmarketOS has joined the GNOME Advisory Board! This is particularly fun, because it breaks GNOME out of its safe shell. GNOME has had a complete desktop product for 15 years. Phones and tablets are the most common computers in the world today and the obvious next step for GNOME app developers. It’s a long hard road to win the mobile market, but we will. 🙂

    (I’m just going to keep saying that because I know some people think it’s extraordinarily silly… but I do mean it.)

    ## Sustain? Funding? Jobs?

    We’ve started work this week on the other side of the coin for donate.gnome.org. We’re not entirely sure which subdomain it will live at yet, but the process of funding contributors needs its own home. This page will celebrate the existing grant and contract work going on in GNOME right now (such as Digital Wellbeing) but it will also act as the gateway where contributors can apply for travel grants, contracts, fellowships, and other jobs.

    ## PayPal

    Thanks to Bart, donate.gnome.org now supports PayPal recurring donations, for folks who do not have credit cards.

    We hear you: EUR presentment currency is a highly-requested feature and so are yearly donations. We’re still working away at this. 🙂

    ## Hardware Pals

    We’re making some steady progress toward relationships with Framework Computer and Slimbook where GNOME developers can help them ensure their hardware always works perfectly, out of the box. Great folks at both companies and I’m excited to see all the bugs get squashed. 🙂

    ## Stuff I’m Dropping

    Oh, friends. I should really be working on the Annual Report… but other junk keeps coming up! Same goes for my GUADEC talk. And the copy for jobs.gnome.org … argh. Sorry Sam! haha

    Thanks to everyone who’s contributed your thoughts and ideas to the Successes for Annual Report 2025 issue. GNOME development is a firehose and you’re helping me drink it. More thoughts and ideas still welcome!

    ## It’s Not 1998

    Emmanuele and I had a call this week. There was plenty of nuance and history behind that conversation that would be too difficult to repeat here. However, he and I have similar concerns surrounding communication, tone, tools, media, and moderation: we both want GNOME to be as welcoming a community as it is a computing and development platform. We also agreed that the values which bind us as a community are those directly linked to GNOME’s mission.

    This is a significant challenge. Earth is a big place, with plenty of opinions, cultures, languages, and ideas. We are all trying our best to resolve the forces in tension. Carefully, thoughtfully.

    We both had a laugh at the truism, “it’s not 1998.” There’s a lot that was fun and exciting and uplifting about the earlier internet… but there was also plenty of space for nastiness. Those of us old enough to remember it (read: me) occasionally make the mistake of speaking in the snarky, biting tones that were acceptable back then. As Official Old People, Emmanuele and I agreed we had to work even harder to set an example for the kind of dialogue we hope to see in the community.

    Part of that effort is boosting other people’s work. You don’t have to go full shouty Twitter venture capitalist about it or anything… just remember how good it felt the first time someone congratulated you on some good work you did, and pass that along. A quick DM or email can go a long way to making someone’s week.

    Thanks Emmanuele, Brage, Bart, Sid, Sri, Alice, Michael, and all the other mods for keeping our spaces safe and inviting. It’s thankless work most of the time but we’re always grateful.

    ## Office Hours

    We tried out “office hours” today: one hour for Foundation Members to come and chat. Bring a tea or coffee, tell me about your favourite GUADEC, tell me what a bad job I’m doing, explain where the Foundation needs to spend money to make GNOME better, ask a question… anything. The URL is only published on private channels for, uh, obvious reasons. See you next week!

    Donate to GNOME


      Jussi Pakkanen: AI slop is the new smoking

      news.movim.eu / PlanetGnome • 11 July


      Ignacy Kuchciński: Digital Wellbeing Contract

      news.movim.eu / PlanetGnome • 11 July • 1 minute

    This month I have been accepted as a contractor to work on the Parental Controls frontend and integration as part of the Digital Wellbeing project. I’m very happy to take part in this cool endeavour, and very grateful to the GNOME Foundation for giving me this opportunity – special thanks to Steven Deobald and Allan Day for interviewing me and helping me connect with the team, despite our timezone compatibility 😉

    The idea is to redesign the Parental Controls app UI to bring it on par with modern GNOME apps, and to integrate parental controls into the GNOME Shell lock screen in collaboration with gnome-control-center. There are also new features to be added, such as Screen Time monitoring and limits, Bedtime Schedule, and Web Filtering support. The project has been going for quite some time, and a lot of great work has been put into both the designs, by Sam Hewitt, and the backend, by Philip Withnall, who’s been really helpful teaching me about the project’s code practices and reviewing my MRs. See the designs in the app-mockups ticket and the os-mockups ticket.

    We started implementing the design mockup MVP for Parental Controls, which you can find in the app-mockup ticket. We’re trying to meet the GNOME 49 release deadlines, but as always that’s a goal rather than a certain milestone. So far we have finished the redesign of the current Parental Controls app without adding any new features: refreshing the UI of the unlock page, reworking the user selector to be a list rather than a carousel, and changing navigation to use pages. This will be followed by adding pages for Screen Time and Web Filtering.

    Refreshed unlock page Reworked user selector Navigation using pages, Screen Time and Web Filtering to be added

    I want to thank the team for helping me get on board and being generally just awesome to work with 😀 Until next update!


      This Week in GNOME: #208 Converting Colors

      news.movim.eu / PlanetGnome • 11 July • 3 minutes

    Update on what happened across the GNOME project in the week from July 04 to July 11.

    GNOME Core Apps and Libraries

    Calendar

    A simple calendar application.

    FineFindus reports

    GNOME Calendar now allows exporting events as .ics files, allowing them to be easily shared.


    GNOME Development Tools

    GNOME Builder

    IDE for writing GNOME-based software.

    Nokse reports

    This week GNOME Builder received some new features!

    • Inline git blame to see who last modified each line of code
    • Changes and diagnostics overview displayed directly in the scrollbar
    • Enhanced LSP markdown rendering with syntax highlighting


    GNOME Circle Apps and Libraries

    Déjà Dup Backups

    A simple backup tool.

    Michael Terry says

    Déjà Dup Backups 49.alpha is out for testing!

    It features a UI refresh and file-manager-based restores (for Restic only).

    Read the announcement for install instructions and more info.

    Any feedback is appreciated!

    Third Party Projects

    Dev Toolbox

    Dev tools at your fingertips

    Alessandro Iepure reports

    When I first started Dev Toolbox, it was just a simple tool I built for myself, a weekend project born out of curiosity and the need for something useful. I never imagined anyone else would care about it, let alone use it regularly. I figured maybe a few developers here and there would find it helpful. But then people started using it. Reporting bugs. Translating it. Opening pull requests. Writing reviews. Sharing it with friends. And suddenly, it wasn’t just my toolbox anymore. Fast forward to today: over 50k downloads on Flathub and 300 stars on GitHub. I still can’t quite believe it.

    To every contributor, translator, tester, reviewer, or curious user who gave it a shot: thank you. You turned a small idea into something real, something useful, and something I’m proud to keep building.

    Enough feelings. Let’s talk about what’s new in v1.3.0!

    • New tool: Color Converter. Convert between HEX, RGB, HSL, and other formats. (Thanks @Flachz)
    • JWT tool improvements: you can now encode payloads and verify signatures. (Thanks @Flachz)
    • Chmod tool upgrade: added support for setuid, setgid, and the sticky bit. (Thanks @Flachz)
    • Improved search, inside and out:
      • The app now includes extra keywords and metadata, making it easier to discover in app stores and desktops
      • In-app search now matches tool keywords, not just their names. (Thanks @freeducks-debug)
    • Now a GNOME search provider: you can search and launch Dev Toolbox tools straight from the Overview
    • Updated translations: many new translatable strings were added this release. Thank you to all translators who chipped in.

    GNOME Websites

    Victoria 🏳️‍⚧️🏳️‍🌈 she/her reports

    On welcome.gnome.org, of all the listed teams, only the Translation and Documentation Teams linked to their wikis instead of their respective Welcome pages. But now this changes for the Translation Team! After several months of work we finally have our own Welcome page . Now is the best time to make GNOME speak your language!

    Miscellaneous

    Arjan reports

    PyGObject has support for async functions since 3.50. Now the async functions and methods are also discoverable from the GNOME Python API documentation .

    GNOME Foundation

    steven announces

    The 2025-07-05 Foundation Update is out:

    • Grants and Fellowships Plan
    • Friends of GNOME, social media partners, shell notification
    • Annual Report… I haven’t done it yet
    • Fiscal Controls and Operational Resilience… yay?
    • Digital Wellbeing Frontend Kickoff
    • Office Hours
    • A Hacker in Need

    https://blogs.gnome.org/steven/2025/07/05/2025-07-05-foundation-update/

    Digital Wellbeing Project

    Ignacy Kuchciński (ignapk) reports

    As part of the Digital Wellbeing project, sponsored by the GNOME Foundation, there is an initiative to redesign the Parental Controls to bring it on par with modern GNOME apps and implement new features such as Screen Time monitoring, Bedtime Schedule and Web Filtering. Recently the UI for the unlock page was refreshed, the user selector was reworked to be a list rather than a carousel, and navigation was changed to use pages. There’s more to come, see https://blogs.gnome.org/ignapk/2025/07/11/digital-wellbeing-contract/ for more information.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

      Thibault Martin: Kubernetes is not just for Black Friday

      news.movim.eu / PlanetGnome • 9 July • 12 minutes

    I self-host services mostly for myself. My threat model is particular: the highest threats I face are my own incompetence and hardware failures. To mitigate the weight of my incompetence, I relied on podman containers to minimize the amount of things I could misconfigure. I also wrote ansible playbooks to deploy the containers on my VPS, thus making it easy to redeploy them elsewhere if my VPS failed.

    I've always ruled out Kubernetes as overly complex machinery designed for large organizations that face significant surges in traffic during specific events like Black Friday sales. I thought Kubernetes had too many moving parts and would work against my objectives.

    I was wrong. Kubernetes is not just for large organizations with scalability needs I will never have. Kubernetes makes perfect sense for a homelabber who cares about having a simple, sturdy setup. It has fewer moving parts than my podman and ansible setup, follows more standard development and deployment practices, and it allows me to rely on the cumulative expertise of thousands of experts.

    I don't want to do things manually or alone

    Self-hosting services is much more difficult than just putting them online . This is a hobby for me, something I do in my free time, so I need to spend as little time on maintenance as possible. I also know I don't have peers to review my deployments. If I have to choose between using standardized methods that have been reviewed by others or doing things my own way, I will use the standardized method.

    My main threats are:

    I can and will make mistakes. I am an engineer, but my current job is not to maintain services online. In my homelab, I am also a team of one. This means I don't have colleagues to spot the mistakes I make.

    [!info] This means I need to use battle-tested and standardized software and deployment methods.

    I have limited time for it. I am not on call 24/7. I want to enjoy time off with my family. I have work to do. I can't afford to spend my life in front of a computer to figure out what's wrong.

    [!info] This means I need to have a reliable deployment. I need to be notified when something goes in the wrong direction, and when something went completely wrong.

    My hardware can fail, or be stolen. Having working backups is critical. But if my hardware failed, I would still need to restore backups somewhere.

    [!info] This means I need to be able to rebuild my infrastructure quickly and reliably, and restore backups on it.

    I was doing things too manually and alone

    Since I wanted to get standardized software, containers seemed like a good idea. podman was particularly interesting to me because it can generate systemd services that will keep the containers up and running across restarts.

    I could have deployed the containers manually on my VPS and generated the systemd services by invoking the CLI. But I would then risk making small tweaks on the spot, resulting in a deployment that is difficult to replicate elsewhere.

    Instead, I wrote an ansible playbook based on the containers.podman collection and other ansible modules. This way, ansible deploys the right containers on my VPS, copies or updates the config files for my services, and I can easily replicate this elsewhere.

    It has served me well and worked decently for years now, but I'm starting to see the limits of this approach. Indeed, in their introduction , the ansible maintainers state:

    Ansible uses simple, human-readable scripts called playbooks to automate your tasks. You declare the desired state of a local or remote system in your playbook. Ansible ensures that the system remains in that state.

    This is mostly true for ansible, but this is not really the case for the podman collection. In practice I still have to do manual steps in a specific order, like creating a pod first, then adding containers to the pod, then generating a systemd service for the pod, etc.

    To give you a very concrete example, this is what the tasks/main.yaml of my Synapse (Matrix) server deployment role looks like.

    - name: Create synapse pod
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        state: created
    
    - name: Stop synapse pod
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        state: stopped
    
    - name: Create synapse's postgresql
      containers.podman.podman_container:
        name: synapse-postgres
        image: docker.io/library/postgres:{{ synapse_container_pg_tag }}
        pod: pod-synapse
        volume:
          - synapse_pg_pdata:/var/lib/postgresql/data
          - synapse_backup:/tmp/backup
        env:
          {
            "POSTGRES_USER": "{{ synapse_pg_username }}",
            "POSTGRES_PASSWORD": "{{ synapse_pg_password }}",
            "POSTGRES_INITDB_ARGS": "--encoding=UTF-8 --lc-collate=C --lc-ctype=C",
          }
    
    - name: Copy Postgres config
      ansible.builtin.copy:
        src: postgresql.conf
        dest: /var/lib/containers/storage/volumes/synapse_pg_pdata/_data/postgresql.conf
        mode: "600"
    
    - name: Create synapse container and service
      containers.podman.podman_container:
        name: synapse
        image: docker.io/matrixdotorg/synapse:{{ synapse_container_tag }}
        pod: pod-synapse
        volume:
          - synapse_data:/data
          - synapse_backup:/tmp/backup
        labels:
          {
            "traefik.enable": "true",
            "traefik.http.routers.synapse.entrypoints": "websecure",
            "traefik.http.routers.synapse.rule": "Host(`matrix.{{ base_domain }}`)",
            "traefik.http.services.synapse.loadbalancer.server.port": "8008",
            "traefik.http.routers.synapse.tls": "true",
            "traefik.http.routers.synapse.tls.certresolver": "letls",
          }
    
    - name: Copy Synapse's homeserver configuration file
      ansible.builtin.template:
        src: homeserver.yaml.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/homeserver.yaml
        mode: "600"
    
    - name: Copy Synapse's logging configuration file
      ansible.builtin.template:
        src: log.config.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/{{ matrix_server_name }}.log.config
        mode: "600"
    
    - name: Copy Synapse's signing key
      ansible.builtin.template:
        src: signing.key.j2
        dest: /var/lib/containers/storage/volumes/synapse_data/_data/{{ matrix_server_name }}.signing.key
        mode: "600"
    
    - name: Generate the systemd unit for Synapse
      containers.podman.podman_pod:
        name: pod-synapse
        publish:
          - "10.8.0.2:9000:9000"
        generate_systemd:
          path: /etc/systemd/system
          restart_policy: always
    
    - name: Enable synapse unit
      ansible.builtin.systemd:
        name: pod-pod-synapse.service
        enabled: true
        daemon_reload: true
    
    - name: Make sure synapse is running
      ansible.builtin.systemd:
        name: pod-pod-synapse.service
        state: started
        daemon_reload: true
    
    - name: Allow traffic in monitoring firewalld zone for synapse metrics
      ansible.posix.firewalld:
        zone: internal
        port: "9000/tcp"
        permanent: true
        state: enabled
      notify: firewalld reload
    

    I'm certain I'm doing some things wrong and that this file could be shortened and improved, but that is precisely my point: I'm writing a file specifically for my needs that is not peer reviewed.

    Upgrades are also not necessarily trivial. While in theory it's as simple as updating the image tag in my playbook variables, in practice things get more complex when some containers depend on others.

    [!info] With Ansible, I must describe precisely the steps my server has to go through to deploy the new containers, how to check their health, and how to roll back if needed.

    Finally, discoverability of services is not great. I used traefik as a reverse proxy and gave it access to the docker socket so it could read the labels of my other containers (like in the labels section of the yaml file above, containing the domain to use for Synapse), figure out what domain names I used, and route traffic to the correct containers. I wish a similar mechanism existed for e.g. Prometheus to find new resources and scrape their metrics automatically, but I didn't find any. Configuring Prometheus to scrape my pods was brittle and required a lot of manual work.
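For reference, the kind of hand-maintained Prometheus scrape configuration I mean looked roughly like this (a sketch, not my actual config; the job name and target address are placeholders):

```yaml
# Hypothetical static scrape config: every new or moved container means
# editing this file by hand and reloading Prometheus.
scrape_configs:
  - job_name: synapse
    static_configs:
      - targets:
          - "10.8.0.2:9000"
```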

    Working with Kubernetes and its community

    What I need is a tool that lets me write "I want to deploy the version X of Keycloak." I need it to be able to figure out by itself what version is currently running, what needs to be done to deploy the version X, whether the new deployment goes well, and how to roll back automatically if it can't deploy the new version.

    The good news is that this tool exists. It's called Kubernetes, and contrary to popular belief it's not just for large organizations that run services for millions of people and see surges in traffic for Black Friday sales. Kubernetes is software that runs on one or several servers, forming a cluster, and uses their resources to run the containers you asked it to.

    Kubernetes gives me more standardized deployments

    To deploy services on Kubernetes you have to describe what containers to use, how many of them will be deployed, how they're related to one another, etc. To describe these, you use yaml manifests that you apply to your cluster. Kubernetes takes care of the low-level implementation so you can describe what you want to run, how much resources you want to allocate to it, and how to expose it.

    The Kubernetes docs give the following manifest for an example Deployment that spins up 3 nginx containers (without exposing them outside of the cluster):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.14.2
            ports:
            - containerPort: 80
    

    From my laptop, I can apply this file to my cluster by using kubectl , the CLI to control kubernetes clusters.

    $ kubectl apply -f nginx-deployment.yaml
    

    [!info] With Kubernetes, I can describe my infrastructure as yaml files. Kubernetes will read them, deploy the right containers, and monitor their health.

    If there is already a well-established community around a project, chances are that some people already maintain Helm charts for it. Helm charts describe how the containers, volumes and all the other Kubernetes objects are related to one another. When a project is popular enough, people will write charts for it and publish them on https://artifacthub.io/.

    Those charts are open source and can be peer reviewed. They already describe all the containers, services, and other Kubernetes objects needed to make a service run. To use them, I only have to define configuration variables, called Helm values , and I can get a service running in minutes.

    To deploy a fully fledged Keycloak instance on my cluster, I need to override default parameters in a values.yaml file, for example:

    ingress:
      enabled: true
      hostname: keycloak.ergaster.org
      tls: true
    

    This is a short example. In practice, I need to override more values to fine-tune my deployment and make it production ready. Once that's done, I can deploy a configured Keycloak on my cluster by typing these commands on my laptop:

    $ helm repo add bitnami https://charts.bitnami.com/bitnami
    $ helm install my-keycloak -f values.yaml bitnami/keycloak --version 24.7.4
    

    In practice I don't use helm on my laptop; instead, I write yaml files in a git repository describing which Helm charts I want to use on my cluster and how to configure them. Then Flux, a software suite running on my Kubernetes cluster, detects changes in the git repository and applies them to my cluster. This makes changes even easier to track and roll back. More on that in a future blog post.
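As a sketch of what lives in that repository, a Flux HelmRelease for the Keycloak chart could look like this (names, namespaces and the interval are illustrative, and I'm assuming the bitnami HelmRepository source is declared elsewhere):

```yaml
# Hypothetical Flux HelmRelease: Flux reads this file from git and
# installs/upgrades the chart on the cluster to match it.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: keycloak
  namespace: keycloak
spec:
  interval: 1h
  chart:
    spec:
      chart: keycloak
      version: "24.7.4"
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
  valuesFrom:
    - kind: ConfigMap
      name: keycloak-values
```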

    Of course, it would be reckless to deploy charts you don't understand, because you wouldn't be able to chase problems down as they arise. But having community-maintained, peer-reviewed charts gives you a solid base to configure and deploy services. This minimizes the room for error.

    [!info] Helm charts let me benefit from the expertise of thousands of experts. I need to understand what they did, but I have a solid foundation to build on

    But while a Kubernetes cluster is easy to use , it can be difficult to set up . Or is it?

    Kubernetes doesn't have to be complex

    A fully fledged Kubernetes cluster is a complex beast. Kubernetes, often abbreviated k8s, was initially designed to run on several machines. When setting up a cluster, you have to choose between components you know nothing about. What do I want for the network of my cluster? I don't know, I don't even know how the cluster network is supposed to work, and I don't want to know! I want a Kubernetes cluster, not a Lego toolset!

    [!info] A fully fledged cluster solves problems for large companies with public facing services, not problems for a home lab.

    Quite a few cloud providers offer managed cluster options so you don't have to worry about it, but they are expensive for an individual. In particular, they charge fees depending on the amount of outgoing traffic (egress fees). Those are difficult to predict.

    Fortunately, a team of brilliant people has created an opinionated bundle of software to deploy a Kubernetes cluster on a single server (though it can form a cluster with several nodes too). They cheekily call it k3s to advertise it as a small k8s.

    [!info] For a self-hosting enthusiast who wants to run Kubernetes on a single server, k3s works like k8s, but installing and maintaining the cluster itself is much simpler.

    Since I don't have High Availability needs and can afford for my services to go offline occasionally, k3s on a single node is more than enough to let me play with Kubernetes without the extra complexity.

    Installing k3s can be as simple as running the one-liner they advertise on their website:

    $ curl -sfL https://get.k3s.io | sh -
    

    I've found their k3s-ansible playbook more useful, since it does a few checks on the cluster host and automatically copies the kubectl configuration files to your laptop.

    Discoverability on Kubernetes is fantastic

    In my podman setup, I loved how traefik could read the labels of a container, figure out what domains I used, and where to route traffic for this domain.

    Not only is the same thing true for Kubernetes, it goes further. cert-manager, the standard way to retrieve certs in a Kubernetes cluster, will read Ingress properties to figure out what domains I use and retrieve certificates for them. If you're not familiar with Kubernetes, an Ingress is a Kubernetes object telling your cluster "I want to expose those containers to the web, and this is how they can be reached."
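To make this concrete, here is a minimal sketch of such an Ingress with a cert-manager annotation (the issuer, service name and port are placeholders for whatever you configured):

```yaml
# Hypothetical Ingress: cert-manager sees the annotation and the tls
# section, requests a certificate for the host, and stores it in the secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  rules:
    - host: keycloak.ergaster.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
  tls:
    - hosts:
        - keycloak.ergaster.org
      secretName: keycloak-tls
```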

    Kubernetes also has an Operator pattern. When I install a service on my cluster, it often comes with a specific Kubernetes object that the Prometheus Operator can read to know how to scrape the service.
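With the Prometheus Operator, that object is typically a ServiceMonitor; a minimal sketch (the label selector and port name are illustrative) looks like this:

```yaml
# Hypothetical ServiceMonitor: the Prometheus Operator watches these
# objects and configures Prometheus to scrape matching Services.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: synapse
spec:
  selector:
    matchLabels:
      app: synapse
  endpoints:
    - port: metrics
      interval: 30s
```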

    All of this happens by adding an annotation or two in yaml files. I don't have to fiddle with networks. I don't have to configure things manually. Kubernetes handles all that complexity for me.

    Conclusion

    It was a surprise for me to realize that Kubernetes is not the complex beast I thought it was. Kubernetes internals can be difficult to grasp, and there is a steeper learning curve than with docker-compose. But it's well worth the effort.

    I got into Kubernetes by deploying k3s on my Raspberry Pi 4. I then moved to a beefier mini PC, not because Kubernetes added too much overhead, but because the CPU of the Raspberry Pi 4 is too weak to handle my encrypted backups .

    With Kubernetes and Helm, I have more standardized deployments. The open source services I deploy have been crafted and reviewed by a community of enthusiasts and professionals. Kubernetes handles a lot of the complexity for me, so I don't have to. My k3s cluster runs on a single node. My volumes live on my disk (via Rancher's local path provisioner ). And I still don't do Black Friday sales!