      Jussi Pakkanen: Converting Chapterizer from Cairo + Pango to CapyPDF

      news.movim.eu / PlanetGnome • 4 January 2026 • 1 minute

    Chapterizer (not a great name, I know) is a tool I wrote to generate books. It originally used Cairo and Pango to generate PDF files. That worked and was fairly easy to get started with, but it has its own set of downsides:

    • Cairo always produces RGB PDFs, which are not accepted by printing houses
    • Cairo does not handle advanced PDF features like trim boxes
    • Pango aligns text at the top of each line, but for high-quality text output you have to do baseline alignment
    • Pango is designed to "always print something", which is to say it does transparent font substitution, for example when the chosen font does not have some glyph
    I also created CapyPDF to generate "proper" PDFs. Over the holidays I finished porting Chapterizer to it. The pipeline is now surprisingly simple: first the source text is read in, then it is shaped with HarfBuzz, and finally it is written to a PDF file with CapyPDF.

    It was grunt work. Nothing about it was particularly difficult, just dealing with the same old issues, like the fact that in PDF the page's origin is at the bottom left, whereas in Cairo it is at the top left.

    Anyhow, now that it is done we can actually test the performance of CapyPDF with a somewhat realistic setup. Currently, creating a 40-page document takes 0.4 seconds, which comes down to 0.01 seconds per page. That is fast enough for me.

      Matthew Garrett: What is a PC compatible?

      news.movim.eu / PlanetGnome • 4 January 2026 • 12 minutes

    Wikipedia says “An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models” . But what does this actually mean? The obvious literal interpretation is that for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet?

    Before we dig into that, let’s go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual . Anyone could buy the same parts from Intel and build a compatible board. They’d still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who’d turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M , an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn’t run elsewhere. CP/M’s BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn’t need to care about the underlying hardware and would run on all systems that had a working CP/M port.

    By 1979, boards based on the 8086, Intel’s successor to the 8080, were hitting the market. The 8086 wasn’t machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM’s hardware, and the rest is history.

    But one key part of this was that despite what was now MS-DOS existing only to support IBM’s hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn’t include all the code needed to run on a PC - you needed IBM’s BIOS. To begin with this wasn’t obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn’t clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be, even if fixed in ROM - this ended up having broader industry impact in Apple v. Franklin , and it became clear that clone machines using the original vendor’s ROM code weren’t going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way.

    And here’s where things diverge somewhat. Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM’s functionality, or didn’t implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you’d think wouldn’t be necessary given that’s what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the Intel 80186 , an improved 8086 that was both faster than the 8086 at the same clock speed and was also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it.

    You’d think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn’t maintain compatibility. As long as everything went via the BIOS this shouldn’t have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn’t offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them.

    And that’s what happened. IBM was the biggest player, so people targeted IBM’s platform. When BIOS interfaces weren’t sufficient they hit the hardware directly - and even if they weren’t doing that, they’d end up depending on behavioural quirks of IBM’s BIOS implementation. The market for DOS-compatible but not PC-compatible mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof.

    So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things except IBM came up with an utterly terrifying hack that bit me back in 2009 , and which ended up sufficiently codified into Intel design that it was one mechanism for breaking the original XBox security . The first 286 PC even introduced a new keyboard controller that supported better keyboards but which remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you’d need some additional media for hardware-specific drivers. It’s something that still distinguishes the PC market from the ARM desktop market. But it’s not as true as it used to be, and it’s interesting to think about whether it ever was as true as people thought.

    Let’s take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don’t implement the legacy BIOS. The entire abstraction layer that DOS relies on isn’t there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module, a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems 1 . Is this system PC compatible? By the strictest of definitions, no.

    Ok. But the hardware is broadly the same, right? There’s projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits BIOS should work just fine. And well yes, assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn’t? Old software is going to expect that my Sound Blaster is going to be on a limited set of IRQs and is going to assume that it’s going to be able to install its own interrupt handler and ACK those on the interrupt controller itself and that’s really not going to work when you have a PCI card that’s been mapped onto some APIC vector, and also if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it’s calling into UEFI to get the actual data) but trying to read the keyboard controller directly won’t 2 , so you’re still actually relying on the firmware to do the right thing but it’s not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks and while you are important and vital and I love you all you’re not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work.

    But imagine you are, or imagine you’re the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha no of course not. Yes, you can probably make sure that the PCI Sound Blaster that’s plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you’re trying to run something built with IBM Pascal 1.0 ? There’s a risk that it’ll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it’ll break. It’d work fine on an actual PC, and it won’t work here, so are we PC compatible?

    That’s a very interesting abstract question and I’m going to entirely ignore it. Let’s talk about PC graphics 3 . The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter . If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware.

    Things got worse from there. CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr , intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once 4 , and software that depended on that wouldn’t display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn’t display correctly on any future PCs either. This is going to become a theme.

    There’s never been a properly specified PC graphics platform. BIOS support for advanced graphics modes 5 ended up specified by VESA rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn’t until Microsoft specced DirectX that anything was broadly usable even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you’re going to have a bad time. This isn’t even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we’re not entirely CGA compatible . Is an IBM PC/AT with EGA PC compatible? You’d likely say “yes”, but there’s software written for the original PC that won’t work there.

    And, well, let’s go even more basic. The original PC had a well defined CPU frequency and a well defined CPU that would take a well defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It’s fine, we’d later end up with Windows crashing on fast machines because hardware details will absolutely bleed through.

    So, what’s a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it’ll run most old software, as long as it doesn’t have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it’ll potentially be unusable or crash because time is hard.

    The truth is that there’s no way we can technically describe a PC Compatible now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn’t run Flight Simulator. “PC Compatible” is a socially defined construct, just like “Woman”. We can get hung up on the details or we can just chill.


    1. Windows 7 is entirely happy to boot on UEFI systems except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven to make that work on modern systems that don’t provide BIOS compatibility ↩︎

    2. Back in the 90s and early 2000s operating systems didn’t necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting that into System Management Mode, where some software that was invisible to the OS would speak to the USB controller and then fake a response. Anyway, that’s how I made a laptop that could boot unmodified MacOS X ↩︎

    3. (my name will not be Wolfwings Shadowflight ) ↩︎

    4. Yes yes ok 8088 MPH demonstrates that if you really want to you can do better than that on CGA ↩︎

    5. and by advanced we’re still talking about the 90s, don’t get excited ↩︎

      Christian Hergert: pgsql-glib

      news.movim.eu / PlanetGnome • 2 January 2026

    Much like the s3-glib library I put together recently , I had another itch to scratch. What would it look like to have a PostgreSQL driver that used futures and fibers with libdex? This was something I wondered about more than a decade ago when writing the libmongoc network driver for 10gen (later MongoDB).

    pgsql-glib is such a library which I made to wrap the venerable libpq PostgreSQL state-machine library. It does operations on fibers and awaits FD I/O to make something that feels synchronous even though it is not.

    It also allows for something more “RAII-like” using g_autoptr() which interacts very nicely with fibers.

    API Documentation can be found here.

      Felipe Borges: Looking for Mentors for Google Summer of Code 2026

      news.movim.eu / PlanetGnome • 2 January 2026

    It is once again that pre-GSoC time of year where I go around asking GNOME developers for project ideas they are willing to mentor during Google Summer of Code . GSoC is approaching fast, and we should aim to get a preliminary list of project ideas by the end of January.

    Internships offer an opportunity for new contributors to join our community and help us build the software we love.

    @Mentors , please submit new proposals in our Project Ideas GitLab repository .

    Proposals will be reviewed by the GNOME Internship Committee and posted at https://gsoc.gnome.org/2026 . If you have any questions, please don’t hesitate to contact us .

      Jussi Pakkanen: New year, new Pystd epoch, or evolving an API without breaking it

      news.movim.eu / PlanetGnome • 2 January 2026 • 1 minute

    One of the core design points of Pystd has been that it maintains perfect API and ABI stability while also making it possible to improve the code in arbitrary ways. To see how that can be achieved, let's look at what creating a new "year epoch" looks like. It's quite simple. First you run this script:

    [Image: pystdscript.png]

    Then you add the new files to Meson build targets (I was too lazy to implement that in the script). Done. For extra points there is also a new test that mixes types of pystd2025 and pystd2026 just to verify that things work.

    As everything is inside a yearly namespace (and macros have the corresponding prefix) the symbols do not clash with each other.

    At this point in time pystd2025 is frozen so old apps (of which there are, to be honest, approximately zero) keep working forever. It won't get any new features, only bug fixes. Pystd2026, on the other hand, is free to make any changes it pleases as it has zero backwards compatibility guarantees.

    Isn't code duplication terribly slow and inefficient?

    It can be. Rather than handwaving about it, let's measure. I used my desktop computer, which has an AMD Ryzen 7 3700X.

    Compiling Pystd from scratch and running the test suite (with code for both 2025 and 2026) in both debug and optimized modes takes 3 seconds in total (1s for debug, 2s for optimized). This amounts to 2*13 compiler invocations, 2 static linker invocations and 2*5 dynamic linker invocations.

    Compiling a helloworld with standard C++ using -O2 -g also takes 3 seconds. This amounts to a single compiler invocation.

      This Week in GNOME: #230 Happy New Year!

      news.movim.eu / PlanetGnome • 2 January 2026 • 4 minutes

    Update on what happened across the GNOME project in the week from December 19 to January 02.

    GNOME Core Apps and Libraries

    Image Viewer (Loupe)

    Browse through images and inspect their metadata.

    Sophie (she/her) announces

    Image Viewer (Loupe) 48.2 and 49.2 have been released with the following fixes:

    • Considerably increased speed for listing other images in folders, especially for remote locations. This makes switching to other images available much more quickly.
    • Fix panics, probably occurring when using an action like ‘copy’ and then closing the window. The crash caused all other windows to close.
    • Fix the creation date for images that don’t provide a timezone. It was displayed as if the recorded date and time was in UTC.
    • Fix zooming in not working via the zoom menu if the resulting zoom state would still fit the image inside the window.
    • Check if the is-hidden property is available before reading it. This avoids warnings being printed when browsing images on remote locations.
    • Fix the missing beginning of user comments in the metadata.

    Loupe 50.alpha has been released as well with better error messages when files cannot be read, design fixes for always visible scrollbars (a11y), and added a limit to the zoom-out, such that the image is still visible.

    Glycin 2.1.alpha was released with XPM and XBM support, making it possible to remove the last unsandboxed image loader on Fedora.

    Maps

    Maps gives you quick access to maps all across the world.

    mlundblad says

    Maps has seen some redesigns for GNOME 50. Place information is now shown in the sidebar (which has moved to the left) on desktop, and in a bottom sheet on mobile (using an AdwMultiLayout). The public transit itinerary display has also been redesigned, using flow boxes for the overview list (possibly spanning multiple lines for long journeys with many legs), and “track segments”, displayed using the line colors, when showing details about a trip.

    [Images: place info sidebar, place info on mobile, transit itinerary overview, transit journey details]

    GNOME Circle Apps

    Ignacy Kuchciński (ignapk) announces

    Constrict by Wartybix was accepted into GNOME Circle!

    It compresses your videos to your chosen file size — useful for uploading to services with specific file size limits.

    Congratulations and welcome! 🎉

    https://flathub.org/apps/io.github.wartybix.Constrict

    [Image: Constrict screenshot]

    Third Party Projects

    vallabhvidy announces

    Happy new year 🥳🥳 The first minor release (v0.2.0) of Cube Timer was released on 1st January 🥳 Since the last TWIG update, the app has received a number of improvements and new features:

    • Support for 2×2, 4×4, 5×5, 6×6, 7×7, Skewb, Megaminx, Pyraminx, and Clock.

    • The user interface is now responsive and adapts to phone size.

    • New preferences were added to customize timer behavior and interface.

      [Image: Cube Timer on mobile]

    Alexander Vanhee reports

    Just before the new year, the Bazaar app store received its 0.7.0 update, bringing features like Flathub account support and a category with apps for your desktop environment.

    Flathub account support allows you to log in using any login method supported by Flathub. You can then bookmark apps on their pages for easy access in the future, which is ideal for keeping track of apps you want to hold onto after a reinstall.

    On the category page, you will now find a new tile for either “Adwaita” or “KDE”. The Adwaita tile allows you to view all the apps listed on arewelibadwaitayet , while the KDE tile shows all apps published by the KDE project.

    Later in the week, I looked into adding support for displaying app permissions, helping you better understand how apps break the sandbox.

    Lastly, we switched our app icon after hearing that the old “tag” icon was confusing for new users on distros where the app is preinstalled, with the added benefit that the new icon better fits the idea of a bazaar.

    [Images: Adwaita category tile, Flathub login, app permissions, new Bazaar logo]

    Gir.Core

    Gir.Core is a project which aims to provide C# bindings for different GObject based libraries.

    Marcel Tiede says

    GirCore development update: First Bits for GTK composite template support just landed. See sample . Binding for template children is still missing and more support from source generators will be added. Stay tuned for more updates along the way.

    Miscellaneous

    Guillaume Bernard reports

    Merge Request pushes are now available in GNOME Damned Lies! We recently updated the workflow in Damned Lies and you can now set the type of push for each module (direct push [the original way], GitLab Merge Request, GitHub Pull Request). At the moment, you will have to ask one of the coordinators to change the type of push for each module.

    What will happen to the original workflow? Nothing for translators and/or reviewers, but there is a very small change for committers. When committing, if the module’s configuration requires a Merge Request or a Pull Request, a new branch will be created from the branch you’re translating. For instance, a new branch update-translation-fr-from-gnome-48 is created, pushed to the repository, and a merge request is then opened through the command line. We retrieve the link to the merge request and post it as a comment in the workflow. The workflow now has another, new state: « Merge Request is Waiting Approval ».

    There are more things to do with this feature: automatically archive the workflow when the commit is merged, track all the merge requests for a module, allow module maintainers to update this setting themselves, etc. But it requires more changes and, as usual, contributions are welcome!

    Sophie (she/her) says

    I wrote a blog post with some fun numbers about GNOME: GNOME in 2025: Some Numbers

    Events

    Kristi Progri reports

    We are happy to announce that GUADEC 2026 will be held in A Coruña, Spain! For more information, please check the link: https://discourse.gnome.org/t/guadec-2026-to-be-held-in-a-coruna-spain/33193

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

      Lennart Poettering: Mastodon Stories for systemd v259

      news.movim.eu / PlanetGnome • 30 December 2025 • 1 minute

    On Dec 17 we released systemd v259 into the wild .

    In the weeks leading up to that release (and since then) I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd259 hash tag. In case you aren't using Mastodon, but would like to read up, here's a list of all 25 posts:

    I intend to do a similar series of serieses of posts for the next systemd release (v260), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.

    My series for v260 will begin in a few weeks most likely, under the #systemd260 hash tag.

    In case you are interested, here is the corresponding blog story for systemd v258 , here for v257 , and here for v256 .

      Christian Hergert: Experimenting with S3 APIs

      news.movim.eu / PlanetGnome • 29 December 2025

    My Red Hat colleague Jan Wildeboer made some posts recently playing around with Garage for S3 compatible storage. Intrigued by the prospect, I was curious what a nice libdex -based API would look like.

    Many years ago I put together a scrappy aws-glib library to do something similar but after changing jobs I failed to find any free time to finish it. S3-GLib is an opportunity to do it better thanks to all the async/future/fiber support I put into libdex.

    I should also mention what a pleasure it is to be able to quickly spin up a Garage instance to run the testsuite.

      Sam Thursfield: Musical update

      news.movim.eu / PlanetGnome • 27 December 2025 • 2 minutes

    Like many software engineers, I sometimes see my career as 20 years of failing to become a professional musician. Although it's true that in the software industry we get to stay in better hotels than most independent musicians can afford. And we rarely have to work Saturday nights.

    It's difficult to combine two passions sometimes, but I am thankful for these opportunities to be part of great independent music projects… it's always fascinating to make music and see people connecting with it, however that happens.

    Here are some highlights from 2025.

    Note: the embedded videos below seem to disappear in RSS feeds… thanks, WordPress.

    Muaré

    If you like funk , then here’s a tune for you by Muaré . Available on all streaming platforms and also here on Bandcamp . Despite being a trombonist, on this tune you can hear me playing a Yamaha Reface YC organ… one of the world’s most fun instruments to play.

    URL: https://youtu.be/ml9DIaIMwik

    Brais Ninguén and Couto 90

    If you’re after reggae then here’s a whole new album, which just came out yesterday, by Brais Ninguén and Couto 90 . That’s myself on trombone. It’s the first time I’ve participated in a full album project, and the first time I’ve made noises that will be scratched onto a vinyl record. Decades of listening to ska and reggae finally put to good use!

    The full album is out on Spotify and other streaming platforms, or you can hear a couple of tunes on Bandcamp under the name “Phase II” .

    URL: https://youtu.be/8NsMjjsM2ls


    This was recorded in the amazing Escusalla Sonora studio in the Galician mountains, with Javier Vicalo who is something of a legend in the reggae scene. Check out his discography for more great reggae.

    Rafael Briceño

    Finally here’s a tune from Rafael Briceño, a talented musician from Venezuela who is recording an entire album in video form, under the name “Ensarta”. This one was a challenge. I’ve been doing my best to listen to Willie Colon and Ruben Blades but I don’t have decades of salsa experience to draw on. You don’t make great art without leaving your comfort zone though!

    URL: https://youtu.be/sHxPwsCIMcc

    If you like Rafa’s music… which is likely… you can find a whole album named “De Regreso A Casa” on Spotify from a few years ago.

    Here is to more music in 2026. Go make some art!!