
      Sebastian Wick: Flatpak Pre-Installation Approaches

      news.movim.eu / PlanetGnome • 13 December • 3 minutes

    Together with my then-colleague Kalev Lember, I recently added support for pre-installing Flatpak applications. It sounds fancy, but it is conceptually very simple: Flatpak reads configuration files from several directories to determine which applications should be pre-installed. It then installs any missing applications and removes any that are no longer supposed to be pre-installed (with some small caveats).

    For example, the following configuration tells Flatpak to install the devel branch of the app org.test.Foo from remotes which serve the collection org.test.Collection, and the app org.test.Bar from any remote:

    [Flatpak Preinstall org.test.Foo]
    CollectionID=org.test.Collection
    Branch=devel
    
    [Flatpak Preinstall org.test.Bar]
    

    By dropping in another configuration file with a higher priority, pre-installation of the app org.test.Foo can be disabled:

    [Flatpak Preinstall org.test.Foo]
    Install=false
    

    The installation procedure is the same as for the flatpak-install command. It supports installing from remotes and from sideload repositories, which is to say from a repository on a filesystem.

    This simplicity also means that system integrators are responsible for assembling all the parts into a functioning system, and that there are a number of choices that need to be made for installation and upgrades.

    The simplest way to approach this is to just ship a bunch of config files in /usr/share/flatpak/preinstall.d and config files for the remotes from which the apps are available. In the installation procedure, flatpak-preinstall is called and it will download the Flatpaks from the remotes over the network into /var/lib/flatpak. This works just fine, until someone needs one of those apps but doesn’t have a suitable network connection.
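
    Concretely, the pieces involved might look like this (a sketch: the file name is made up, and the exact flatpak-preinstall invocation may differ on your distribution):

    # Shipped by the OS vendor (hypothetical file name):
    #   /usr/share/flatpak/preinstall.d/10-default-apps.preinstall
    #
    #   [Flatpak Preinstall org.test.Bar]

    # Run once during OS installation; downloads anything missing
    # from the configured remotes into /var/lib/flatpak:
    flatpak preinstall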

    The next way one could approach this is exactly the same way, but with a sideload repository on the installation medium which contains the apps that will get pre-installed. The flatpak-preinstall command needs to be pointed at this repository at install time, and the process which creates the installation medium needs to be adjusted to create this repository. The installation process now works without a network connection. System updates are usually downloaded over the network, just as new pre-installed applications will be.
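
    For instance, if the installer mounts the medium under /run/install, the invocation might look something like this (the option name and the paths are assumptions; check flatpak-preinstall(1) for the exact spelling):

    # Hypothetical: resolve pre-installs against an OSTree repo shipped
    # on the installation medium instead of over the network.
    flatpak preinstall --sideload-repo=/run/install/repo/flatpak/repo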

    It is also possible to skip flatpak-preinstall entirely and use flatpak-install to create a Flatpak installation containing the pre-installed apps, which is then shipped on the installation medium. This installation can be copied from the installation medium to /var/lib/flatpak during the installation process. No network connection is needed, but it unfortunately also makes the installation process less flexible because it becomes impossible to dynamically build the configuration.
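
    One way to assemble such an installation is with a separate, named Flatpak installation on the build machine; this is a minimal sketch with made-up names and paths (see flatpak-installation(5) for the config format):

    # /etc/flatpak/installations.d/media.conf on the build machine:
    #   [Installation "media"]
    #   Path=/srv/image-build/flatpak

    flatpak --installation=media remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
    flatpak --installation=media install -y flathub org.test.Foo org.test.Bar
    # The build process then copies /srv/image-build/flatpak onto the medium,
    # and the installer copies it to /var/lib/flatpak.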

    On modern, image-based operating systems, it might be tempting to just ship this Flatpak installation on the image because the flexibility is usually neither required nor wanted. This currently does not work for the simple reason that the default system installation is in /var/lib/flatpak, which is not under /usr, the mount point of the image. If the default system installation were in the image, it would be read-only because the image is read-only, which means we could not update or install anything new to the system installation. If we make it possible to have two different system installations — one in the image, and one in /var — then we could update and install new things, but the installation on the image would become useless over time because all the runtimes and apps will end up in /var anyway as they get updated.

    All of those issues mean that even for image-based operating systems, pre-installation via a sideload repository is not a bad idea for now. It is, however, also not perfect. The kind of “pure” installation medium which is simply an image now contains a sideload repository. It also means that factory-reset functionality is not possible, because the image does not contain the pre-installed apps.

    In the future, we will need to revisit these approaches to find a solution that works seamlessly with image-based operating systems and supports factory reset functionality. Until then, we can use the systems mentioned above to start rolling out pre-installed Flatpaks.

      Asman Malika: My Outreachy Journey: From curiosity to contribution

      news.movim.eu / PlanetGnome • 9 December • 3 minutes

    Hello! I’m Asman Malika, and I still can’t quite believe I’m writing this as an Outreachy intern.

    I’m working on the GNOME project: Improving document signing in GNOME Document Viewer (Papers), focusing on adding both manual and digital signing features and improving the user interface for a smoother signing experience.

    Before I could even imagine working on a project like GNOME Papers, I was exploring a new side of software development. Just 19 months ago, I barely knew anything about coding. No degree, no bootcamp. Just curiosity, determination, and the urge to prove to myself that I belonged in tech.

    Today, I work with Rust, Go, JavaScript, C, and React, and I contribute to open-source projects. But the road to this point? Let’s just say it wasn’t straight.

    The struggles that led me here

    I’ve applied to opportunities before, and been rejected. Sometimes because of my identity. Sometimes because I didn’t have enough experience or a formal degree. Every rejection whispered the same doubt: maybe I wasn’t ready yet.

    But each rejection also pushed me to look for a space where effort, curiosity, and willingness to learn mattered more than credentials on paper. And then I found Outreachy. The moment I read about the program, it clicked: this was a place built for people like me.

    Why Outreachy felt different

    I didn’t just apply because I wanted an internship. I applied because I wanted to contribute meaningfully to real-world projects. After months of learning, experimenting, and self-teaching, I wanted to show that persistence counts, and that your journey doesn’t need to follow a traditional path to matter.

    The community aspect drew me in even more. Reading about past interns who started exactly where I was gave me hope. Every line of code I wrote during the application period felt like a building block towards improving myself. And the support from mentors and the wider community? I truly appreciate every bit of it.

    The contribution phase: chaos, learning, and late nights

    The contribution period tested my patience and resilience. Imagine this: working a full-time job (where I was still learning software development skills) during the day, then switching gears at night to contribute to Outreachy projects.

    Most of my real contribution time came late at night, fueled by curiosity, determination, and maybe a little too much coffee. I had to adapt and learn quickly: from understanding unfamiliar project structures to reading documentation, asking questions (which was terrifying at first), and sometimes struggling more than I expected.

    Some tasks took hours longer than anticipated. Some pull requests needed multiple revisions. Some nights, imposter syndrome kicked in.

    But every challenge taught me something meaningful. I learned how open-source communities operate: writing clean code, submitting patches, communicating clearly, and staying consistent. The biggest surprise? Collaborating in public. At first, it felt intimidating, every question, every mistake visible to everyone. But gradually, it became empowering. Asking for help isn’t weakness; it’s how real developers grow.

    Contributions I’m proud of

    I fixed bugs. Improved documentation. Implemented and tested features. Helped refine workflows.

    But here’s the truth: the real achievement wasn’t the list of tasks, it was consistency. I showed up when it was hard. I learned to work efficiently in a community, and contributed in ways that genuinely helped me grow as a developer.

    Even small contributions taught me big lessons. Each merged pull request felt like a win. Each piece of mentor feedback felt like progress. Every late night debugging was worth it because I was building something real.

    What I hope to gain

    I want to deepen my technical skills, learn best practices from my mentors, and make contributions that truly matter. I also hope to grow my confidence in open-source collaboration and continue growing as a software developer.

    Throughout this journey, I want to document my progress and share my experiences with the community, reflecting on what I learn and hopefully inspiring others along the way.

      Michael Catanzaro: Significant Drag and Drop Vulnerability in WebKitGTK

      news.movim.eu / PlanetGnome • 9 December

    WebKitGTK 2.50.3 contains a workaround for CVE-2025-13947, an issue that allows websites to exfiltrate files from your filesystem. If you’re using Epiphany or any other web browser based on WebKitGTK, then you should immediately update to 2.50.3.

    Websites may attach file URLs to drag sources. When the drag source is dropped onto a drop target, the website can read the file data for its chosen files, without any restrictions. Oops. Suffice to say, this is not how drag and drop is supposed to work. Websites should not be able to choose for themselves which files to read from your filesystem; only the user is supposed to be able to make that choice, by dragging the file from an external application. That is, drag sources created by websites should not receive file access.

    I failed to find the correct way to fix this bug in the two afternoons I allowed myself to work on this issue, so instead my overly-broad solution was to disable file access for all drags. With this workaround, the website will only receive the list of file URLs rather than the file contents.

    Apple platforms are not affected by this issue.

      Laura Kramolis: Rewriting Cartridges

      news.movim.eu / PlanetGnome • 9 December • 4 minutes

    Gamepad support, collections, instant imports, and more!

    Cartridges is, in my biased opinion, the best game launcher out there. To use it, you do not need to wait 2 minutes for a memory-hungry Electron app to start up before you can start looking for what you want to play. You don’t need to sign into anything. You don’t need to spend 20 minutes configuring it. You don’t need to sacrifice your app menu, filling it with low-resolution icons designed for Windows that don’t disappear after you uninstall a game. You install the app, click “Import”, and all your games from anywhere on your computer magically appear.

    It was also the first app I ever wrote. From this, you can probably already guess that it is an unmaintainable mess. It’s both under- and over-engineered, it is full of bad practices, and most importantly, I don’t trust it. I’ve learned a lot since then. I’ve learned so much that if I were to write the app again, I would approach it completely differently. Since Cartridges is the preferred way to launch games for so many other people as well, I feel it is my duty as a maintainer to give it my best shot and do just that: rewrite the app from scratch.

    Zoey, Jamie, and I have been working on this for the past two weeks and we’ve made really good progress so far. Beyond stability improvements, the new base has allowed us to work on the following new features:

    Gamepad Support

    Support for controller navigation has been something I’ve attempted in the past but had to give up as it proved too challenging. That’s why I was overjoyed when Zoey stepped up to work on it. In the currently open pull request, you can already launch games and navigate many parts of the UI with a controller. You can donate to her on Ko-fi if you would like to support the feature’s development. Planned future enhancements include navigating menus, remapping, and button prompts.

    Collections

    Easily the most requested feature: a lot of people asked for a way to manually organize their games. I initially rejected the idea, as I wanted Cartridges to remain a single-click game launcher, but I softened up to it over time as more and more people requested it, since it’s an optional dimension that you can just ignore if you don’t use it. As such, I’m happy to say that Jamie has been working on categorization, with an initial implementation ready for review as of writing this. You can support her on Liberapay or GitHub Sponsors.

    Instant Imports

    I mentioned that Cartridges’ main selling point is being a single-click launcher. This is as good as it gets, right? Wrong: how about zero clicks?

    The app has been reworked to be even more magical. Instead of pulling data into Cartridges from other apps at the request of the user, it will now read data directly from other apps, without the need to keep games in sync manually. You will still be able to edit the details of any game, but only these edits will be saved, meaning that if any information on, say, Steam gets updated, Cartridges will automatically reflect those changes.

    The existing app has settings to import and remove games automatically, but this has been a band-aid solution, and it will be nice to finally do this properly, saving you time and storage space, and sparing you from conflicts.

    To allow for this, I also changed the way the Steam source works to fetch all data from disk instead of making calls to Steam’s web API. This was the only source that relied on the network, so all imports should now be instant. Just install the app and all your games from anywhere on your computer appear. How cool is that? :3

    And More

    We have some ideas for the longer term involving installing games, launching more apps, and viewing more game data. There are no concrete plans for any of these, but it would be nice to turn Cartridges into a less clunky replacement for most of Steam Big Picture.

    And of course, we’ve already made many quality of life improvements and bug fixes. Parts of the interface have been redesigned to work better and look nicer. You can expect many issues to be closed once the rewrite is stable. Speaking of…

    Timeline

    We would like to have feature parity with the existing app. The new app will be released under the same name, as if it were just a regular update, so no action will be required to get the new features.

    We’re aiming to release the new version sometime next year; I’m afraid I can’t be more precise than that. It could take three more months, it could take 12. We all have our own lives and work on the app as a side project, so we’ll see how much time we can dedicate to it.

    If you would like to keep up with development, you can watch open pull requests on Codeberg targeting the rewrite branch. You can also join the Cartridges Discord server.

    Thank you again Jamie and Zoey for your efforts!

      Jakub Steiner: Dithering

      news.movim.eu / PlanetGnome • 8 December • 1 minute

    One of the new additions to the GNOME 49 wallpaper set is Dithered Sun by Tobias. It uses dithering not as a technical workaround for color banding, but as an artistic device.

    Halftone app

    Tobias initially planned to use Halftone — a great example of a GNOME app with a focused scope and a pleasantly streamlined experience. However, I suggested that a custom dithering method and finer control over color depth would help execute the idea better. A long time ago, Hans Peter Jensen responded to my request for arbitrary color-depth dithering in GIMP by writing a custom GEGL op.

    Now, since the younger generation may be understandably intimidated by GIMP’s somewhat… vintage interface, I promised to write a short guide on how to process your images to get a nice ordered dither pattern without going overboard on reducing colors. And with only a bit of time passing since the amazing GUADEC in Brescia, I’m finally delivering on that promise. Better late than later.

    GEGL dithering op

    I’ve historically used the GEGL dithering operation to work around potential color banding on lower-quality displays. In Tobias’ wallpaper, though, the dithering is a core element of the artwork itself. While it can cause issues when scaling (filtering can introduce moiré patterns), there’s a real beauty to the structured patterns of Bayer dithering.

    You will find the GEGL op in the Colors > Dither menu. The filter/op parameters don’t allow you to set the number of colors directly—only the per-channel color depth (in bits). For full-color dithers I tend to use 12-bit. I personally like the Bayer ordered dither, though there are plenty of algorithms to choose from, and depending on your artwork, another might suit you better. I usually save my preferred settings as a preset for easier recall next time (find Presets at the top of the dialog).
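
    If you want to script this instead of clicking through GIMP, GEGL also ships a gegl command-line tool. The following is only a rough, unverified sketch: the op is gegl:dither, but double-check the chain syntax and the property names (dither-method and the per-channel level properties) against your GEGL version:

    # Hypothetical invocation; verify with `gegl --help` and the
    # gegl:dither reference for your installed version.
    # 16 levels per channel is 4 bits, i.e. a 12-bit result overall.
    gegl input.png -o dithered.png -- \
        gegl:dither dither-method=bayer \
        red-levels=16 green-levels=16 blue-levels=16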

    Happy dithering!

      Javad Rahmatzadeh: AI and GNOME Shell Extensions

      news.movim.eu / PlanetGnome • 6 December • 2 minutes

    Since I joined the extensions team, I’ve only had one goal in mind: making extension developers’ jobs easier by providing them with documentation and help.

    I started with the port guide, and then I became involved in the reviews by providing developers with code samples, mentioning best practices, and even fixing issues myself and sending merge requests. Andy Holmes and I spent a lot of time writing all the necessary documentation for extension developers. We even made the review guidelines very strict and easy to understand, with code samples.

    Today, extension developers have all the documentation they need to get started with extensions, a port guide to port their extensions, and a very friendly place on the GNOME Extensions Matrix channel to ask questions and get fast answers. Now we have a very strong community for GNOME Shell extensions that can easily overcome the difficulties of learning and of keeping up with changes.

    The number of packages submitted to EGO is growing every month, and we see more and more people joining the extensions community to create their own extensions. Some days, I spend more than 6 hours reviewing over 15,000 lines of extension code and answering the community.

    In the past two months, we have received many new extensions on EGO. This is a good thing since it can make the extensions community grow even more, but there is one issue with some packages. Some devs are using AI without understanding the code.

    This has led to packages with many unnecessary lines and bad practices. And once a bad practice is introduced in one package, it can create a domino effect, appearing in other extensions. That alone has increased the waiting time for all packages to be reviewed.

    At the start, I was really curious about the increase in unnecessary try-catch block usage in many new extensions submitted on EGO. So I asked, and they answered that it is coming from AI.

    Just to give you a gist of how this unnecessary code might look:

    destroy() {
        try {
            if (typeof super.destroy === 'function') {
                super.destroy();
            }
        } catch (e) {
            console.warn(`${e.message}`);
        }
    }

    Instead of simply calling `super.destroy()`, which you clearly know exists in the parent:

    destroy() {
        super.destroy();
    }

    At this point, we have to add a new rule to the EGO review guidelines: packages with unnecessary code that indicates they are AI-generated will be rejected.

    This doesn’t mean you cannot use AI for learning or fixing some issues. AI is a fantastic tool for learning and for helping find and fix issues. Use it for that, not for generating the entire extension. Surely, in the future, AI will be able to generate very high quality code without any unnecessary lines, but until then, if you want to start writing extensions, you can always ask us in the GNOME Extensions Matrix channel.

      Felipe Borges: One Project Selected for the December 2025 Outreachy Cohort with GNOME!

      news.movim.eu / PlanetGnome • 3 December

    We are happy to announce that the GNOME Foundation is sponsoring an Outreachy project for the December 2025 Outreachy cohort.

    Outreachy provides internships to people subject to systemic bias and impacted by underrepresentation in the tech industry where they are living.

    Let’s welcome Malika Asman! Malika will be working with Lucas Baudin on improving document signing in Papers, our document viewer.

    The new contributor will soon get their blog added to Planet GNOME, making it easy for the GNOME community to get to know them and the project that they will be working on. We would also like to thank our mentor, Lucas, for supporting Outreachy and helping new contributors enter our project.

    If you have any questions, feel free to reply to this Discourse topic or message us privately at soc-admins@gnome.org.

      Cassidy James Blaede: Looking back on GNOME in 2025—and looking forward to 2026

      news.movim.eu / PlanetGnome • 2 December • 3 minutes

    This past year has been an exceptional one for GNOME. The project shipped two excellent releases on schedule, with GNOME 48 in March and GNOME 49 in September. Contributors have been relentless in delivering a set of new and improved default apps, constant performance improvements across the board benefitting everyone (but especially lower-specced hardware), a better experience on high-end hardware like HiDPI and HDR displays, refined design and refreshed typography, all-new digital wellbeing features and parental controls improvements, improved accessibility support across the entire platform, and much more.

    Just take a look back through This Week in GNOME, where contributors provided updates on development every single week of 2025 so far. (And a huge thank you to Felix, who puts This Week in GNOME together!)

    All of these improvements were delivered for free to users of GNOME across distributions—and even beyond users of GNOME itself via GNOME apps running on any desktop thanks to Flatpak and distribution via Flathub.

    Earlier this year the GNOME Foundation also relaunched Friends of GNOME, where you can set up a small recurring donation to help fund initiatives including:

    • infrastructure freely provided to Core, Circle, and World projects
    • services for GNOME Foundation members like blog hosting, chat, and video conferencing
    • development of Flathub
    • community travel sponsorship

    While I’m proud of what GNOME has accomplished in 2025 and that the GNOME Foundation is operating sustainably, I’m personally even more excited to look ahead to what I hope the Foundation will be able to achieve in the coming year.

    Let’s Reach 1,500 Friends of GNOME

    The newly-formed fundraising committee kicked off their efforts by announcing a simple goal to close out 2025: let’s reach 1,500 Friends of GNOME! If we can reach this goal by the end of this year, it will help GNOME deliver even more in 2026; for example, by enabling the Foundation to sponsor more community travel for hackfests and conferences, and potentially even sponsoring specific, targeted development work.

    But GNOME needs your help!

    How You Can Help

    First, if you’re not already a Friend of GNOME, please consider setting up a small recurring donation at donate.gnome.org. Every little bit helps, and even a small but consistent donation is super valuable: it not only keeps the lights on at the GNOME Foundation, but enables explicit budgeting for and delivering on more interesting initiatives that directly support the community and the development of GNOME itself.

    Become a Friend of GNOME

    If you’re already a Friend of GNOME (or not able to commit to that at the moment—no hard feelings!), please consider sharing this message far and wide! I consistently hear that not only do so many users of GNOME not know that it’s a nonprofit, but they don’t know that the GNOME Foundation relies on individual donations—and that users can help out, too! Please share this post to your circles—especially outside of usual contributor spaces—to let them know the cool things GNOME does and that GNOME could use their help to be able to do even more in the coming year.

    Lastly, if you represent an organization that relies on GNOME or is invested in its continued success, please consider a corporate sponsorship. While this sponsorship comes with no strings attached, it’s a really powerful way to show that your organization supports Free and Open Source software—and puts its money where its mouth is.

    Sponsor GNOME

    Thank You!

    Thank you again to all of the dedicated contributors to GNOME making everyone’s computing experience that much better. As we close out 2025, I’m excited by the prospect of the GNOME Foundation being able to not just be sustainable, but—with your help—to take an even more active role in supporting our community and the development of GNOME.

    And of course, thank you to all 700+ current Friends of GNOME; your gracious support has helped GNOME achieve everything in 2025 while ensuring the sustainability of the Foundation going forward. Let’s see if we can close out the year with 1,500 Friends helping GNOME do even more!

      Federico Mena-Quintero: Mutation testing for librsvg

      news.movim.eu / PlanetGnome • 1 December • 10 minutes

    I was reading a blog post about the testing strategy for the Wild linker, when I came upon a link to cargo-mutants, a mutation testing tool for Rust. The tool promised to be easy to set up, so I gave it a try. I'm happy to find that it totally delivers!

    Briefly: mutation testing deliberately inserts bugs into the source code and checks whether the test suite catches them. If all the tests still pass after an incorrect change was made, that indicates a gap in the test suite.

    Previously I had only seen mentions of "mutation testing" in passing, as something exotic to be done when testing compilers. I don't recall seeing it as a general tool; maybe I have not been looking closely enough.

    Setup and running

    Setting up cargo-mutants is easy enough: you can cargo install cargo-mutants and run it with cargo mutants.

    For librsvg this ran for a few hours, but I discovered a couple of things related to the way the librsvg repository is structured. The repo is a cargo workspace with multiple crates: the librsvg implementation and public Rust API, the rsvg-convert binary, and some utilities like rsvg-bench.

    1. By default cargo-mutants only seemed to pick up the tests for rsvg-convert. I think it may have done this because it is the only binary in the workspace that has a test suite (e.g. rsvg-bench does not have a test suite).

    2. I had to run cargo mutants --package librsvg to tell it to consider the test suite for the librsvg crate, which is the main library. I think I could have used cargo mutants --workspace to make it run all the things; maybe I'll try that next time.
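
    Collected in one place, the commands from this section:

    cargo install cargo-mutants         # one-time setup
    cargo mutants                       # by default this only picked up rsvg-convert's tests
    cargo mutants --package librsvg     # target the main library crate
    cargo mutants --workspace           # untried here: run everything in the workspace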

    Initial results

    My initial run on rsvg-convert produced useful results; cargo-mutants found 32 mutations in the rsvg-convert source code that ought to have caused failures, but the test suite didn't catch them.

    (Screenshot: the running output of cargo-mutants on the librsvg crate.)

    The second run, on the librsvg crate, took about 10 hours. It is fascinating to watch it run. In the end it found 889 mutations with bugs that the test suite couldn't catch:

    5243 mutants tested in 9h 53m 15s: 889 missed, 3663 caught, 674 unviable, 17 timeouts
    

    What does that mean?

    • 5243 mutants tested: how many modifications were tried on the code.

    • 889 missed: the important ones. After a modification was made, the test suite failed to catch it.

    • 3663 caught: good! The test suite caught these!

    • 674 unviable: these modifications didn't compile. Nothing to do.

    • 17 timeouts: worth investigating; maybe a function can be marked to be skipped for mutation.

    Starting to analyze the results

    Due to the way cargo-mutants works, the "missed" results come in an arbitrary order, spread among all the source files:

    rsvg/src/path_parser.rs:857:9: replace <impl fmt::Display for ParseError>::fmt -> fmt::Result with Ok(Default::default())
    rsvg/src/drawing_ctx.rs:732:33: replace > with == in DrawingCtx::check_layer_nesting_depth
    rsvg/src/filters/lighting.rs:931:16: replace / with * in Normal::bottom_left
    rsvg/src/test_utils/compare_surfaces.rs:24:9: replace <impl fmt::Display for BufferDiff>::fmt -> fmt::Result with Ok(Default::default())
    rsvg/src/filters/turbulence.rs:133:22: replace - with / in setup_seed
    rsvg/src/document.rs:627:24: replace match guard is_mime_type(x, "image", "svg+xml") with false in ResourceType::from
    rsvg/src/length.rs:472:57: replace * with + in CssLength<N, V>::to_points
    

    So, I started by sorting the missed.txt file from the results. This is much better:

    rsvg/src/accept_language.rs:136:9: replace AcceptLanguage::any_matches -> bool with false
    rsvg/src/accept_language.rs:136:9: replace AcceptLanguage::any_matches -> bool with true
    rsvg/src/accept_language.rs:78:9: replace <impl fmt::Display for AcceptLanguageError>::fmt -> fmt::Result with Ok(Default::default())
    rsvg/src/angle.rs:40:22: replace < with <= in Angle::bisect
    rsvg/src/angle.rs:41:56: replace - with + in Angle::bisect
    rsvg/src/angle.rs:49:35: replace + with - in Angle::flip
    rsvg/src/angle.rs:57:23: replace < with <= in Angle::normalize
    

    With the sorted results, I can clearly see how cargo-mutants gradually does its modifications on (say) all the arithmetic and logic operators to try to find changes that would not be caught by the test suite.

    Look at the first two lines from above, the ones that refer to AcceptLanguage::any_matches:

    rsvg/src/accept_language.rs:136:9: replace AcceptLanguage::any_matches -> bool with false
    rsvg/src/accept_language.rs:136:9: replace AcceptLanguage::any_matches -> bool with true
    

    Now look at the corresponding lines in the source:

    ... impl AcceptLanguage {
    135     fn any_matches(&self, tag: &LanguageTag) -> bool {
    136         self.iter().any(|(self_tag, _weight)| tag.matches(self_tag))
    137     }
    ... }
    

    The two lines from missed.txt mean that if the body of this any_matches() function were replaced with just true or false, instead of its actual work, there would be no failed tests:

    135     fn any_matches(&self, tag: &LanguageTag) -> bool {
    136         false // or true, either version wouldn't affect the tests
    137     }
    

    This is bad! It indicates that the test suite does not check that this function, or the surrounding code, is working correctly. And yet, the test coverage report for those lines shows that they are indeed getting executed by the test suite. What is going on?

    I think this is what is happening:

    • The librsvg crate's tests do not have tests for AcceptLanguage::any_matches .
    • The rsvg_convert crate's integration tests do have a test for its --accept-language option, and that is what causes this code to get executed and shown as covered in the coverage report.
    • This run of cargo-mutants was just for the librsvg crate, not for the integrated librsvg plus rsvg_convert.

    Getting a bit pedantic about the purpose of tests: rsvg-convert assumes that the underlying librsvg library works correctly. The library advertises support in its API for matching based on AcceptLanguage, even though it doesn't test it internally.

    On the other hand, rsvg-convert has a test for its own --accept-language option, in the sense of "did we implement this command-line option correctly", not in the sense of "does librsvg implement the AcceptLanguage functionality correctly".
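
    A unit test along the following lines is enough to close that gap. This is a minimal sketch with illustrative constructor names; librsvg's actual parsing API may differ:

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn any_matches_depends_on_the_language_list() {
            // Hypothetical constructors; use whatever the crate actually provides.
            let accept = AcceptLanguage::parse("es-MX, en; q=0.5").unwrap();

            // A mutant that hardcodes `true` or `false` now fails one of these:
            assert!(accept.any_matches(&LanguageTag::parse("es-MX").unwrap()));
            assert!(!accept.any_matches(&LanguageTag::parse("de").unwrap()));
        }
    }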

    After adding a little unit test for AcceptLanguage::any_matches in the librsvg crate, we can run cargo-mutants just for the accept_language.rs file again:

    # cargo mutants --package librsvg --file accept_language.rs
    Found 37 mutants to test
    ok       Unmutated baseline in 24.9s build + 6.1s test
     INFO Auto-set test timeout to 31s
    MISSED   rsvg/src/accept_language.rs:78:9: replace <impl fmt::Display for AcceptLanguageError>::fmt -> fmt::Result with Ok(Default::default()) in 4.8s build + 6.5s test
    37 mutants tested in 2m 59s: 1 missed, 26 caught, 10 unviable
    

    Great! As expected, we just have 1 missed mutant on that file now. Let's look into it.

    The function in question is now <impl fmt::Display for AcceptLanguageError>::fmt, an error formatter for the AcceptLanguageError type:

    impl fmt::Display for AcceptLanguageError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                Self::NoElements => write!(f, "no language tags in list"),
                Self::InvalidCharacters => write!(f, "invalid characters in language list"),
                Self::InvalidLanguageTag(e) => write!(f, "invalid language tag: {e}"),
                Self::InvalidWeight => write!(f, "invalid q= weight"),
            }
        }
    }
    

    What cargo-mutants means by “replace ... -> fmt::Result with Ok(Default::default())” is that if this function were modified to just be like this:

    impl fmt::Display for AcceptLanguageError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            Ok(Default::default())
        }
    }
    

    then no tests would catch that. Now, this is just a formatter function; the fmt::Result it returns is just whether the underlying call to write!() succeeded. When cargo-mutants discovers that it can change this function to return Ok(Default::default()), it is because fmt::Result is defined as Result<(), fmt::Error>, which implements Default because the unit type () implements Default.

    In librsvg, those AcceptLanguageError errors are just surfaced as strings for rsvg-convert, so that if you give it a command-line argument with an invalid value like --accept-language=foo, it will print the appropriate error. However, rsvg-convert does not make any promises as to the content of error messages, so I think it is acceptable to not test this error formatter — just to make sure it handles all the cases, which is already guaranteed by its match statement. Rationale:

    • There already are tests to ensure that the error codes are computed correctly in the parser for AcceptLanguage; those are the AcceptLanguageError's enumeration variants.

    • There is a test in rsvg-convert's test suite to ensure that it detects invalid language tags and reports them.

    For cases like this, cargo-mutants allows marking code to be skipped. After marking this fmt implementation with #[mutants::skip], there are no more missed mutants in accept_language.rs.
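
    Applied to the formatter, the annotation is just this (assuming the mutants crate is declared as a dependency so the attribute resolves):

    impl fmt::Display for AcceptLanguageError {
        // Tell cargo-mutants not to generate mutants for this trivial formatter.
        #[mutants::skip]
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                Self::NoElements => write!(f, "no language tags in list"),
                Self::InvalidCharacters => write!(f, "invalid characters in language list"),
                Self::InvalidLanguageTag(e) => write!(f, "invalid language tag: {e}"),
                Self::InvalidWeight => write!(f, "invalid q= weight"),
            }
        }
    }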

    Yay!

    Understanding the tool

    You should absolutely read “using results” in the cargo-mutants documentation, which is very well-written. It gives excellent suggestions for how to deal with missed mutants. Again, these indicate potential gaps in your test suite. The documentation discusses how to think about what to do, and I found it very helpful.

    Then you should read about genres of mutants. It tells you the kinds of modifications that cargo-mutants makes to your code. Apart from changing individual operators to try to compute incorrect results, it also does things like replacing whole function bodies to return a different value instead. What if a function returns Default::default() instead of your carefully computed value? What if a boolean function always returns true? What if a function that returns a HashMap always returns an empty hash table, or one filled with combinations of example keys and values? That is, do your tests actually check your invariants, or your assumptions about the shape of the results of computations? It is really interesting stuff!

    Future work for librsvg

    The documentation for cargo-mutants suggests how to use it in CI, to ensure that no uncaught mutants are merged into the code. I will probably investigate this once I have fixed all the missed mutants; this will take me a few weeks at least.

    Librsvg already has the GitLab incantation to show test coverage for patches in merge requests, so it would be nice to know if the existing tests, or any newly added tests, are missing any conditions in the MR. That can be caught with cargo-mutants.

    Hackery relevant to my tests, but not to this article

    If you are just reading about mutation testing, you can ignore this section. If you are interested in the practicalities of compilation, read on!

    The source code for the librsvg crate uses a bit of conditional compilation to select whether to export functions that are used by the integration tests as well as the crate's internal tests. For example, there is some code for diffing two images, and this is used when comparing the pixel output of rendering an SVG to a reference image. For historical reasons, this code ended up in the main library, so that it can run its own internal tests, but then the rest of the integration tests also use this code to diff images. The librsvg crate exports the "diff two images" functions only if it is being compiled for the integration tests, and it doesn't export them for a normal build of the public API.

    Somehow, cargo-mutants didn't understand this, and the integration tests did not build since the cargo feature to select that conditionally-compiled code... wasn't active, or something. I tried enabling it by hand with something like cargo mutants --package librsvg -- --features test-utils but that still didn't work.

    So, I hacked up a temporary version of the source tree just for mutation testing, which always exports the functions for diffing images, without conditional compilation. In the future it might be possible to split out that code to a separate crate that is only used where needed and never exported. I am not sure how it would be structured, since that code also depends on librsvg's internal representation of pixel images. Maybe we can move the whole thing out to a separate crate? Stop using Cairo image surfaces as the way to represent pixel images? Who knows!