

      Jussi Pakkanen: A custom C++ standard library part 4: using it for real

      news.movim.eu / PlanetGnome • 16 June • 2 minutes

    Writing your own standard library is all fun and games until someone (which is to say yourself) asks the important question: could this actually be used for real? Theories and opinions can be thrown about the issue pretty much forever, but the only way to actually know for sure is to do it.

    Thus I converted CapyPDF, a fairly compact 15k LoC codebase, from the C++ standard library to Pystd, which is about 4k lines. All functionality is still the same, which is to say that the test suite passes, though there are most likely new bugs that the tests do not catch. For those wanting to replicate the results themselves, clone the CapyPDF repo, switch to the pystdport branch and start building. Meson will automatically download and set up Pystd as a subproject. The code is fairly bleeding edge and only works on Linux with GCC 15.1.
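
    For reference, the replication steps boil down to a shell session roughly like the one below. This is only a sketch: the clone URL is a placeholder (the post does not give one), and a recent Meson plus GCC 15.1 are assumed.

    git clone <capypdf-repo-url> capypdf
    cd capypdf
    git checkout pystdport      # the branch containing the Pystd port
    meson setup build           # Meson downloads and sets up Pystd as a subproject here
    ninja -C build
    meson test -C build         # the test suite mentioned above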

    Build times

    One of the original reasons for starting Pystd was being annoyed at STL compile times. Let's see if we succeeded in improving on them. Build times when using only one core in debug look like this.

    When optimizations are enabled the results look like this:

    In both cases the Pystd version compiles in about a quarter of the time.

    Binary size

    C++ gets a lot of valid criticism for creating bloated code. But how much of that is due to the language itself as opposed to the standard library?

    That's quite unexpected. The debug info for STL types seems to take an extra 20 megabytes. But how about the executable code itself?

    STL is still 200 kB bigger. Based on observations, most of this seems to come from libstdc++'s implementation of variant. Note that if you try this yourself, the Pystd version is probably 100 kB bigger, because by default the build setup links against libsupc++, which adds 100+ kB to binary sizes, whereas linking against the main C++ runtime library does not.

    Performance

    Ok, fine, so we can implement basic code to build faster and take less space. Fine. But what about performance? That is the main thing that matters after all, right? CapyPDF ships with a simple benchmark program. Let's look at its memory usage first.

    Apologies for the Y-axis not starting at zero. I tried my best to make it happen, but LibreOffice Calc said no. In any case the outcome itself is as expected. Pystd has not seen any performance optimization work, so it requiring 10% more memory is tolerable. But what about the actual runtime itself?

    This is unexpected to say the least. A reasonable result would have been to be only 2x slower than the standard library, but the code ended up being almost 25% faster. This is even stranger considering that Pystd's containers do bounds checks on all accesses, the UTF-8 parsing code sometimes validates its input twice, the hashing algorithm is a simple multiply-and-xor and so on. Pystd should be slower, and yet, in this case at least, it is not.

    I have no explanation for this.


      Sam Thursfield: Status update, 15/06/2025

      news.movim.eu / PlanetGnome • 15 June • 7 minutes

    This month I created a personal data map where I tried to list all my important digital identities.

    (It’s actually now a spreadsheet, which I’ll show you later. I didn’t want to start the blog post with something as dry as a screenshot of a spreadsheet.)

    Anyway, I made my personal data map for several reasons.

    The first reason was to stay safe from cybercrime . In a world of increasing global unfairness and inequality, of course crime and scams are increasing too. Schools don’t teach how digital tech actually works, so it’s a great time to be a cyber criminal. Imagine being a house burglar in a town where nobody knows how doors work.

    Lucky for me, I’m a professional door guy. So I don’t worry too much beyond having a really really good email password (it has numbers and letters). But it’s useful to double-check whether I have my credit card details on a site where the password is still “sam2003”.

    The second reason is to help me migrate to services based in Europe . Democracy over here is what it is, there are good days and bad days, but unlike the USA we have at least more options than a repressive death cult and a fundraising business. (Shout to @angusm@mastodon.social for that one). You can’t completely own your digital identity and your data, but you can at least try to keep it close to home.

    The third reason was to see who has the power to influence my online behaviour .

    This was an insight from reading the book Technofeudalism. I’ve always been uneasy about websites tracking everything I do. Most of us are, to the point that we have made up myths like “your phone microphone is always listening so Instagram can target adverts”. (As McSweeney’s Internet Tendency confirms, it’s not! It’s just tracking everything you type, every app you use, every website you visit, and everywhere you go in the physical world.)

    I used to struggle to explain why all that tracking feels so bad. Technofeudalism frames a concept of cloud capital, saying this is now more powerful than other kinds of capital because cloud capitalists can do something Henry Ford, Walt Disney and The Monopoly Guy could only dream of: mine their data stockpile to produce precisely targeted recommendations, search bubbles and adverts which can influence your behaviour before you’ve even noticed.

    This might sound paranoid when you first hear it, but consider how social media platforms reward you for expressing anger and outrage. Remember the first time you saw a post on Twitter from a stranger that you disagreed with? And your witty takedown attracted likes and praise? This stuff can be habit-forming.

    In the 20th century, ad agencies changed people’s buying patterns and political views using billboards, TV channels and newspapers. But all that is like a primitive blunderbuss compared to recommendation algorithms, feedback loops and targeted ads on social media and video apps.

    I lived through the days when a web search for “Who won the last election” would just return 10 pages that included the word “election”. (If you’re nostalgic for those days… you’ll be happy to know that GNOME’s desktop search engine still works like that today! 🙂) I can spot when apps try to ‘nudge’ me with dark patterns. But kids aren’t born with that skill, and they aren’t necessarily going to understand the nature of Tech Billionaire power unless we help them to see it. We need a framework to think critically and discuss the power that Meta, Amazon and Microsoft have over everyone’s lives. Schools don’t teach how digital tech actually works, but maybe a “personal data map” can be a useful teaching tool?

    By the way, here’s what my cobbled-together “Personal data map” looks like, taking into account security, what data is stored and who controls it. (With some fake data… I don’t want this blog post to be a “How to steal my identity” guide.)

    | Name | Risks | Sensitivity rating | Ethical rating | Location | Controller | First factor | Second factor | Credentials cached? | Data stored |
    |------|-------|--------------------|----------------|----------|------------|--------------|---------------|---------------------|-------------|
    | Bank account | Financial loss | 10 | 2 | Europe | Bank | Fingerprint | None | On phone | Money, transactions |
    | Instagram | Identity theft | 5 | -10 | USA | Meta | Password | Email | On phone | Posts, likes, replies, friends, views, time spent, locations, searches. |
    | Google Mail (sam@gmail.com) | Reset passwords | 9 | -5 | USA | Google | Password | None | Yes – cookies | Conversations, secrets |
    | Github | Impersonation | 3 | 3 | USA | Microsoft | Password | OTP | Yes – cookies | Credit card, projects, searches. |

    How is it going migrating off USA based cloud services?

    “The internet was always a project of US power”, says Paris Marx in a keynote at the PublicSpaces conference, which I had never heard of before.

    Closing my Amazon account took an unnecessary number of steps, and it was sad to say goodbye to the list of 12 different addresses I have called home at various times since 2006, but I don’t miss it; I’ve been avoiding Amazon for years anyway. When I need English-language books, I get them from an Irish online bookstore named Kenny’s. (Ireland, cleverly, did not leave the EU, so they can still ship books to Spain without incurring import taxes).

    Dropbox took a while because I had years of important stuff in there. I actually don’t think they’re too bad of a company, and it was certainly quick to delete my account. (And my data… right? You guys did delete all my data?).

    I was using Dropbox to sync notes with the Joplin notes app, and switched to the paid Joplin Cloud option , which seems a nice way to support a useful open source project.

    I still needed a way to store sensitive data, and realized I have access to Protondrive. I can’t recommend that as a service because the parent company Proton AG don’t seem so serious about Linux support, but I got it to work thanks to some heroes who added a protondrive backend to rclone .
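
    For anyone wanting the same setup, the workflow looks roughly like the sketch below. The remote name and paths are made up for illustration, and a recent rclone with the Proton Drive backend is assumed; this isn’t from the original post.

    rclone config                                # interactive: add a new remote and pick the protondrive backend
    rclone mkdir proton:backups
    rclone sync ~/Private proton:backups/private --progress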

    Instead of using Google cloud services to share photos, and to avoid anything so primitive as an actual cable, I learned that KDE Connect can transfer files from my Android phone to my laptop really neatly. KDE Connect is really good. On the desktop I use GSConnect, which integrates with GNOME Shell really well. I think I’ve not been so impressed by a volunteer-driven open source project in years. Thanks to everyone who worked on these great apps!

    I also migrated my VPS from a US-based host, Tornado VPS, to one in Europe. Tornado VPS (formerly prgmr.com) are a great company, but storing data in the USA doesn’t seem like the way forwards.

    That’s about it so far. Feels a bit better.

    What’s next?

    I’m not sure what’s next!

    I can’t leave Github and Gitlab.com, but my days of “Write some interesting new code and push it straight to Github” are long gone. I didn’t sign up to train somebody else’s LLM for free, and neither should you. (I’m still interested in sharing interesting code with nice people, of course, but let’s not make it so easy for Corporate America to take our stuff without credit or compensation. Bring back the “sneakernet”!)

    Leaving Meta platforms and dropping YouTube doesn’t feel directly useful. It’s like individually renouncing debit cards, or air travel: a lot of inconvenience for you, but the business owners don’t even notice. The important thing is to use the alternatives more . Hence why I still write a blog in 2025 and mostly read RSS feeds and the Fediverse. Gigs where I live are mostly only promoted on Instagram, but I’m sure that’s temporary.

    In the first quarter of 2025, rich people put more money into AI startups than everything else put together (see: Pivot to AI ). Investors love a good bubble, but there’s also an element of power here.

    If programmers only know how to write code using Copilot, then Microsoft have the power to decide what code we can and can’t write. (Currently this seems limited to not using the word ‘gender’. But I can imagine a future where it catches you reverse-engineering proprietary software, or jailbreaking locked-down devices, or trying to write a new BitTorrent client.)

    If everyone gets their facts from ChatGPT, then OpenAI have the power to tweak everyone’s facts, an ability that is currently limited only to presidents of major world superpowers. If we let ourselves avoid critical thinking and rely on ChatGPT to generate answers to hard questions instead, which teachers say is very much exactly what’s happening in schools now … then what?


      Toluwaleke Ogundipe: Hello GNOME and GSoC!

      news.movim.eu / PlanetGnome • 14 June • 1 minute

    I am delighted to announce that I am contributing to GNOME Crosswords as part of the Google Summer of Code 2025 program. My project primarily aims to add printing support to Crosswords, with some additional stretch goals. I am being mentored by Jonathan Blandford , Federico Mena Quintero , and Tanmay Patil .

    The Days Ahead

    During my internship, I will be refactoring the puzzle rendering code to support existing and printable use cases, adding clues to rendered puzzles, and integrating a print dialog into the game and editor with crossword-specific options. Additionally, I should implement an ipuz2pdf utility to render puzzles in the IPUZ format to PDF documents.

    Beyond the internship, I am glad to be a member of the GNOME community and look forward to so much more. In the coming weeks, I will be sharing updates about my GSoC project and other contributions to GNOME. If you are interested in my journey with GNOME and/or how I got into GSoC, I implore you to watch out for a much longer post coming soon.

    Appreciation

    Many thanks to Hans Petter Jansson , Federico Mena Quintero and Jonathan Blandford , who have all played major roles in my journey with GNOME and GSoC. 🙏 ❤


      Ignacy Kuchciński: Taking out the trash, or just sweeping it under the rug? A story of leftovers after removing files

      news.movim.eu / PlanetGnome • 14 June • 6 minutes

    There are many things that we take for granted in this world, and one of them is undoubtedly the ability to clean up your files – imagine a world where you can’t just throw away all those disk-space-hungry things that you no longer find useful. Though that might sound impossible, it turns out some people have encountered a particularly interesting bug that resulted in silently sweeping the Trash under the rug instead of emptying it in Nautilus. Since I was blessed to run into that issue myself, I decided to fix it and shed some light on the fun.

    Trash after emptying in Nautilus, are the files really gone?

    It all started with a 2009 Ubuntu Launchpad ticket, reported against Nautilus. The user found 70 GB worth of files with a disk analyzer in the ~/.local/share/Trash/expunged directory, even though they had emptied the trash with the graphical interface. They did realize the offending files belonged to another user; however, they couldn’t reproduce it easily at first. After all, when you try to move to trash a file or a directory not belonging to you, you would usually be correctly informed that you don’t have the necessary permissions, and perhaps even offered the option to permanently delete them instead. So what was so special about this case?

    First let’s get a better view of when we can and when we can’t permanently delete files, something that is done at the end of a successful trash emptying operation. We’ll focus only on the owners of relevant files, since other factors, such as file read/write/execute permissions, can be adjusted freely by their owners, and that’s what trash implementations will do for you. Here are cases where you CAN delete files:

    – when a file is in a directory owned by you, you can always delete it
    – when a directory is in a directory owned by you and it’s owned by you, you can obviously delete it
    – when a directory is in a directory owned by you but you don’t own it, and it’s empty, you can surprisingly delete it as well

    So to summarize, no matter who the owner of a file or a directory is, if it’s in a directory owned by you, you can get rid of it. There is one exception to this – the directory must be empty; otherwise, you will not be able to remove it or the files it contains. Which takes us to an analogous list for cases where you CANNOT delete files:

    – when a directory is in a directory owned by you but you don’t own it, and it’s not empty, you can’t delete it
    – when a file is in a directory NOT owned by you, you can’t delete it
    – when a directory is in a directory NOT owned by you, you can’t delete it either

    In contrast with removing files in a directory you own, when you are not the owner of the parent directory, you cannot delete any of the child files and directories, without exceptions. This is actually the reason for the one case where you can’t remove something from a directory you own – to remove a non-empty directory, you first need to recursively delete all of the files and directories it contains, and you can’t do that if the directory is not owned by you.
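
    A quick way to see these rules in action is a scratch directory on a local filesystem. This is only a sketch of the cases above, not something from the original post:

    mkdir -p mine
    sudo mkdir mine/empty-root-dir       # owned by root, inside a directory you own
    rmdir mine/empty-root-dir            # works: the parent is yours and the directory is empty
    sudo mkdir mine/full-root-dir
    sudo touch mine/full-root-dir/file
    rm -r mine/full-root-dir             # fails: with no write permission inside the root-owned
                                         # directory, its contents can't be removed, so neither can it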

    Now let’s look inside the trash can, or rather at how it functions. The reason for separating the permanent-deletion and trashing operations is obvious – users are expected to change their mind and be able to get their files back on a whim, so there’s a need for a middle step. That’s where the Trash specification comes in, providing a common way in which all “Trash can” implementations should store, list, and restore trashed files, even across different filesystems – the Nautilus Trash feature is one of the possible implementations. Trashing actually works by moving files to the $XDG_DATA_HOME/Trash/files directory and setting up some metadata to track their original location, to be able to restore them if needed. Only when the user empties the trash are they actually deleted. Since it’s all about moving files, specifically outside their previous parent directory (i.e. to Trash), let’s look at cases where you CAN move files:

    – when a file is in a directory owned by you, you can move it
    – when a directory is in a directory owned by you and you own it, you can obviously move it

    We can see that the only exception when moving files in a directory you own is when the directory you’re moving doesn’t belong to you, in which case you will be correctly informed that you don’t have permissions. In the remaining cases, users are able to move files and therefore trash them. Now what about the cases where you CANNOT move files?

    – when a directory is in a directory owned by you but you don’t own it, you can’t move it
    – when a file is in a directory NOT owned by you, you can’t move it either
    – when a directory is in a directory NOT owned by you, you still can’t move it

    In those cases Nautilus will either not expose the ability to trash files, or will tell the user about the error, and the system works as intended – even if moving them were possible, permanently deleting files in a directory not owned by you is not supported anyway.
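
    The same scratch-directory approach illustrates the move rules, assuming everything stays on one filesystem so that mv uses rename() rather than copy-and-delete. On Linux the directory case fails because renaming a directory into a different parent needs write permission on the directory itself, to update its “..” entry:

    mkdir -p mine other
    sudo touch mine/root-file            # a file you don't own, inside a directory you do
    sudo mkdir mine/root-dir             # a directory you don't own, inside a directory you do
    mv mine/root-file other/             # works: for files, only the parent directories matter
    mv mine/root-dir other/              # fails with EACCES: no write permission on the directory
                                         # being moved, so its ".." entry can't be rewritten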

    So, where’s the catch? What are we missing? We’ve got two different operations that can succeed or fail under different circumstances: moving (trashing) and deleting. We need to find a situation where moving is possible but deleting is not, and such an overlap exists, created by chaining the following two rules:

    – when a directory A is in a directory owned by you and it’s owned by you, you can obviously move it
    – when a directory B is in a directory A owned by you but you don’t own it, and it’s not empty, you can’t delete it

    So a simple way to reproduce the bug was found, precisely:

    mkdir -p test/root
    touch test/root/file
    sudo chown root:root test/root

    Afterwards, trashing and emptying in Nautilus or with the gio trash command will result in the files not being deleted, but left behind in ~/.local/share/Trash/expunged, which is used by gvfsd-trash as an intermediary during the emptying operation. The situations where this can happen are very rare, but they do exist – personally, I encountered it when manually cleaning container files created by podman in ~/.local/share/containers, which I arguably shouldn’t be doing in the first place, and should rather leave up to podman itself. Nevertheless, it’s still possible from the user perspective, and should be handled and prevented correctly. That’s exactly what was done: a ticket was submitted and moved to the appropriate place, which turned out to be glib itself, and I submitted an MR that was merged – now both Nautilus and gio trash will recursively check for this case and prevent you from doing it. You can expect this in the next glib release, 2.85.1.

    On an ending note, I want to thank the glib maintainer Philip Withnall, who walked me through the required changes and reviewed them, and to ask you one thing: is your ~/.local/share/Trash/expunged really empty? 🙂


      Steven Deobald: 2025-06-14 Foundation Report

      news.movim.eu / PlanetGnome • 14 June • 4 minutes

    These weeks are going by fast and I’m still releasing these reports after the TWIG goes out. Weaker humans than I might be tempted to automate — but don’t worry! These will always be artisanal, hand-crafted, single-origin, uncut, and whole bean. Felix encouraged me to add these to the following week’s TWIG, at least, so I’ll start doing that.

    ## Opaque Stuff

    • a few policy decisions are in-flight with the Board — productive conversations happening on all fronts, and it feels really good to see them moving forward

    ## Elections

    Voting closes in 5 days (June 19th). If you haven’t voted yet, get your votes in!

    ## GUADEC

    Planning for GUADEC is chugging along. Sponsored visas, flights, and hotels are getting sorted out.

    If you have a BoF or workshop proposal, get it in before tomorrow !

    ## Operations

    Our yearly CPA review is finalized. Tax filings and 990 prep are in flight.

    ## Infrastructure

    You may have seen our infrastructure announcement on social media earlier this week. This closes a long chapter of transitioning to AWS for GNOME’s essential services. A number of people have asked me if our setup is now highly AWS-specific. It isn’t. The vast majority of GNOME’s infrastructure runs on vanilla Linux and OpenShift. AWS helps our infrastructure engineers scale our services. They’re also generously donating the cloud infrastructure to the Foundation to support the GNOME project.

    ## Fundraising

    Over the weekend, I booted up a couple of volunteer developers to help with a sneaky little project we kicked off last week. As Julian, Pablo, Adrian, and Tobias have told me: No Promises… so I’m not making any. You’ll see it when you see it. 🙂 Hopefully in a few days. This has been the biggest focus of the Foundation over the past week-and-a-half.

    Many thanks to the other folks who’ve been helping with this little initiative. The Foundation could really use some financial help soon, and this project will be the base we build everything on top of.

    ## Meeting People

    Speaking of fundraising, I met Loren Crary of the Python Foundation! She is extremely cool and we found out that we both somehow descended on the term “gentle nerds”, each thinking we coined it ourselves. I first used this term in my 2015 Rootconf keynote . She’s been using it for ages, too. But I didn’t originally ask for her help with terminology. I went to her to sanity-check my approach to fundraising and — hooray! — she tells me I’m not crazy. Semi-related: she asked me if there are many books on GNOME and I had to admit I’ve never read one myself. A quick search shows me Mastering GNOME: A Beginner’s Guide and The Linux GNOME Desktop For Dummies . Have you ever read a book on GNOME? Or written one?

    I met Jorge Castro (of CNCF and Bazzite fame), a friend of Matt Hartley. We talked October GNOME, Wayland, dconf, KDE, Kubernetes, Fedora, and the fact that the Linux desktop is the true UI to cloud-native …everything. He also wants us to be co-conspirators and I’m all about it. It had never really occurred to me that the ubiquity of dconf means GNOME is actually highly configurable, since I tend to eat the default GNOME experience (mostly), but it’s a good point. I told him a little story: the first Linux desktop experience that outstripped both Windows and MacOS for me was on a company-built RHEL machine back in 2010. Linux has been better than commercial operating systems for 15 years and the gap keeps widening. The Year of The Linux Desktop was a decade ago… just take the W.

    I had a long chat with Tobias and, among other things, we discussed the possibility of internal conversation spaces for Foundation Members and the possibility of a project General Assembly. Both nice ideas.

    I met Alejandro and Ousama from Slimbook. It was really cool to hear what their approach to the market is, how they ensure Linux and GNOME run perfectly on their hardware, and where their devices go. (They sell to NASA!) We talked about improving upstream communications and ways for the Foundation to facilitate that. We’re both hoping to get more Slimbooks in the hands of more developers.

    We had our normal Board meeting. Karen gave me some sage advice on fundraising campaigns and grants programs.

    ## One-Month Feedback Session

    I had my one-month feedback session with Rob and Allan, who are President and Vice-President at the moment, respectively. (And thus, my bosses.)

    Some key take-aways are that they’d like me to increase my focus on the finances and try to make my community outreach a little more sustainable by being less verbose. Probably two sides of the same coin, there. 🙂 I’ve already shifted my focus toward finances as of two weeks ago… which may mean you’ve seen less of me in Matrix and other community spaces. I’m still around! I just have my nose in a spreadsheet or something.

    They said some nice stuff, too, but nobody gets better by focusing on the stuff they’re already doing right.


      Thibault Martin: TIL that htop can display more useful metrics

      news.movim.eu / PlanetGnome • 13 June • 1 minute

    A program on my Raspberry Pi was reading data on disk, performing operations, and writing the result on disk. It did so at an unusually slow speed. The problem could either be that the CPU was too underpowered to perform the operations it needed or the disk was too slow during read, write, or both.

    I asked colleagues for opinions, and one of them mentioned that htop could orient me in the right direction. The time a CPU spends waiting for an I/O device such as a disk is known as the I/O wait. If that wait time is higher than 10%, the CPU spends a lot of time waiting for data from the I/O device, so the disk is likely the bottleneck. If the wait time remains low, then the CPU is likely the bottleneck.

    By default htop doesn't show the wait time. By pressing F2 I can access htop's configuration. There I can use the right arrow to move to the Display options, select Detailed CPU time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest) , and press Space to enable it.

    I can then press the left arrow to get back to the options menu, and move to Meters . Using the right arrow I can go to the rightmost column, select CPUs (1/1): all CPUs by pressing Enter, move it to one of the two columns, and press Enter when I'm done. With it still selected, I can press Enter to alternate through the different visualisations. The most useful to me is the [Text] one.

    I can do the same with Disk IO to track the global read / write speed, and Blank to make the whole set-up more readable.

    With htop configured like this, I can trigger my slow program again and see that the CPU is not waiting for the disk: all CPUs have a wa of 0%.
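
    For a quick cross-check outside htop (not from the original post, and assuming a Linux /proc filesystem plus the procps vmstat tool), the same counters can be read directly:

    grep '^cpu ' /proc/stat    # the 5th number after "cpu" is cumulative iowait time, in jiffies
    vmstat 1 5                 # the "wa" column is the same I/O wait percentage htop displays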

    If you know more useful tools I should know about when chasing bottlenecks, or if you think I got something wrong, please email me at thib@ergaster.org!


      Lennart Poettering: ASG! 2025 CfP Closes Tomorrow!

      news.movim.eu / PlanetGnome • 12 June

    The All Systems Go! 2025 Call for Participation Closes Tomorrow!

    The Call for Participation (CFP) for All Systems Go! 2025 will close tomorrow, on the 13th of June! We’d like to invite you to submit your proposals for consideration to the CFP submission site quickly!


      Andy Wingo: whippet in guile hacklog: evacuation

      news.movim.eu / PlanetGnome • 11 June • 1 minute

    Good evening, hackfolk. A quick note this evening to record a waypoint in my efforts to improve Guile’s memory manager.

    So, I got Guile running on top of the Whippet API. This API can be implemented by a number of concrete garbage collector implementations. The implementation backed by the Boehm collector is fine, as expected. The implementation that uses the bump-pointer-allocation-into-holes strategy is less good. The minor reason is heap sizing heuristics; I still get it wrong about when to grow the heap and when not to do so. But the major reason is that non-moving Immix collectors appear to have pathological fragmentation characteristics.

    Fragmentation, for our purposes, is memory under the control of the GC which was free after the previous collection, but which the current cycle failed to use for allocation. I have the feeling that for the non-moving Immix-family collector implementations, fragmentation is much higher than for size-segregated freelist-based mark-sweep collectors. For an allocation of, say, 1024 bytes, the collector might have to scan over many smaller holes until it finds a hole that is big enough. This wastes free memory. Fragmentation memory is not gone—it is still available for allocation!—but it won’t be allocatable until after the current cycle, when we visit all holes again. In Immix, fragmentation wastes allocatable memory during a cycle, hastening collection and causing more frequent whole-heap traversals.

    The value proposition of Immix is that if there is too much fragmentation, you can just go into evacuating mode, and probably improve things. I still buy it. However I don’t think that non-moving Immix is a winner. I still need to do more science to know for sure. I need to fix Guile to support the stack-conservative, heap-precise version of the Immix-family collector which will allow for evacuation.

    So that’s where I’m at: a load of gnarly Guile refactors to allow for precise tracing of the heap. I probably have another couple weeks left until I can run some tests. Fingers crossed; we’ll see!


      Alireza Shabani: Why GNOME’s Translation Platform Is Called “Damned Lies”

      news.movim.eu / PlanetGnome • 11 June • 2 minutes

    Damned Lies is the name of GNOME’s web application for managing localization (l10n) across its projects. But why is it named like this?

    Damned Lies about GNOME

    Screenshot of Gnome Damned Lies from Google search with the title: Damned Lies about GNOME

    On the About page of GNOME’s localization site, the only explanation given for the name Damned Lies is a link to a Wikipedia article called “Lies, damned lies, and statistics”.

    “Damned Lies” comes from the saying “Lies, damned lies, and statistics”, a 19th-century phrase used to describe the persuasive power of statistics to bolster weak arguments, as described on Wikipedia. One of its earliest known uses appeared in an 1891 letter to the National Observer, which categorised lies into three types:

    “Sir, —It has been wittily remarked that there are three kinds of falsehood: the first is a ‘fib,’ the second is a downright lie, and the third and most aggravated is statistics. It is on statistics and on the absence of statistics that the advocate of national pensions relies …”

    To find out more, I asked in GNOME’s i18n Matrix room, and Alexandre Franke helped a lot. He said:

    Stats are indeed lies, in many ways.
    Like if GNOME 48 gets 100% translated in your language on Damned Lies, it doesn’t mean the version of GNOME 48 you have installed on your system is 100% translated, because the former is a real time stat for the branch and the latter is a snapshot (tarball) at a specific time.
    So 48.1 gets released while the translation is at 99%, and then the translators complete the work, but you won’t get the missing translations until 48.2 gets released.
    Works the other way around: the translation is at 100% at the time of the release, but then there’s a freeze exception and the stats go 99% while the released version is at 100%.
    Or you are looking at an old version of GNOME for which there won’t be any new release, which wasn’t fully translated by the time of the latest release, but then a translator decided that they wanted to see 100% because the incomplete translation was not looking as nice as they’d like, and you end up with Damned Lies telling you that version of GNOME was fully translated when it never was and never will be.
    All that to say that translators need to learn to work smart, at the right time, on the right modules, and not focus on the stats.

    So there you have it: Damned Lies is a name that reminds us that numbers and statistics can be misleading, even on GNOME’s l10n web application.