
      Thibault Martin: TIL that Git can locally ignore files

      news.movim.eu / PlanetGnome • 3 days ago • 1 minute

    When editing markdown, I love using Helix (best editor in the world). I rely on three language servers to help me do it:

    • rumdl to check markdown syntax and enforce the rules decided by a project
    • marksman to get assistance when creating links
    • harper-ls to check for spelling or grammar mistakes

    All of these are configured in my ~/.config/helix/languages.toml configuration file, so it applies globally to all the markdown I edit. But when I edit This Week In Matrix at work, things are different.

    To produce those posts, community members report their progress in a Matrix room; we collect the reports into a markdown file that we then editorialize. This is a perfect fit for Helix (best editor in the world) and its language servers.

    Helix has two features that make it a particularly good fit for the job:

    1. The diagnostics view
    2. Jumping to next error with ]d

    It is possible to filter pickers, but it becomes tedious to do so. For this project specifically, I want to disable harper-ls entirely. Helix supports per-project configuration: you create a .helix/languages.toml file at the project's root.
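For this project, such an override might look like the following (a sketch: the `[[language]]` table and `language-servers` key follow Helix's languages.toml format, and the server list simply mirrors the global setup described above, minus harper-ls):

```toml
# .helix/languages.toml: project-local override, takes precedence over
# ~/.config/helix/languages.toml for files inside this project.
[[language]]
name = "markdown"
# Keep the syntax and link helpers, drop the grammar checker for this repo.
language-servers = ["rumdl", "marksman"]
```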

    It's a good solution to override my default config, but now I have an extra .helix directory that git wants to track. I could add it to the .gitignore, but that would also add it to everyone else's .gitignore, even if they don't use Helix (best editor in the world) yet.

    It turns out that there is a local-only equivalent to .gitignore, and it's .git/info/exclude. The syntax is the same as .gitignore, but it's not committed.
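A minimal sketch of the workflow, assuming you run it from the repository root:

```shell
# Ignore Helix's project-local config without touching the shared .gitignore.
# .git/info/exclude takes the same patterns as .gitignore, but lives only in
# your local clone and is never committed.
echo ".helix/" >> .git/info/exclude

# Verify: exit status 0 means git now ignores the path
git check-ignore -q .helix/languages.toml && echo ignored
```

git check-ignore exits 0 when a path is ignored, so it's a quick way to confirm the pattern took effect.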

    I can't believe I didn't need this earlier in my life.


      Thibault Martin: TIL that you can filter Helix pickers

      news.movim.eu / PlanetGnome • 3 days ago

    Helix has a system of pickers: pop-up windows for opening files, or for browsing diagnostics coming from a Language Server.

    The diagnostics picker displays data in columns:

    • severity
    • source
    • code
    • message

    Sometimes it can get very crowded, especially when you have plenty of hints but few actual errors. I didn't know it, but Helix supports filtering in pickers!

    By typing %severity WARN I only get warnings. I can even shorten it to %se (and not %s, since source also starts with an s). The full syntax is well documented in the pickers documentation.


      Andy Wingo: the value of a performance oracle

      news.movim.eu / PlanetGnome • 3 days ago • 5 minutes

    Over on his excellent blog, Matt Keeter posts some results from having ported a bytecode virtual machine to tail-calling style. He finds that his tail-calling interpreter written in Rust beats his switch-based interpreter, and even beats hand-coded assembly on some platforms.

    He also compares tail-calling versus switch-based interpreters on WebAssembly, and concludes that performance of tail-calling interpreters in Wasm is terrible:

    1.2× slower on Firefox, 3.7× slower on Chrome, and 4.6× slower in wasmtime. I guess patterns which generate good assembly don't map well to the WASM stack machine, and the JITs aren't smart enough to lower it to optimal machine code.

    In this article, I would like to argue the opposite: patterns that generate good assembly map just fine to the Wasm stack machine, and the underperformance of V8, SpiderMonkey, and Wasmtime is an accident.

    some numbers

    I re-ran Matt’s experiment locally on my x86-64 machine (AMD Ryzen Threadripper PRO 5955WX). I tested three toolchains:

    • Compiled natively via cargo / rustc

    • Compiled to WebAssembly, then run with Wasmtime

    • Compiled to WebAssembly, then run with Wastrel

    For each of these toolchains, I tested Raven as implemented in Rust in both “switch-based” and “tail-calling” modes. Additionally, Matt has a Raven implementation written directly in assembly; I test this as well, for the native toolchain. All results use nightly/git toolchains from 7 April 2026.

    My results confirm Matt’s for the native and wasmtime toolchains, but wastrel puts them in context:

    Bar charts showing native, wasmtime, and wastrel scenarios testing tail-calling versus switch implementations; wasmtime slows down for tail-calling, whereas wastrel speeds up.

    We can read this chart from left to right: a switch-based interpreter written in Rust is 1.5× slower than a tail-calling interpreter, and the tail-calling interpreter just about reaches the speed of hand-written assembler. (Testing on AArch64, Matt even sees the tail-calling interpreter beating his hand-written assembler.)

    Then moving to WebAssembly run using Wasmtime, we see that Wasmtime takes 4.3× as much time to run the switch-based interpreter, compared to the fastest run from the hand-written assembler, and worse, actually shows 6.5× overhead for the tail-calling interpreter. Hence Matt’s conclusions: there must be something wrong with WebAssembly.

    But if we compare to Wastrel, we see a different story: Wastrel runs the basic interpreter with 2.4× overhead, and the tail-calling interpreter improves on this marginally with a 2.3× overhead. Now, granted, two-point-whatever-× is not one; Matt’s Raven VM still runs slower in Wasm than when compiled natively. Still, a tail-calling interpreter is inherently a pretty good idea.

    where does the time go

    When I think about it, there’s no reason that the switch-based interpreter should be slower when compiled via Wastrel than when compiled via rustc. Memory accesses via Wasm should actually be cheaper due to 32-bit pointers, and all the rest of it should be pretty much the same. I looked at the assembly that Wastrel produces and I see most of the patterns that I would expect.

    I do see, however, that Wastrel repeatedly reloads a struct memory value, containing the address (and size) of main memory. I need to figure out a way to keep this value in registers. I don’t know what’s up with the other Wasm implementations here; for Wastrel, I get 98% of time spent in the single interpreter function, and surely this is bread-and-butter for an optimizing compiler such as Cranelift. I tried pre-compilation in Wasmtime but it didn’t help. It could be that there is a different Wasmtime configuration that allows for higher performance.

    Things are more nuanced for the tail-calling VM. When compiling natively, Matt is careful to use a preserve_none calling convention for the opcode-implementing functions, which allows LLVM to allocate more registers to function parameters; this is just as well, as it seems that his opcodes have around 9 parameters. Wastrel currently uses GCC’s default calling convention, which only has 6 registers for non-floating-point arguments on x86-64, leaving three values to be passed via global variables (described here); this obviously will be slower than the native build. Perhaps Wastrel should add the equivalent annotation to tail-calling functions.

    On the one hand, Cranelift (and V8) are a bit more constrained than Wastrel by their function-at-a-time compilation model that privileges latency over throughput; and as they allow Wasm modules to be instantiated at run-time, functions are effectively closures, in which the “instance” is an additional hidden dynamic parameter. On the other hand, these compilers get to choose an ABI; last I looked into it, SpiderMonkey used the equivalent of preserve_none , which would allow it to allocate more registers to function parameters. But it doesn’t: you only get 6 register arguments on x86-64, and only 8 on AArch64. Something to fix, perhaps, in the Wasm engines, but also something to keep in mind when making tail-calling virtual machines: there are only so many registers available for VM state.

    the value of time

    Well friends, you know us compiler types: we walk a line between collegial and catty. In that regard, I won’t deny that I was delighted when I saw the Wastrel numbers coming in better than Wasmtime! Of course, most of the credit goes to GCC; Wastrel is a relatively small wrapper on top.

    But my message is not about the relative worth of different Wasm implementations. Rather, it is that performance oracles are a public good: a fast implementation of a particular algorithm is of use to everyone who uses that algorithm, whether they use that implementation or not.

    This happens in two ways. Firstly, faster implementations advance the state of the art, and through competition-driven convergence will in time result in better performance for all implementations. Someone in Google will see these benchmarks, turn them into an OKR, and golf their way to a faster web and also hopefully a bonus.

    Secondly, there is a dialectic between the state of the art and our collective imagination of what is possible, and advancing one will eventually ratchet the other forward. We can forgive the conclusion that “patterns which generate good assembly don’t map well to the WASM stack machine” as long as Wasm implementations fall short; but having shown that good performance is possible, our toolkit of applicable patterns in source languages also expands to new horizons.

    Well, that is all for today. Until next time, happy hacking!


      Michael Meeks: 2026-04-06 Monday

      news.movim.eu / PlanetGnome • 4 days ago

    • Up early, out for a run with J. poked at mail, and web bits. hmm. Contemplated the latest barrage of oddity from TDF.
    • Set to cutting out cardboard pieces to position tools in the workshop with H. spent quite some time re-arranging the workshop to try to fit everything in. Binned un-needed junk, moved things around happily. Mended an old light - a wedding present from a friend.
    • Relaxed with J. and watched Upload in the evening, tired.

      Jussi Pakkanen: Sorting performance rabbit hole

      news.movim.eu / PlanetGnome • 4 days ago • 2 minutes

    In an earlier blog post we found out that Pystd's simple sorting algorithm implementations were 5-10% slower than their stdlibc++ counterparts. The obvious follow up nerd snipe is to ask "can we make the Pystd implementation faster than stdlibc++?"

    For all tests below, the data set was 10 million consecutive 64-bit integers shuffled into a random order. The order was the same for all algorithms.

    Stable sort

    It turns out that the answer for stable sorting is "yes, surprisingly easily". I made a few obvious tweaks (whose details I don't even remember any more) and got the runtime down to 0.86 seconds. This is approximately 5% faster than std::stable_sort. Done. Onwards to unstable sort.

    Unstable sort

    This one was not, as they say, a picnic. I suspect that stdlib developers have spent more time optimizing std::sort than std::stable_sort simply because it is used a lot more.

    After all the improvements I could think of were done, Pystd's implementation was consistently 5-10% slower. At this point I started cheating and examined how stdlibc++'s implementation worked to see if there were any optimization ideas to steal. Indeed there were, but they did not help.

    Pystd's insertion sort moves elements by pairwise swaps. Stdlibc++ does it by moving the last item to a temporary, shifting the array elements onwards and then moving the stored item to its final location. I implemented that. It made things slower.

    Stdlibc++'s moves use memmove instead of copying (at least according to code comments).  I implemented that. It made things slower.

    Then I implemented shell sort to see if it made things faster. It didn't. It made them a lot slower.

    Then I reworked the way pivot selection is done and realized that if you do it in a specific way, some elements move to their correct partitions as a side effect of median selection. I implemented that and it did not make things faster. It did not make them slower, either, but the end result should be more resistant against bad pivot selection so I left it in.

    At some point the implementation grew a bug which only appeared with very large data sets. For debugging purposes I reduced the limit where introsort switches from quicksort to insertion sort from 16 to 8. I got the bug fixed, but the change made sorting a lot slower. As it should.

    But this raises a question: would increasing the limit from 16 to 32 make things faster? It turns out that it did. A lot. Out of all the perf improvements I implemented, this was the one that yielded the biggest improvement. By a lot. Going to 64 elements made it even faster, but that made other algorithms using insertion sort slower, so 32 it is. For now at least.

    After a few final tweaks I managed to finally beat stdlibc++. By how much you ask? Pystd's best observed time was 0.754 seconds while stdlibc++'s was 0.755 seconds. And it happened only once. But that's enough for me.


      Michael Meeks: 2026-04-05 Sunday

      news.movim.eu / PlanetGnome • 5 days ago

    • Slept badly; All Saints in the morning - Easter day. Lovely to play with H. and Jenny - fun - lots of people there.
    • Back for a big roast-lamb lunch with visiting family; tried to sleep somewhat, prepped for the evening service - preaching at the last minute too; ran that, back.
    • Caught up with S&C&boys a bit; bid them 'bye, A. staying; rested somewhat, headed to bed.

      Jakub Steiner: Japan

      news.movim.eu / PlanetGnome • 5 days ago • 1 minute

    Last year we went to Japan to finally visit friends after two decades of planning to. Because they live in Fukuoka, we only ended up visiting Hiroshima, Kyoto and Osaka afterwards. We loved it there, and as soon as cheap flights became available, we booked another one to Tokyo, to be legally allowed to cross off Japan as visited.

    Now if I were to book the trip today, I probably wouldn't. It's quite a gamble given the geopolitical situation and Asia running out of oil. But having made it there and back, this trip has been as good as the first one. Visiting only Tokyo, with a short trip to Kawaguchiko in the Sakura blooming season, worked out great.

    At the start of the year I promised myself to shoot my Fuji more. And I don't mean the volcano, I mean my X-T20. I haven't kept the promise at all, always relying on the iPhone. Luckily for this trip I didn't chicken out of carrying the extra weight, and I think it paid off. I only took my 35mm, as the desire to carry gear has really faded away over the years. As we walked over 120 km in a few days, my back didn't feel very young even with the little gear I did have.

    While the difference in quality isn't quite visible on Pixelfed or my photo website (I don't post to Instagram anymore), working through the set on a 4K display has been a pleasure. A bigger sensor is a bigger sensor.

    Check out more photos on photo.jimmac.eu -- use arrow keys or swipe to navigate the set.

    I also managed to get both of my weeklybeats tracks done on the flight so that's a bonus too!

    Japan is probably quite difficult to live in, but as a tourist you get so much to feast your eyes on. It's like another planet. I hope to find more time to draw some of the awesome little cars and signs and white tiles and electric cables everywhere.


      Michael Meeks: 2026-04-04 Saturday

      news.movim.eu / PlanetGnome • 6 days ago

    • Up earlyish, poked at some work. We all drove to Aldeburgh with the family to begin the sad task of sorting through Bruce's things with Anne, S&C& boys.
    • A somewhat draining day; death is such a sad thing, but good wider family spirit.
    • Picked up fish & chips in Aldeburgh on the way back; tragically helped in the aftermath of an extremely grisly accident in which a pedestrian was run over in Aldeburgh high-street, even sadder.

      This Week in GNOME: #243 Delayed Trains

      news.movim.eu / PlanetGnome • 3 April 2026 • 8 minutes

    Update on what happened across the GNOME project in the week from March 27 to April 03.

    GNOME Core Apps and Libraries

    Maps

    Maps gives you quick access to maps all across the world.

    mlundblad says

    Now Maps shows delays for public transit journeys (when there’s a realtime GTFS-RT feed available for the affected journey)


    Glycin

    Sandboxed and extendable image loading and editing.

    Sophie (she/her) reports

    After four weeks of work, glycin now supports compiled-in loaders. The main benefit of this is that glycin should now work on other operating systems like FreeBSD, Windows, or macOS.

    Glycin uses Linux-exclusive technologies to sandbox image operations: the image processing happens in a separate, isolated process. It would be very hard, if not impossible, to replicate this technology on other operating systems. Therefore, glycin now supports building loaders into it directly. This still provides a huge benefit compared to traditional image loaders, since almost all of the code is written in safe Rust. The feature of ‘builtin’ loaders can, in theory, be combined with ‘external’ loaders.

    The glycin crate now builds with external loaders for Linux, and automatically uses builtin loaders for all other operating systems. That means that libglycin should work on other operating systems as well. So far, the CI only contains builds cross compiled for x86_64-pc-windows-gnu and tested on Wine. Further testing, feedback, and fixes are very welcome.

    Image loaders that are written in C/C++, like those for HEIF, AVIF, SVG, and JPEG XL, are currently not supported for use without the sandbox. However, AVIF and JPEG XL already have Rust-based implementations, and rsvg might move away from libxml2, potentially allowing safe builtin loaders for these formats in the future.

    If you want, you can support my work financially on various platforms.

    GNOME Circle Apps and Libraries

    Pika Backup

    Keep your data safe.

    Sophie (she/her) reports

    On March 31, we observed Trans Day of Visibility 🏳️‍⚧️, World Backup Day, and fittingly, the release of Pika Backup 0.8. After two years of work, this release not only brings many small improvements, but also a rework of the code base that dates back to 2018. This will greatly help to keep Pika Backup stable and maintainable for another eight years.

    You can support the development on Open Collective or support my work via various other platforms.

    Big thanks to everyone who makes Pika Backup possible, especially BorgBackup, our donors, and translators.


    Third Party Projects

    Antonio Zugaldia announces

    Speed of Sound, voice typing for the Linux desktop, is now available on Flathub!

    Main features:

    • Offline, on-device transcription using Whisper. No data leaves your machine.
    • Multiple activation options: click the in-app button or use a global keyboard shortcut.
    • Types the result directly into any focused application using Portals for wide desktop support (X11, Wayland).
    • Multi-language support with switchable primary and secondary languages on the fly.
    • Works out of the box with the built-in Whisper Tiny model. Download additional models from within the app to improve accuracy.
    • Optional text polishing with LLMs, with support for a custom context and vocabulary.
    • Supports self-hosted services like vLLM, Ollama, and llama.cpp (cloud services supported but not required).
    • Built with the fantastic Java GI bindings; come hang out on #java-gi:matrix.org.

    Get it from https://flathub.org/en/apps/io.speedofsound.SpeedOfSound . Learn more on https://www.speedofsound.io .


    Ronnie Nissan reports

    This week I released Embellish v1.0.0. This is a major rewrite of the app from GJS to Vala, using all the experience I gained from making GTK apps over the past few years. The app now uses view models for the fonts ListBox and a GridView for the icons. Not only is the app more performant now, the code is also much nicer and easier to maintain and hack.

    You can get Embellish from Flathub

    Or you can contribute to its development and translation on GitHub



    Daniel Wood reports

    Design, 2D computer aided design (CAD) for GNOME sees a new release, highlights include:

    • Polyline Trim (TR)
    • Polyline Extend (EX)
    • Chamfer Command (CHA)
    • Fillet Command (F)
    • Inferred direction for Arc Command (A)
    • Diameter input for Circle Command (C)
    • Close option for Line Command (L)
    • Close and Undo options for Polyline Command (PL)
    • Multiple copies with Copy Command (CO)
    • Show angle in Distance Command (DI)
    • Performance improvements when panning
    • Nested elements with Hatch Command (H)
    • Consistent Toast message format
    • Plus many fixes!

    Design is available from Flathub:

    https://flathub.org/apps/details/io.github.dubstar_04.design


    Cleo Menezes Jr. reports

    Serigy has reached version 2, evolving into a focused, minimal clipboard manager. The release brings substantial improvements across functionality, performance, and user experience.

    The new version introduces automatic expiration of old clipboard items, incognito mode for privacy, and a grid view that brings clarity to your slots. Advanced features are now accessible through context menus and tooltips, while global shortcuts let you summon Serigy instantly from anywhere.

    Several bug fixes have improved stability and reliability, and the UI is significantly more responsive. The application now persists window size across sessions, and Wayland clipboard detection has been improved.

    Serigy 2 also refines its design. The app now supports Brazilian Portuguese, Russian, and Spanish (Chile).

    Get it on Flathub. Follow the development.


    Wildcard

    Test your regular expressions.

    says

    Wildcard 0.3.5 has been released, bringing matching of regex groups, a sidebar that now shows overall match and group information, and a new quick reference dialog showing common regular expression use cases! You can download the latest release from Flathub!


    Files

    Providing a simple and integrated way of managing your files and browsing your file system.

    Romain says

    I have created a Nautilus extension that adds an “Open in” context menu for installed IDEs, making it easy to open directories and files in them.

    It works with any IDE marked as such in its desktop entry. That includes IDEs in development containers such as Toolbx, if you create a desktop file for them on the host system.

    To install, download the latest version and follow the instructions in the README file.

    Metadata Cleaner

    View and clean metadata in files.

    GeopJr 🏳️‍⚧️🏳️‍🌈 says

    Metadata Cleaner is back, now with more adaptive layouts, bug fixes and features!

    Grab the latest release from Flathub!


    Fractal

    Matrix messaging app for GNOME written in Rust.

    Kévin Commaille reports

    Things have been fairly quiet since the Jason AI takeover, but here comes Fractal 14.beta.

    • Sending files & location is properly disabled while editing/replying, as it doesn’t work anyway.
    • Call rooms are identified with a camera icon in the sidebar and show a banner to warn that other users might not read messages in these rooms.
    • While we still support signing in via SSO, we have dropped support for identity providers, to simplify our code and have an experience closer to signing in with OAuth 2.0.
    • Map markers now use a darker variant of the accent color to have a better contrast with the map underneath.
    • Many small behind-the-scenes changes, mostly through dependency updates, and we have removed a few dependencies. Small improvements to the technical docs as well.

    As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

    It is available to install via Flathub Beta; see the instructions in our README.

    As the version implies, there might be a slight risk of regressions, but it should be mostly stable. If all goes well the next step is the release candidate!

    We are very excited to see several new contributors opening MRs lately to take care of their pet peeves with Fractal, which will benefit everyone in the end. If you have a little bit of time on your hands, you can try to join them by fixing one of our newcomer issues.

    Flood It

    Flood the board

    tfuxu reports

    Flood It 2.0 has been released! It now comes with a simple game explanation dialog, the ability to replay a recently played board, full translation support, and Ctrl+1…6 keyboard shortcuts for the color buttons.

    It also contains many under-the-hood improvements, like the transition from Gotk4 to Puregotk, a runtime update to GNOME 50, custom seed support, and much more.

    Check it out on Flathub !



    Bouncer

    Bouncer is an application to help you choose the correct firewall zone for wireless connections.

    justinrdonnelly announces

    Bouncer 50 was released this week, using the GNOME 50 runtime. It has bug fixes related to NetworkManager restarts, and autostart status. It also includes translations for Italian and Polish. Check it out on Flathub !

    GNOME Websites

    Guillaume Bernard says

    GNOME Damned Lies has seen a few UX improvements! For the release of GNOME 50, I added a specific tag for Damned Lies to track the changes and link them to existing GNOME cycles. You can see all the changes I already spoke about, like merge request support, background refresh of statistics, etc. (see: https://gitlab.gnome.org/Infrastructure/damned-lies/-/releases/gnome_50 ).

    After that, I am working towards GNOME 50.1. Why follow the GNOME release calendar? Because it provides pace, and pace is important while developing. After more than 3 years of fixing technical debt, refactoring, and upgrading existing code, you can see the pace of changes has increased a lot. At the beginning of GNOME 50, we had more than 120 open issues in the Damned Lies tracker; it’s down to 76 at the time of writing these lines.

    So what’s new this week? I worked a lot on long-standing UX-related issues and can proudly announce a few changes in Damned Lies:

    • Better consistency in many strings (‘Release’ vs ‘Release Set’, past tense in action history that previously used the infinitive form).
    • Administrators now see the modules maintained in the site backend.
    • String freeze notifications now expose the affected versions and are far more stable when detecting string freeze breaks.
    • You now have anchors in your team pages for each language your team is working on.
    • i18n coordinators are now identified by a mini badge in their profile, helping any user to reach them more easily.
    • i18n coordinators can take action in any workflow without being a member of the team they act on.
    • Users can remove their own accounts.

    That’s all for this week! 😃

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!