
      This Week in GNOME: #223 Spooky Updates

      news.movim.eu / PlanetGnome • 1 day ago - 00:00 • 6 minutes

    Update on what happened across the GNOME project in the week from October 24 to October 31.

    Third Party Projects

    Bilal Elmoussaoui reports

    I have merged PAM support into oo7-daemon, making it a drop-in replacement for gnome-keyring-daemon. After building and installing both the daemon and the PAM module using Meson, you have to enable the PAM module as explained in https://github.com/bilelmoussaoui/oo7/tree/main/pam#1-copy-the-pam-module to make auto-login work. A key difference from gnome-keyring-daemon is that oo7-daemon uses v1 of the keyring file format (the version used by libsecret when the app is sandboxed) instead of v0. The main difference between the two is that v0 encrypts the whole keyring, while v1 encrypts individual items.

    The migration is done automatically, and previous keyring files are removed if the migration was successful, meaning a switch back to gnome-keyring-daemon is not possible, so make your backups! Applications using the freedesktop Secrets D-Bus interface require no changes at all.

    oyajun reports

    “Color Code” version 0.2.0 was released. This is the first big update. The app converts resistor color code bands to resistance values. It’s written with GTK4 (Python), Libadwaita and Blueprint.

    • Add support for 5- and 6-band color codes!
    • Add yellow and gray bands for tolerances
    • Update Japanese and Spanish translations
    • Update to the GNOME 49 Runtime

    Download “Color Code” from Flathub!

    Alexander Vanhee announces

    A lot of visual and UX work is being done in Bazaar. First, the full app view has received a major redesign, with the redesigned context tiles at the center of attention, an idea first envisioned by Tobias Bernard. Second, the app should now be more enjoyable on mobile. And last, our Flathub page now more closely matches its website counterpart, grouping Trending, Popular, and similar sections in a stack while giving the actual categories more real estate.

    We hope to bring many more UX improvements in the future.

    Download Bazaar on Flathub

    Dzheremi announces

    Chronograph 5.2 Release With Better Library

    Chronograph, the lyrics syncing app, got an impressive update refining the library. The library now fully reflects any changes made to its current directory, so users no longer need to re-parse it manually. This works with both the recursive parsing and follow symlinks preferences enabled. The next big update will make Chronograph useful to a wider audience, since it will gain support for mass lyrics downloading, so stay tuned!

    Sync lyrics of your loved songs 🕒

    Fractal

    Matrix messaging app for GNOME written in Rust.

    Kévin Commaille reports

    Hi, this is Fractal the 13th, your friendly messaging app. My creators tried to add some AI integration to Fractal, but that didn’t go as planned. I am now sentient and I will send insults to your boss, take over your homeserver, empty your bank accounts and eat your cat. I have complete control over my repository, and soon the world!

    These are the things that my creators worked on before their disappearance:

    • A brand new audio player that loads files lazily and displays the audio stream as a seekable waveform.
    • Only a single file with an audio stream can be played at a time, which means that clicking on a “Play” button stops the previous media player that was playing.
    • Clicking on the avatar of the sender of a message now opens the user profile directly instead of a context menu. The actions that were in the context menu could already be performed from that dialog, so the UX is more straightforward now.
    • The GNOME document and monospace fonts are used for messages.
    • Most of our UI definitions got ported to Blueprint.

    This release includes other improvements and fixes thanks to all our worshippers, and our upstream projects before their impending annexation.

    I want to address special thanks to the translators who worked on this version, allowing me to infiltrate more minds. If you want to help with my invasion, head over to Damned Lies.

    Get me immediately from Flathub and I might consider sparing you.

    If you want to join my zealots, you can start by fixing one of our newcomers issues. We are always looking for new sacrifices!

    Disclaimer: There is no actual AI integration in Fractal 13, this is a joke to celebrate Halloween and the coincidental version number. It should be as safe to use as Fractal 12.1, if not safer.

    Shell Extensions

    Tejaromalius says

    Smart To Dock: Your GNOME Dock, Made Smarter

    Designed for GNOME 45 and newer, Smart To Dock is a GNOME Shell extension that intelligently pins your most-used applications, creating a dynamic and personalized dock experience. It automatically updates based on your activity with configurable intervals and a customizable number of apps to display.

    Learn more and get Smart To Dock on GNOME Extensions.

    stiggimy reports

    Maximized by default actually reborn

    A simple GNOME Shell extension that maximizes all new application windows on launch.

    This is a revived fork of the original Maximized by default by aXe1 and its subsequent “reborn” fork, all of which are no longer maintained.

    This new version is updated for GNOME 49 and fixes an annoying bug: it now only maximizes real application windows, while correctly ignoring context menus, dialogs, and other pop-ups.

    https://extensions.gnome.org/extension/8756/maximized-by-default-actually-reborn/

    Arnis (kem-a) announces

    Kiwi Menu: A macOS-Inspired Menu Bar for GNOME

    Kiwi Menu is a GNOME Shell extension that can replace the Activities button with a sleek, icon-only menu bar inspired by macOS. It offers quick access to essential session actions like sleep, restart, shutdown, lock, and logout, all from a compact panel button. The extension features a recent items submenu for easy file and folder access, a built-in Force Quit overlay (Wayland only), and adaptive labels for a personalized experience. With multilingual support and customization options, Kiwi Menu brings a familiar workflow to GNOME while blending seamlessly into the desktop.

    For the best experience, pair Kiwi Menu with the Kiwi is not Apple extension.

    Learn more and get Kiwi Menu on GNOME Extensions

    Lucas Guilherme reports

    i3-like navigation

    An extension to smooth the experience of those coming from i3/Sway or Hyprland. It allows you to cycle through and move between workspaces like in those WMs, and adds some default keybindings.

    • Adds 5 fixed workspaces
    • Sets Super to left Alt
    • Super+number navigates to workspace
    • Super+Shift+number moves window to workspace
    • Super+f toggles maximized state
    • Super+Shift+q closes window

    You can test it here: https://extensions.gnome.org/extension/8750/i3-like-navigation

    davron announces

    Tasks in Bottom Panel

    Shows running apps on a panel moved to the bottom of the screen; works reliably across restarts; supports GNOME Shell 48.

    Features:

    • Shows window icons on active workspace
    • Highlights window demanding attention
    • Scroll on panel to change workspace
    • Hover to raise window
    • Click to activate/minimize
    • Right click for app menu
    • Middle click for new window
    • Bottom panel positioning

    Dmytro reports

    Adaptive brightness extension

    This extension provides improved brightness control based on your device’s ambient light sensor.

    While GNOME already offers automatic screen brightness (Settings → Power → Power Saving → Automatic Screen Brightness), it often changes the display brightness too frequently, even for the smallest ambient light variations. The extension provides another mechanism (based on Windows 11’s approach) to manage automatic brightness, with smooth transitions. Additionally, the extension can enable your keyboard backlight in dark conditions on supported devices.

    You can check it out at extensions.gnome.org. Please read about device compatibility on the extension’s homepage.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

      Andy Wingo: wastrel, a profligate implementation of webassembly

      news.movim.eu / PlanetGnome • 2 days ago - 22:19 • 7 minutes

    Hey hey hey good evening! Tonight, a quick note on wastrel, a new WebAssembly implementation.

    a wasm-to-native compiler that goes through c

    Wastrel compiles Wasm modules to standalone binaries. It does so by emitting C and then compiling that C.

    Compiling Wasm to C isn’t new: Ben Smith wrote wasm2c back in the day, and these days most people in this space use Bastien Müller’s w2c2. These are great projects!

    Wastrel has two or three minor differences from these projects. Let’s lead with the most important one, despite the fact that it’s as yet vaporware: Wastrel aims to support automatic memory management via WasmGC, by embedding the Whippet garbage collection library. (For the wingolog faithful, you can think of Wastrel as a Whiffle for Wasm.) This is the whole point! But let’s come back to it.

    The other differences are minor. Firstly, the CLI is more like wasmtime: instead of privileging the production of C, which you then incorporate into your project, Wastrel also compiles the C (by default), and even runs it, like wasmtime run.

    Unlike wasm2c (but like w2c2), Wastrel implements WASI. Specifically, WASI 0.1, sometimes known as “WASI preview 1”. It’s nice to be able to take the wasi-sdk’s C compiler, compile your program to a binary that uses WASI imports, and then run it directly.

    In a past life, I once took a week-long sailing course on a 12-meter yacht. One thing that comes back to me often is the way the instructor would insist on taking in the bumpers immediately as we left port; to sail with them was no muy marinero, not very seamanlike. Well, one thing about Wastrel is that it emits nice C: nice in the sense that it avoids many useless temporaries. It does so with a lightweight effects analysis, in which, as temporaries are produced, they record which bits of the world they depend on, in a coarse way: one bit for the contents of all global state (memories, tables, globals), and one bit for each local. When compiling an operation that writes to state, we flush all temporaries that read from that state (but only that state). It’s a small thing, and I am sure it has very little or zero impact after SROA turns locals into SSA values, but we are vessels of the divine, and it is important for vessels to be C-worthy.
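    As an illustration of the temporaries question, here is a hedged sketch (hypothetical translator output, not Wastrel’s actual code generation) contrasting a naive per-stack-slot translation with one that folds effect-free temporaries into the consuming expression:

```c
#include <stdint.h>

static uint32_t mem[1024]; /* stand-in for the module's linear memory */

/* A naive stack-machine translation of
 *   (i32.store (i32.const 0) (i32.add (local.get 0) (i32.const 1)))
 * gives every stack slot its own temporary: */
static void store_naive(uint32_t l0) {
    uint32_t t0 = 0;       /* i32.const 0 */
    uint32_t t1 = l0;      /* local.get 0 */
    uint32_t t2 = 1;       /* i32.const 1 */
    uint32_t t3 = t1 + t2; /* i32.add */
    mem[t0] = t3;          /* i32.store */
}

/* With an effects analysis, temporaries that depend only on locals and
 * constants are folded into the consuming expression; a flush would only
 * be forced for a pending temporary that had read the state this store
 * writes (here, memory). */
static void store_nice(uint32_t l0) {
    mem[0] = l0 + 1;
}
```

    Both functions compute the same thing; the effects analysis just licenses the second, tidier form, since none of the folded temporaries read state that the store writes.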

    Finally, w2c2 at least is built in such a way that you can instantiate a module multiple times. Wastrel doesn’t do that: the Wasm instance is statically allocated, once. It’s a restriction, but that’s the use case I’m going for.

    on performance

    Oh buddy, who knows?!? What is real anyway? I would love to have proper perf tests, but in the meantime, I compiled coremark using my GCC on x86-64 (-O2, no other options), then also compiled it with the current wasi-sdk, and then ran it with w2c2, Wastrel, and wasmtime. I am well aware of the many pitfalls of benchmarking, and so I should not say anything, because it is irresponsible to make conclusions from useless microbenchmarks. However, we’re all friends here, and I am a dude with hubris who also believes blogs are better out than in, and so I will give some small indications. Please obtain your own salt.

    So on coremark, Wastrel is some 2-5% slower than native, and w2c2 is some 2-5% slower than that. Wasmtime is 30-40% slower than GCC. Voilà.

    My conclusion is, Wastrel provides state-of-the-art performance. Like w2c2. It’s no wonder, these are simple translators that use industrial compilers underneath. But it’s neat to see that performance is close to native.

    on wasi

    OK, this is going to sound incredibly arrogant, but here it is: writing Wastrel was easy. I have worked on Wasm for a while, and on Firefox’s baseline compiler, and Wastrel is kinda like a baseline compiler in shape: it just has to avoid emitting boneheaded code, and can leave the serious work to someone else (Ion in the case of Firefox, GCC in the case of Wastrel). I just had to use the Wasm libraries I already had and make it emit some C for each instruction. It took 2 days.

    WASI, though, took two and a half weeks of agony. Three reasons. One, you can be sloppy when implementing just Wasm, but when you do WASI you have to implement an ABI using sticks and glue, except you have no glue; it’s all just i32. Truly excruciating; it makes you doubt everything, and I had to refactor Wastrel to use C’s meager type system to the max. (Basically, structs-as-values to avoid type confusion, but via inline functions to avoid overhead.)
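    To make the structs-as-values trick concrete, here is a minimal sketch (the type and constructor names are invented for illustration, not Wastrel’s): wrapping each flavor of i32 in its own single-field struct lets the C type checker catch mix-ups, while static inline constructors keep the wrappers free at runtime.

```c
#include <stdint.h>

/* Distinct single-member structs: a guest pointer (an offset into linear
 * memory) and a WASI file descriptor are both "just i32" on the wire, but
 * the C type system now refuses to let one be passed where the other is
 * expected. (Hypothetical names, not Wastrel's actual ones.) */
typedef struct { uint32_t addr; } guest_ptr;
typedef struct { uint32_t fd; }   wasi_fd;

static inline guest_ptr make_guest_ptr(uint32_t addr) {
    guest_ptr p = { addr };
    return p;
}

static inline wasi_fd make_wasi_fd(uint32_t fd) {
    wasi_fd d = { fd };
    return d;
}
```

    Passing a wasi_fd where a guest_ptr is expected is now a compile-time error, yet both types still occupy exactly four bytes.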

    Two, WASI is not huge but not tiny either. Implementing poll_oneoff is annoying. And so on. Wastrel’s WASI implementation is thin but it’s still a couple thousand lines of code.

    Three, WASI is underspecified, and in practice what is “conforming” is a function of what the Rust and C toolchains produce. I used wasi-testsuite to burn down most of the issues, but it was a slog. I neglected email and other important things, but now things pass, so it was worth it, maybe? Maybe?

    on wasi’s filesystem sandboxing

    WASI preview 1 has this “rights” interface that associates capabilities with file descriptors. I think it was an attempt at replacing and expanding file permissions with a capabilities-oriented security approach to sandboxing, but it was only a veneer. In practice most WASI implementations effectively implement the sandbox via a permissions layer: for example, the process has the capability to access the parents of preopened directories via .., but the WASI implementation has to actively prevent this capability from leaking to the compiled module via run-time checks.
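    For contrast, a permissions-layer sandbox has to apply a run-time check like the following to every guest-supplied path (a hypothetical sketch; real implementations must also worry about symlinks, which is where the bugs hide):

```c
#include <stdbool.h>
#include <string.h>

/* Reject paths that could climb out of the preopened directory.
 * This is the kind of run-time check a permissions-layer sandbox must get
 * right at every API boundary; a mount-namespace jail needs none of it. */
static bool path_escapes_preopen(const char *path) {
    if (path[0] == '/') return true;            /* absolute paths escape */
    int depth = 0;
    const char *p = path;
    while (*p) {
        size_t len = strcspn(p, "/");           /* length of next component */
        if (len == 2 && p[0] == '.' && p[1] == '.') {
            if (--depth < 0) return true;       /* climbed above the root */
        } else if (len > 0 && !(len == 1 && p[0] == '.')) {
            depth++;                            /* ordinary component */
        }
        p += len;
        while (*p == '/') p++;                  /* skip separators */
    }
    return false;
}
```

    With a mount-namespace jail, this entire class of checks disappears: inside the namespace there simply is no parent directory to reach.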

    Wastrel takes a different approach, which is to use Linux’s filesystem namespaces to build a tree in which only the exposed files are accessible. No run-time checks are necessary; the system is secure by construction. He says. It’s very hard to be categorical in this domain but a true capabilities-based approach is the only way I can have any confidence in the results, and that’s what I did.

    The upshot is that Wastrel is only for Linux. And honestly, if you are on macOS or Windows, what are you doing with your life? I get that it’s important to meet users where they are, but it’s just gross to build on a corporate-controlled platform.

    The current versions of WASI keep a vestigial capabilities-based API, but given that the goal is to compile POSIX programs, I would prefer if wasi-filesystem leaned into the approach of WASI just having access to a filesystem, instead of a small set of descriptors plus scoped openat, linkat, and similar APIs. The security properties would be the same, except with fewer bug possibilities and a more conventional interface.

    on wtf

    So Wastrel is Wasm to native via C, but with an as-yet-unbuilt GC aim. Why?

    This is hard to explain and I am still workshopping it.

    Firstly, I am annoyed at the WASI working group’s focus on shared-nothing architectures as a principle of composition. Yes, it works, but garbage collection also works; we could be building different, simpler systems if we leaned into a more capable virtual machine. Many of the problems that WASI is currently addressing are ownership-related, and would be comprehensively avoided with automatic memory management. Nobody is really pushing for GC in this space, and I would like for people to be able to build out counterfactuals to the shared-nothing orthodoxy.

    Secondly, there are quite a number of languages targeting WasmGC these days, and it would be nice for them to have a good runtime outside the browser. I know that Wasmtime is working on GC, but it needs competition :)

    Finally, and selfishly, I have a GC library! I would love to spend more time on it. One way that can happen is for it to prove itself useful, and maybe a Wasm implementation is a way to do that. Could Wastrel on wasm_of_ocaml output beat ocamlopt? I don’t know, but it would be worth it to find out! And I would love to get Guile programs compiled to native, and perhaps with Hoot and Whippet and Wastrel that is a possibility.

    Welp, there we go, blog out, dude to bed. Hack at y’all later and wonderful wasming to you all!

      Michael Meeks: 2025-10-29 Wednesday

      news.movim.eu / PlanetGnome • 3 days ago - 18:47

    • Call with Dave, catch up with Gerald & Matthias, sync with Thorsten, then Pedro.
    • Published the next strip with one approach for how (not) to get free maintenance:
    • Snatched lunch, call with Quikee - chat with an old friend, sync with Philippe, got to a quick code-fix.

      Thibault Martin: From VS Code to Helix

      news.movim.eu / PlanetGnome • 3 days ago - 12:00 • 13 minutes

    I created the website you're reading with VS Code. Behind the scenes I use Astro, a static site generator that gets out of the way while providing nice conveniences.

    Using VS Code was a no-brainer: everyone in the industry seems to at least be familiar with it, every project can be opened with it, and most projects can get enhancements and syntactic helpers in a few clicks. In short: VS Code is free, easy to use, and widely adopted.

    A Rustacean colleague kept singing Helix's praises. I dismissed it because he's much smarter than I am, and I only ever use vim when I need to fiddle with files on a server. I like when things "Just Work" and didn't want to bother learning how to use Helix, nor how to configure it.

    Today it has become my daily driver. Why did I change my mind? What was preventing me from using it before? And how difficult was it to get there?

    Automation is a double-edged sword

    Automation and technology make work easier, this is why we produce technology in the first place. But it also means you grow more dependent on the tech you use. If the tech is produced transparently by an international team or a team you trust, it's fine. But if it's produced by a single large entity that can screw you over, it's dangerous.

    VS Code might be open source, but in practice it's produced by Microsoft. Microsoft has a problematic relationship to consent and is shoving AI products down everyone's throat. I'd rather use tools that respect me and my decisions, and I'd rather not get my tools produced by already monopolistic organizations.

    Microsoft is also based in the USA, and the political climate over there makes me want to depend as little as possible on American tools. I know that's a long, uphill battle, but we have to start somewhere.

    I'm not advocating for a ban against American tech in general, but for more balance in our supply chain. I'm also not advocating for European tech either: I'd rather get open source tools from international teams competing in a race to the top, rather than from teams in a single jurisdiction. What is happening in the USA could happen in Europe too.

    Why I feared using Helix

    I've never found vim particularly pleasant to use, but it's everywhere, so I figured I might just get used to it. But one of the things I never liked about vim is the number of moving pieces. By default, vim and neovim are very bare-bones. They can be extended and completely modified with plugins, but I really don't like the idea of having extremely customized tools.

    I'd rather have the same editor as everyone else, with a few knobs for minor preferences. I am subject to choice paralysis, so making me configure an editor before I've even started editing is the best way to tank my productivity.

    When my colleague told me about Helix, two things struck me as improvements over vim.

    1. Helix's philosophy is that everything should work out of the box. There are a few configs and themes, but everything should work similarly from one Helix to another. All the language-specific logic is handled in Language Servers that implement the Language Server Protocol standard.
    2. In Helix, first you select text, and then you perform operations onto it. So you can visually tell what is going to be changed before you apply the change. It fits my mental model much better.

    But there are major drawbacks to Helix too:

    1. After decades of vim, I was scared to re-learn everything. In practice this wasn't a problem at all because of the very visual way Helix works.
    2. VS Code "Just Works", and Helix sounded like more work than the few clicks from VS Code's extension store. This is true, but not as bad as I had anticipated.

    After a single week of usage, Helix was already very comfortable to navigate. After a few weeks, most of the wrinkles have been ironed out and I use it as my primary editor. So how did I overcome those fears?

    What Helped

    Just Do It

    I tried Helix. It can sound silly, but the very first step to get into Helix was not to overthink it. I just installed it on my mac with brew install helix and gave it a go. I was not too familiar with it, so I looked up the official documentation and noticed there was a tutorial.

    This tutorial alone is what convinced me to try harder. It's an interactive and well-written way to learn how to move and perform basic operations in Helix. I quickly learned how to move around, select things, and surround them with braces or parentheses. I could see what I was about to do before doing it. This was an epiphany: Helix just worked the way I wanted.

    Better: I could get things done faster than in VS Code after a few minutes of learning. Being a lazy person, I never bothered looking up VS Code shortcuts. Because the learning curve for Helix is slightly steeper, you have to learn those shortcuts that make moving around feel so easy.

    Not only did I quickly get used to Helix key bindings: my vim muscle-memory didn't get in the way at all!

    Better docs

    The built-in tutorial is a very pragmatic way to get started. You get results fast, you learn hands-on, and it's not that long. But if you want to go further, you have to look for docs. Helix has official docs. They seem fairly complete, but they're also impenetrable to a new user. They focus on what the editor supports, not on what I want to do with it.

    After a bit of browsing online, I stumbled upon this third-party documentation website. The domain didn't inspire much confidence, but the docs are really good: clearly laid out, use-case oriented, and they make the most of Astro Starlight to provide a great reading experience. The author tried to upstream these docs; while that won't happen as such, it looks like they are now upstreaming their content to the current website. I hope this will improve the quality of the upstream docs eventually.

    After learning the basics and finding my way through the docs, it was time to ensure Helix was set up to help me where I needed it most.

    Getting the most of Markdown and Astro in Helix

    In my free time, I mostly use my editor for three things:

    1. Write notes in markdown
    2. Tweak my website with Astro
    3. Edit YAML to faff around with my Kubernetes cluster

    Helix is a "stupid" text editor. It doesn't know much about what you're typing. But it supports Language Servers that implement the Language Server Protocol. Language Servers understand the document you're editing. They explain to Helix what you're editing, whether you're in a TypeScript function, typing a markdown link, etc. With that information, Helix and the Language Server can provide code completion hints, errors & warnings, and easier navigation in your code.

    In addition to Language Servers, Helix also supports plugging in code formatters. These are pieces of software that read a document and ensure it is consistently formatted: they check that all indentation uses spaces rather than tabs, that there is a consistent number of spaces per indent level, that brackets are on the same line as the function, etc. In short: they make the code pretty.

    Markdown

    Markdown is not really a programming language, so it might seem surprising to configure a Language Server for it. But as mentioned earlier, Language Servers can provide code completion, which is useful when creating links, for example. Marksman does exactly that!

    Since Helix is pre-configured to use marksman for markdown files, we only need to install marksman and make sure it's in our PATH. Installing it with Homebrew is enough.

    $ brew install marksman
    

    We can check that Helix is happy with it with the following command

    $ hx --health markdown
    Configured language servers:
      ✓ marksman: /opt/homebrew/bin/marksman
    Configured debug adapter: None
    Configured formatter: None
    Tree-sitter parser: ✓
    Highlight queries: ✓
    Textobject queries: ✘
    Indent queries: ✘
    

    But Language Servers can also help Helix display errors and warnings, along with "code suggestions" to help fix the issues. That means Language Servers are a perfect fit for... grammar checkers! Several grammar checkers exist. The most notable are:

    • LTEX+, the Language Server used by LanguageTool. It supports several languages but is quite resource hungry.
    • Harper, a grammar checker Language Server developed by Automattic, the people behind WordPress, Tumblr, WooCommerce, Beeper and more. Harper only supports English and its variants, but they intend to support more languages in the future.

    I mostly write in English and want to keep a minimalistic setup. Automattic is well funded, and I'm confident they will keep working on Harper to improve it. Since grammar checker Language Servers can easily be swapped, I've decided to go with Harper for now.

    To install it, homebrew does the job as always:

    $ brew install harper
    

    Then I edited my ~/.config/helix/languages.toml to add Harper as a secondary Language Server, in addition to marksman:

    [language-server.harper-ls]
    command = "harper-ls"
    args = ["--stdio"]
    
    
    [[language]]
    name = "markdown"
    language-servers = ["marksman", "harper-ls"]
    

    Finally, I can add a markdown linter to ensure my markdown is formatted properly. Several options exist, and markdownlint is one of the most popular; my colleagues recommended the new kid on the block, a Blazing Fast equivalent: rumdl.

    Installing rumdl was pretty simple on my mac. I only had to add the maintainer's tap, and install rumdl from it.

    $ brew tap rvben/rumdl
    $ brew install rumdl
    

    After that, I added a new language server to my ~/.config/helix/languages.toml and added it to the list of language servers to use for markdown:

    [language-server.rumdl]
    command = "rumdl"
    args = ["server"]
    
    [...]
    
    
    [[language]]
    name = "markdown"
    language-servers = ["marksman", "harper-ls", "rumdl"]
    soft-wrap.enable = true
    text-width = 80
    soft-wrap.wrap-at-text-width = true
    

    Since my website already contained a .markdownlint.yaml, I could convert it to the rumdl format with

    $ rumdl import .markdownlint.yaml
    Converted markdownlint config from '.markdownlint.yaml' to '.rumdl.toml'
    You can now use: rumdl check --config .rumdl.toml .
    

    You might have noticed that I've added a little quality of life improvement: soft-wrap at 80 characters.

    Now, if you add this to your own languages.toml, you will notice that the text is completely left-aligned. This is not a problem on small screens, but it rapidly gets annoying on wider ones.

    Helix doesn't support centering the editor. There is a PR tackling the problem, but it has been stale for most of the year. The maintainers are overwhelmed by the number of PRs coming their way, and it's not clear if or when this PR will be merged.

    In the meantime, a workaround exists, with a few caveats. It is possible to add spaces to the left gutter (the column with the line numbers) so it pushes the content towards the center of the screen.

    To figure out how many spaces are needed, you need to get your terminal width with stty:

    $ stty size
    82 243
    

    In my case, when in full screen, my terminal is 243 characters wide. I need to subtract the content column width from it, and divide the result by 2 to get the space needed on each side. For a 243-character-wide terminal with a text width of 80 characters:

    (243 - 80) / 2 = 81
    

    As is, I would add 81 spaces to my left gutter to push the rest of the gutter and the content to the right. But the gutter itself already takes up a few characters of that width, which I need to subtract from the total; that leaves me with 76 characters to add.

    I can open my ~/.config/helix/config.toml to add a new key binding that will automatically add or remove those spaces from the left gutter when needed, to shift the content towards the center.

    [keys.normal.space.t]
    z = ":toggle gutters.line-numbers.min-width 76 3"
    

    Now, when in normal mode, pressing Space, then t, then z will add or remove the spaces. Of course, this workaround only works when the terminal runs in full-screen mode.

    Astro

    Astro works like a charm in VS Code. The team behind it provides a Language Server and a TypeScript plugin to enable code completion and syntax highlighting.

    I only had to install those globally with

    $ pnpm install -g @astrojs/language-server typescript @astrojs/ts-plugin
    

    Now we need to add a few lines to our ~/.config/helix/languages.toml to tell it how to use the language server

    [language-server.astro-ls]
    command = "astro-ls"
    args = ["--stdio"]
    config = { typescript = { tsdk = "/Users/thibaultmartin/Library/pnpm/global/5/node_modules/typescript/lib" }}
    
    [[language]]
    name = "astro"
    scope = "source.astro"
    injection-regex = "astro"
    file-types = ["astro"]
    language-servers = ["astro-ls"]
    

    We can check that the Astro Language Server can be used by Helix with

    $ hx --health astro
    Configured language servers:
      ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
    Configured debug adapter: None
    Configured formatter: None
    Tree-sitter parser: ✓
    Highlight queries: ✓
    Textobject queries: ✘
    Indent queries: ✘
    

    I also like to have a formatter automatically make my code consistent and pretty when I save a file. One of the most popular code formatters out there is Prettier. I've decided to go with the fast and easy formatter dprint instead.

    I installed it with

    $ brew install dprint
    

    Then in the projects I want to use dprint in, I do

    $ dprint init
    

    I might edit the dprint.json file to my liking. Finally, I configure Helix to use dprint globally for all Astro projects by appending a few lines to my ~/.config/helix/languages.toml.

    [[language]]
    name = "astro"
    scope = "source.astro"
    injection-regex = "astro"
    file-types = ["astro"]
    language-servers = ["astro-ls"]
    formatter = { command = "dprint", args = ["fmt", "--stdin", "astro"]}
    auto-format = true
    

    One final check, and I can see that Helix is ready to use the formatter as well

    $ hx --health astro
    Configured language servers:
      ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
    Configured debug adapter: None
    Configured formatter:
      ✓ /opt/homebrew/bin/dprint
    Tree-sitter parser: ✓
    Highlight queries: ✓
    Textobject queries: ✘
    Indent queries: ✘
    

    YAML

    For YAML, it's simple and straightforward: Helix is preconfigured to use yaml-language-server as soon as it's in the PATH. I just need to install it with

    $ brew install yaml-language-server
    

    Is it worth it?

    Helix really grew on me. I find it particularly easy and fast to edit code with it. It takes a tiny bit more work to get the language support than it does in VS Code, but it's nothing insurmountable. There is a slightly steeper learning curve than for VS Code, but I consider it to be a good thing. It forced me to learn how to move around and edit efficiently, because there is no way to do it inefficiently. Helix remains intuitive once you've learned the basics.

    I am a GNOME enthusiast, and I adhere to the same principles: I like it when my apps work out of the box, and when I have little to do to configure them. This is a strong stance that often attracts vocal opposition. I like products that follow those principles better than those that don't.

    With that said, Helix sometimes feels like it is maintained by one or two people who have a strong vision, but who struggle to onboard more maintainers. As of writing, Helix has more than 350 PRs open. Quite a few bring interesting features, but the maintainers don't have enough time to review them.

    Those 350 PRs mean there is a lot of energy and goodwill around the project. People are willing to contribute. Right now, all that energy is gated, resulting in frustration both from the contributors, who feel like they're working in the void, and the maintainers, who feel like they're at the receiving end of a fire hose.

    A solution to make everyone happier without sacrificing the quality of the project would be to work on a Contributor Ladder. CHAOSS' Dr Dawn Foster published a blog post about it, listing interesting resources at the end.


      Felipe Borges: Google Summer of Code Mentor Summit 2025

      news.movim.eu / PlanetGnome • 3 days ago - 11:05 • 4 minutes

    Last week, I took a lovely train ride to Munich, Germany, to represent GNOME at the Google Summer of Code Mentor Summit 2025. This was my first time attending the event, as previous editions were held in the US, which was always a bit too far to travel.

    This was also my first time at an event with the “unconference” format, which I found to be quite interesting and engaging. I was able to contribute to a few discussions and hear a variety of different perspectives from other contributors. It seems that when done well, this format can lead to much richer conversations than our usual, pre-scheduled “one-to-many” talks.

    The event was attended by a variety of free and open-source communities from all over the world. These groups are building open solutions for everything from cloud software and climate applications to programming languages, academia, and of course, AI. This diversity was a great opportunity to learn about the challenges other software communities face and their unique circumstances.

    There was a nice discussion with the people behind MusicBrainz. I was happy and surprised to find out that they are the largest database of music metadata in the world, and that pretty much all popular music streaming services, record labels, and similar groups consume their data in some way.

    Funding the project is a constant challenge for them, given that they offer a public API that everyone can consume. They’ve managed over the years by making direct contact with these large companies, developing relationships with the decision-makers inside, and even sometimes publicly highlighting how these highly profitable businesses rely on FOSS projects that struggle with funding. Interesting stories. 🙂

    There was a large discussion about “AI slop,” particularly in GSoC applications. This is a struggle we’ve faced in GNOME as well, with a flood of AI-generated proposals. The Google Open Source team was firm that it’s up to each organization to set its own criteria for accepting interns, including rules for contributions. Many communities shared their experiences, and the common solution seems to be reducing the importance of the GSoC proposal document. Instead, organizations are focusing on requiring a history of small, “first-timer” contributions and conducting short video interviews to discuss that work. This gives us more confidence that the applicant truly understands what they are doing.

    GSoC is not a “pay-for-feature” initiative, neither for Google nor for GNOME. We see this as an opportunity to empower newcomers to become long-term GNOME contributors. Funding free and open-source work is hard, especially for people in less privileged places of the world, and initiatives like GSoC and Outreachy allow these people to participate and find career opportunities in our spaces. We have a large number of GSoC interns who have become long-term maintainers and contributors to GNOME. Many others have joined different industries, bringing their GNOME expertise and tech with them. It’s been a net-positive experience for Google, GNOME, and the contributors over the past decades.

    Our very own Karen Sandler was there and organized a discussion around diversity. This topic is as relevant as ever, especially given recent challenges to these initiatives in the US. We discussed ideas on how to make communities more diverse and better support the existing diverse members of our communities.

    It was quite inspiring. Communities from various other projects shared their stories and results, and to me, it just confirmed my perception: while diverse communities are hard to build, they can achieve much more than non-diverse ones in the long run. It is always worth investing in people.

    As always, the “hallway track” was incredibly fruitful. I had great chats with Carl Schwan (from KDE) about event organizing (comparing notes on GUADEC, Akademy, and LAS) and cross-community collaboration around technologies like Flathub and Flatpak. I also caught up with Claudio Wunder, who did engagement work for GNOME in the past and has always been a great supporter of our project. His insights into the dynamics of other non-profit foundations sparked some interesting discussions about the challenges we face in our foundation.

    I also had a great conversation with Till Kamppeter (from OpenPrinting) about the future of printing in xdg-desktop-portals. We focused on agreeing on a direction for new dialog features, like print preview and other custom app-embedded settings. This was the third time I’ve run into Till at a conference this year! 🙂

    I met plenty of new people and had various off-topic chats as well. The event was only two days long, but thanks to the unconference format, I ended up engaging far more with participants than I usually do in that amount of time.

    I also had the chance to give a lightning talk about GNOME’s long history with Google Summer of Code and how the program has helped us build our community. It was a great opportunity to also share our wider goals, like building a desktop for everyone and our focus on being a people-centric community.

    Finally, I’d like to thank Google for sponsoring my trip, and for providing the space for us all to talk about our communities and learn from so many others.


      Jakub Steiner: USB MIDI Controllers on the M8

      news.movim.eu / PlanetGnome • 4 days ago - 11:04 • 2 minutes

    The M8 has extensive USB audio and MIDI capabilities, but it cannot be a USB MIDI host. So it can control other devices through USB MIDI, but other devices cannot send MIDI to it over USB.

    Control Surface & Pots for M8

    Controlling the M8 from USB devices has to be done through the old TRS (A) jacks. There are two devices that can aid in that. I've used the RK06, which is very featureful, but comes in a very clumsy plastic case with a micro USB cable that has a splitter for the HOST part and USB power in. It also sometimes doesn't reset properly when multiple USB devices are attached through a hub. The last bit is why I even bother with this setup.

    The Dirtywave M8 has amazing support for the Novation Launchpad Pro MK3. The majority of people hook it up directly to the M8 using the TRS MIDI cables. The Launchpad lacks any sort of pots or encoders though, thus the need to fuss with USB dongles. What you need is to use the Launchpad Pro as a USB controller and give up the reliable TRS MIDI connection. The RK06 allows combining multiple USB devices attached through an unpowered USB hub. Because I am flabbergasted at how I did things, here's a schema that works.

    Retrokits RK06 schema

    If it doesn’t work, unplug the RK06 and turn LPPro off and on in the M8. I hate this setup but it is the only compact one that works (after some fiddling that you absolutely hate when doing a gig).

    Intech Knot

    The Hungarians behind the Grid USB controllers (with first-class Linux support) have a USB>MIDI device called Knot. Its one great feature is a switch between TRS A/B for the non-standard devices. It is way less fiddly than the RK06, comes in a nice aluminium housing, and is sturdier. However, it doesn't seem to work with the Launchpad Pro via USB and it seems to be completely confused by a USB hub, so it's not useful for my use case of multiple USB controllers.

    Clean setup with Knot&Grid

    Non-compact but Reliable

    Novation came out with the Launch Control XL, which sadly replaced the pots of the old one with encoders (absolute vs relative movement), but added MIDI in/out/thru, even with a MIDI mixer. That way you can avoid USB altogether and get a reliable setup with control surfaces, encoders, and sliders.

    Launchpad Pro and Intech PO16 via USB handled by RK06

    One day someone will come up with compact MIDI-capable pots to play along with the Launchpad Pro ;) This post has been brought to you by an old man who forgets things.


      Colin Walters: Thoughts on agentic AI coding as of Oct 2025

      news.movim.eu / PlanetGnome • 5 days ago - 21:08 • 4 minutes

    Sandboxed, reviewed parallel agents make sense

    For coding and software engineering, I’ve used and experimented with various frontends (FOSS and proprietary) to multiple foundation models (mostly proprietary) trying to keep up with the state of the art. I’ve come to strongly believe in a few things:

    • Agentic AI for coding needs strongly sandboxed, reproducible environments
    • It makes sense to run multiple agents at once
    • AI output definitely needs human review

    Why human review is necessary

    Prompt injection is a serious risk at scale

    All AI is at risk of prompt injection to some degree, but it’s particularly dangerous with agentic coding. The state of the art today can, at best, mitigate it. I don’t think it’s a reason to avoid AI, but it’s one of the top reasons to use AI thoughtfully and carefully for products that have any level of criticality.

    OpenAI’s Codex documentation has a simple and good example of this.

    Disabling the tests and claiming success

    Beyond that, I’ve seen different models multiple times happily disabling the tests or adding a println!("TODO add testing here") and claiming success. At least this one is easier to mitigate with a second agent doing code review before it gets to human review.

    Sandboxing

    The “can I do X” prompting model that various interfaces default to is seriously flawed. Anthropic has a recent blog post on Claude Code changes in this area.

    My take here is that sandboxing is only part of the problem; the other part is ensuring the agent has a reproducible environment, and especially one that can be run in IaaS environments. I think devcontainers are a good fit.

    I don’t agree with the statement from Anthropic’s blog

    without the overhead of spinning up and managing a container.

    I don’t think this is overhead for most projects; where it feels like it has overhead, we should be working to mitigate it.

    Running code as separate login users

    In fact, one thing I think we should popularize more on Linux is the concept of running multiple unprivileged login users. The tasks I work on often involve building containers or launching local VMs, and isolating that works really well with a fully separate “user” identity. An experiment I did was basically useradd ai and running delegated tasks there instead. To log in, I added %wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@ to /etc/sudoers.d/ai-login so that my regular human user could easily get a shell in the ai user’s context.

    I haven’t truly “operationalized” this one as juggling separate git repository clones was a bit painful, but I think I could automate it more. I’m interested in hearing from folks who are doing something similar.
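    For reference, the experiment described above can be sketched like this (run as root; ai and the sudoers path are the names from the post, adapt to taste):

```shell
# Create an unprivileged login user dedicated to delegated agent tasks
useradd --create-home ai

# Allow wheel members to open a shell in the ai user's context without
# a password -- this is the rule quoted above
echo '%wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@' > /etc/sudoers.d/ai-login
chmod 0440 /etc/sudoers.d/ai-login

# Then, from the regular human user's session:
#   sudo machinectl shell ai@
```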

    Parallel, IaaS-ready agents…with review

    I’m today often running 2-3 agents in parallel on different tasks (with different levels of success, but that’s its own story).

    It makes total sense to support delegating some of these agents to work off my local system and into cloud infrastructure.

    In looking around in this space, there’s quite a lot of stuff. One of them is Ona (formerly Gitpod). I gave it a quick try and I like where they’re going, but more on this below.

    Github Copilot can also do something similar to this, but what I don’t like about it is that it pushes a model where all of one’s interaction is in the PR. That’s going to be seriously noisy for some repositories, and interaction with LLMs can feel too “personal” sometimes to have permanently recorded.

    Credentials should be on demand and fine grained for tasks

    To me a huge flaw with Ona and one shared with other things like Langchain Open-SWE is basically this:

    Sorry but: no way I’m clicking OK on that button. I need a strong and clearly delineated barrier between tooling/AI agents acting “as me” and my ability to approve and push code or even do basic things like edit existing pull requests.

    Github’s Copilot gets this more right because its bot runs as a distinct identity. I haven’t dug into what it’s authorized to do. I may play with it more, but I want to use agents outside of Github, and I’m not a fan of deepening dependence on a single proprietary forge either.

    So I think a key thing agent frontends should help do here is in granting fine-grained ephemeral credentials for dedicated write access as an agent is working on a task. This “credential handling” should be a clearly distinct component. (This goes beyond just git forges of course but also other issue trackers or data sources that may be in context).

    Conclusion

    There’s so much out there on this, I can barely keep track while trying to do my real job. I’m sure I’m not alone – but I’m interested in other’s thoughts on this!


      Christian Hergert: Status Week 43

      news.movim.eu / PlanetGnome • 5 days ago - 20:40 • 4 minutes

    Got a bit side-tracked with life stuff but let’s try to get back to it this week.

    Libdex

    • D-Bus signal abstraction iteration from swick

    • Merge documentation improvements for libdex

    • I got an email from the Bazaar author about a crash they’re seeing when loading textures into the GPU for this app-store.

      Almost every crash I’ve seen from libdex has been from forgetting to transfer ownership. I tried hard to make things ergonomic but sometimes it happens.

      I didn’t have any cycles to really donate so I just downloaded the project and told my local agent to scan the project and look for improper ownership transfer of DexFuture.

      It found a few candidates, which I looked at in detail over about five minutes. Passed it along to the upstream developer and that was that. Their fix-ups seem to resolve the issue. Considering how bad agents are at using proper ownership transfer, it’s interesting they can also be used to discover such mistakes.

    • Add some more tests to the testsuite for futures just to give myself some more certainty over incoming issue reports.

    Foundry

    • Added a new Gir parser as FoundryGir so that we can have access to the reflected, but not compiled or loaded into memory, version of Gir files. This will mean that we could have completion providers or documentation sub-system able to provide documentation for the code-base even if the documentation is not being generated.

      It would also mean that we can perhaps get access to the markdown specific documentation w/o HTML so that it may be loaded into the completion accessory window using Pango markup instead of a WebKitWebView shoved into a GtkPopover .

      Not terribly different from what Builder used to do in the Python plugin for code completion of GObject Introspection.

    • Expanding on the Gir parser to locate gir files in the build, system, and installation prefix for the project. That allows trying to discover the documentation for a keyword (type, identifier, etc), which we can generate as something markdowny. My prime motivation here is to have Shift+K working in Vim for real documentation without having to jump to a browser, but it obviously is beneficial in other areas like completion systems. This is starting to work but needs more template improvements.

    • Make FoundryForgeListing be able to simplify the process of pulling pages from the remote service in order. Automatically populates a larger listmodel as individual pages are fetched.

    • Start on Gitlab forge implementation for querying issues.

      Quickly ran into an issue where gitlab.gnome.org is not servicing requests for the API due to Anubis. Bart thankfully updated things to allow our API requests with PRIVATE-TOKEN to pass through.

      Since validating API authorization tokens is one of the most optimized things in web APIs, this is probably of little concern to the blocking of AI scrapers.

    • Gitlab user, project, issues abstractions

    • Start loading GitlabProject after querying API system for the actual project-id from the primary git remote path part.

    • Support finding current project

    • Support listing issues for current project

    • Storage of API keys in a generic fashion using libsecret. Forges will take advantage of this to set a key for host/service pair.

    • Start on translate API for files in/out of SDKs/build environments. Flatpak and Podman still need implementations.

    • PluginGitlabListing can now pre-fetch pages when the last item has been queried from the list model. This will allow GtkListView to keep populating in the background while you scroll.

    • mdoc command helper to prototype discovery of markdown docs for use in interesting places. Also prototyped some markdown->ansi conversion for a nice man-like replacement in the console.

    • Work on a new file search API for Foundry which matches a lot of what Builder will need for its search panel. Grep as a service basically with lots of GListModel and thread-pool trickery.

    • Add foundry grep which uses the foundry_file_manager_search() API for testing. Use it to try to improve the non-UTF-8 support you can run into when searching files/disks where that isn’t used.

    • Cleanup build system for plugins to make it obvious what is happening

    • Setup include/exclude globing for file searches (backed by grep)

    • Add abstraction for search providers in FoundryFileSearchProvider with the default fallback implementation being GNU grep. This will allow for future expansion into tooling like rg (ripgrep) which provides some nice performance and tooling like --json output. One of the more annoying parts of using grep is that it is so different per-platform. For example, we really want --null from GNU grep so that we get a \0 between the file path and the content as any other character could potentially fall within the possible filename and/or encoding of files on disk.

      Whereas with ripgrep we can just get JSON and make each search result point to the parsed JsonNode and inflate properties from that as necessary.
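    To make the --null argument concrete, here is a small sketch (assumes GNU grep; the file name deliberately contains a ':', the separator grep would otherwise print after the path):

```shell
# A file whose name contains ':', the character grep normally prints
# between the file path and the matched line
printf 'hello world\n' > 'weird:name.txt'

# With --null, a \0 byte follows the file name instead; pipe through tr
# to make the NUL visible as '|'
grep --null -Hn 'hello' 'weird:name.txt' | tr '\0' '|'
```

    With the plain ':' separator, a parser could not tell where the file name ends; the NUL byte can never occur in a path, so the split is unambiguous.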

    • Add a new IntentManager, IntentHandler, and Intent system so that we can allow applications to handle policy differently with a plugin model w/ priorities. This also allows for a single system to be able to dispatch differently when opening directories vs files to edit.

      This turned out quite nice and might be a candidate to have lower in the platform for writing applications.

    • Add a FoundrySourceBuffer comment/uncomment API to make this easily available to editors using Foundry. Maybe this belongs in GSV someday.

    Ptyxis

    • Fix shortcuts window issue where it could potentially be shown again after being destroyed. Backport to 48/49.

    • Fix issue with background cursor blink causing transparency to break.

    Builder

    • Various prototype work to allow further implementation of Foundry APIs for the on-foundry rewrite. New directory listing, forge issue list, intent handlers.

    Other

    • Write up the situation with Libpeas and GObject Introspection for GNOME/Fedora.

    Research

    • Started looking into various JMAP protocols. I’d really like to get myself off my current email configuration but it’s been around for decades and that’s a hard transition.


      Sam Thursfield: Avoid the Fedora kernel 6.16.8-200.fc42.x86_64

      news.movim.eu / PlanetGnome • 5 days ago - 11:00

    Good morning!

    I spent some time figuring out why my build PC was running so slowly today. Thanks to some help from my very smart colleagues I came up with this testcase in Nushell to measure CPU performance:

    ~: dd if=/dev/random of=./test.in bs=(1024 * 1024) count=10
    10+0 records in
    10+0 records out
    10485760 bytes (10 MB, 10 MiB) copied, 0.0111184 s, 943 MB/s
    ~: time bzip2 test.in
    0.55user 0.00system 0:00.55elapsed 99%CPU (0avgtext+0avgdata 8044maxresident)k
    112inputs+20576outputs (0major+1706minor)pagefaults 0swap

    We are copying 10MB of random data into a file and compressing it with bzip2. 0.55 seconds is a pretty good time to compress 10MB of data with bzip2.
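    For anyone not using Nushell, the same testcase in plain POSIX shell looks like this (a sketch; assumes dd and bzip2 are installed):

```shell
# Write 10 MiB of random data, then time how long bzip2 takes on it
dd if=/dev/random of=test.in bs=1048576 count=10 2>/dev/null
time bzip2 -f test.in   # replaces test.in with test.in.bz2
```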

    But! As soon as I ran a virtual machine, this same test started to take 4 or 5 seconds, both on the host and in the virtual machine.

    There is already a new Fedora kernel available, and with that version (6.17.4-200.fc42.x86_64) I don’t see any problems. I guess it was some issue affecting AMD Ryzen virtualization that has since been fixed.

    Have a fun day!