
      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 3 months ago • 3 minutes

    Open Forms is now 0.4.0 - and the GUI Builder is here

    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, the chip failed, Wi-Fi gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

    Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." And you'd be completely wrong. To me it was normal, until my sister put it bluntly: "who even thought JSON for such a basic thing is a good idea, who'd even write one?" She was right. I knew it, so fixing this was always on the roadmap, and 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.

    Open forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

    If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you like editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.

    Also, thanks to Felipe and everyone else who shared great ideas about improving maintainability. JSON might become technical debt down the line, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

    See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
    • Maybe and just maybe an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub


      Andy Wingo: wastrel milestone: full hoot support, with generational gc as a treat

      news.movim.eu / PlanetGnome • 13:48 • 7 minutes

    Hear ye, hear ye: Wastrel and Hoot means REPL!

    Which is to say, Wastrel can now make native binaries out of WebAssembly files as produced by the Hoot Scheme toolchain, up to and including a full read-eval-print loop. Like the REPL on the Hoot web page, but instead of requiring a browser, you can just run it on your console. Amazing stuff!

    try it at home

    First, we need the latest Hoot. Build it from source, then compile a simple REPL:

    echo '(import (hoot repl)) (spawn-repl)' > repl.scm
    ./pre-inst-env hoot compile -fruntime-modules -o repl.wasm repl.scm
    

    This takes about a minute. The resulting wasm file has a pretty full standard library including a full macro expander and evaluator.

    Normally Hoot would do some aggressive tree-shaking to discard any definitions not used by the program, but with a REPL we don’t know what we might need. So, we pass -fruntime-modules to instruct Hoot to record all modules and their bindings in a central registry, so they can be looked up at run-time. This results in a 6.6 MB Wasm file; with tree-shaking we would have been at 1.2 MB.

    Next, build Wastrel from source, and compile our new repl.wasm:

    wastrel compile -o repl repl.wasm
    

    This takes about 5 minutes on my machine: about 3 minutes to generate all the C (about 6.6 MLOC in all, split into a couple hundred files of about 30 KLOC each), and then 2 minutes to compile with GCC and link-time optimization (parallelised over 32 cores in my case). I have some ideas to golf the first part down a bit, but the GCC side will resist improvements.

    Finally, the moment of truth:

    $ ./repl
    Hoot 0.8.0
    
    Enter `,help' for help.
    (hoot user)> "hello, world!"
    => "hello, world!"
    (hoot user)>
    

    statics

    When I first got the REPL working last week, I gasped out loud: it’s alive, it’s alive!!! Now that some days have passed, I am finally able to look a bit more dispassionately at where we’re at.

    Firstly, let’s look at the compiled binary itself. By default, Wastrel passes the -g flag to GCC, which results in binaries with embedded debug information. Which is to say, my ./repl is chonky: 180 MB!! Stripped, it’s “just” 33 MB. 92% of that is in the .text (code) section. I would like a smaller binary, but it’s what we got for now: each byte in the Wasm file corresponds to around 5 bytes in the x86-64 instruction stream.

    As for dependencies, this is a pretty minimal binary, though dynamically linked to libc:

    linux-vdso.so.1 (0x00007f6c19fb0000)
    libm.so.6 => /gnu/store/…-glibc-2.41/lib/libm.so.6 (0x00007f6c19eba000)
    libgcc_s.so.1 => /gnu/store/…-gcc-15.2.0-lib/lib/libgcc_s.so.1 (0x00007f6c19e8d000)
    libc.so.6 => /gnu/store/…-glibc-2.41/lib/libc.so.6 (0x00007f6c19c9f000)
    /gnu/store/…-glibc-2.41/lib/ld-linux-x86-64.so.2 (0x00007f6c19fb2000)
    

    Our compiled ./repl includes a garbage collector from Whippet, about which more in a minute. For now, we just note that our use of Whippet introduces no run-time dependencies.

    dynamics

    Just running the REPL with WASTREL_PRINT_STATS=1 in the environment, it seems that the REPL has a peak live data size of 4MB or so, but for some reason uses 15 MB total. It takes about 17 ms to start up and then exit.

    The numbers I give are consistent across garbage collector implementations: the default --gc=stack-conservative-parallel-generational-mmc, the non-generational stack-conservative-parallel-mmc, or the Boehm-Demers-Weiser bdw. Benchmarking collectors is a bit gnarly because the dynamic heap growth heuristics aren’t the same between the various collectors; by default, the heap grows to 15 MB or so with all collectors, but whether a collector chooses to collect or expand the heap in response to allocation affects startup timing. I get the above startup numbers by setting GC_OPTIONS=heap-size=15m,heap-size-policy=fixed in the environment.
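    Concretely, both knobs are plain environment variables; a run along these lines reproduces the fixed-heap measurement. This is an invocation recipe, not output I'm claiming: it assumes the ./repl binary built above.

```
# Invocation sketch: assumes the ./repl compiled earlier in this post.
# WASTREL_PRINT_STATS=1 asks the runtime to print GC statistics on exit;
# GC_OPTIONS pins the heap at 15 MB so the collectors are comparable.
WASTREL_PRINT_STATS=1 \
GC_OPTIONS=heap-size=15m,heap-size-policy=fixed \
./repl </dev/null
```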

    Hoot implements Guile Scheme, so we can also benchmark Hoot against Guile. Given the following test program that sums the leaf values for ten thousand quad trees of height 5:

    (define (quads depth)
      (if (zero? depth)
          1
          (vector (quads (- depth 1))
                  (quads (- depth 1))
                  (quads (- depth 1))
                  (quads (- depth 1)))))
    (define (sum-quad q)
      (if (vector? q)
          (+ (sum-quad (vector-ref q 0))
             (sum-quad (vector-ref q 1))
             (sum-quad (vector-ref q 2))
             (sum-quad (vector-ref q 3)))
          q))
    
    (define (sum-of-sums n depth)
      (let lp ((n n) (sum 0))
        (if (zero? n)
            sum
            (lp (- n 1)
                (+ sum (sum-quad (quads depth)))))))
    
    
    (sum-of-sums #e1e4 5)
    

    We can cat it to our repl to see how we do:

    Hoot 0.8.0
    
    Enter `,help' for help.
    (hoot user)> => 10240000
    (hoot user)>
    Completed 3 major collections (281 minor).
    4445.267 ms total time (84.214 stopped); 4556.235 ms CPU time (189.188 stopped).
    0.256 ms median pause time, 0.272 p95, 7.168 max.
    Heap size is 28.269 MB (max 28.269 MB); peak live data 9.388 MB.
    

    That is to say, 4.44s, of which 0.084s was spent in garbage collection pauses. The default collector configuration is generational, which can result in some odd heap growth patterns; as it happens, this workload runs fine in a 15MB heap. Pause time as a percentage of total run-time is very low, so all the various GCs perform the same, more or less; we seem to be benchmarking eval more than the GC itself.

    Is our Wastrel-compiled repl performance good? Well, we can evaluate it in two ways. Firstly, against Chrome or Firefox, which can run the same program: if I paste the above program into the REPL over at the Hoot web site, it takes about 5 or 6 times as long to complete, respectively. Wastrel wins!

    I can also try this program under Guile itself: if I eval it in Guile, it takes about 3.5s. Granted, Guile’s implementation of the same source language is different, and it benefits from a number of representational tricks, for example using just two words for a pair instead of four on Hoot+Wastrel. But these numbers are in the same ballpark, which is heartening. Compiling the test program instead of interpreting is about 10× faster with both Wastrel and Guile, with a similar relative ratio.

    Finally, I should note that Hoot’s binaries are pretty well optimized in many ways, but not in all the ways. Notably, they use too many locals, and the post-pass to fix this is unimplemented, and last time I checked (a long time ago!), wasm-opt didn’t work on our binaries. I should take another look some time.

    generational?

    This week I dotted all the t’s and crossed all the i’s to emit write barriers when we mutate the value of a field to store a new GC-managed data type, allowing me to enable the sticky mark-bit variant of the Immix-inspired mostly-marking collector. It seems to work fine, though this kind of generational collector still baffles me sometimes.

    With all of this, Wastrel’s GC-using binaries use a stack-conservative, parallel, generational collector that can compact the heap as needed. This collector supports multiple concurrent mutator threads, though Wastrel doesn’t do threading yet. Other collectors can be chosen at compile-time, though always-moving collectors are off the table because Wastrel doesn’t emit stack maps.

    The neat thing is that any language that compiles to Wasm can have any of these collectors! And when the Whippet GC library gets another collector or another mode on an existing collector, you can have that too.

    missing pieces

    The biggest missing piece for Wastrel and Hoot is some kind of asynchrony, similar to JavaScript Promise Integration (JSPI), and somewhat related to stack switching. You want Wasm programs to be able to wait on external events, and Wastrel doesn’t support that yet.

    Other than that, it would be lovely to experiment with Wasm shared-everything threads at some point.

    what’s next

    So I have an ahead-of-time Wasm compiler. It does GC and lots of neat things. Its performance is state-of-the-art. It implements a few standard libraries, including WASI 0.1 and Hoot. It can make a pretty good standalone Guile REPL. But what the hell is it for?

    Friends, I... I don’t know! It’s really cool, but I don’t yet know who needs it. I have a few purposes of my own (pushing Wasm standards, performance work on Whippet, etc.), but if you or someone you know needs a wastrel, do let me know at wingo@igalia.com: I would love to be able to spend more time hacking in this area.

    Until next time, happy compiling to all!


      Thibault Martin: TIL that Helix and Typst are a match made in heaven

      news.movim.eu / PlanetGnome • 8:00 • 1 minute

    I love Markdown with all my heart. It's a markup language so simple to understand that even people who are not software engineers can use it in a few minutes.

    The flip side of that coin is that Markdown is limited. It can let you create various title levels, bold, italics, strikethrough, tables, links, and a bit more, but not much beyond that.

    When it comes to more complex documents, most people resort to a full-fledged office suite like Microsoft Office or LibreOffice. Both have their merits, but office file formats are brittle and heavy.

    The alternative is to use another, more capable markup language. Academics used to be into LaTeX, but it's often tedious to use. Typst emerged more recently as a simpler yet powerful markup language for creating well-formatted documents.

    Tinymist is a language server for Typst. It provides the usual language server services: semantic highlighting, code actions, formatting, etc.

    But it really stands out by providing a live preview feature that keeps your cursor in sync in Helix when you are clicking around in the live preview!

    I only had to install it with

    $ brew install tinymist
    

    I then configured Helix to use tinymist for Typst documents, enabling live preview along the way. This happens, of course, in ~/.config/helix/languages.toml:

    [language-server.tinymist]
    command = "tinymist"
    config = { preview.background.enabled = true, preview.background.args = ["--data-plane-host=127.0.0.1:23635", "--invert-colors=never", "--open"] }
    
    [[language]]
    name = "typst"
    language-servers = ["tinymist"]
    

    A warm thank you to my lovely friend Felix for showing me the live preview mode of tinymist!


      Michael Meeks: 2026-04-07 Tuesday

      news.movim.eu / PlanetGnome • 1 day ago

    • Up early, mail chew, planning call, sync with Julie & Thorsten. Lunch, catch-up with Anna & Andras.
    • Poked at an update in response to TDF's blog with Chris.
    • TDF board call, Staff + Board + MC with no community questions - surreal, Dennis arrived but too late for the questions I suppose.
    • Read Die Lösung, which someone pointed me to.

      Thibault Martin: TIL that Git can locally ignore files

      news.movim.eu / PlanetGnome • 2 days ago • 1 minute

    When editing markdown, I love using Helix (best editor in the world). I rely on three language servers to help me do it:

    • rumdl to check markdown syntax and enforce the rules decided by a project
    • marksman to get assistance when creating links
    • harper-ls to check for spelling or grammar mistakes

    All of these are configured in my ~/.config/helix/languages.toml configuration file, so they apply globally to all the markdown I edit. But when I edit This Week In Matrix at work, things are different.

    To edit those posts, we let our community report their progress in a Matrix room, then collect the reports into a markdown file that we editorialize. This is a perfect fit for Helix (best editor in the world) and its language servers.

    Helix has two features that make it a particularly good fit for the job:

    1. The diagnostics view
    2. Jumping to next error with ]d

    It is possible to filter pickers, but it becomes tedious to do so. For this project specifically, I want to disable harper-ls entirely. Helix supports per-project configuration by creating a .helix/languages.toml file at the project's root.
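    A minimal per-project override might look like the sketch below. The exact contents are my assumption, not this project's actual config: it just lists the two servers I still want and omits harper-ls.

```shell
# Sketch of a per-project Helix override, written from the project root.
# The server list is illustrative: keep rumdl and marksman, drop harper-ls.
mkdir -p .helix
cat > .helix/languages.toml <<'EOF'
[[language]]
name = "markdown"
language-servers = ["rumdl", "marksman"]
EOF
cat .helix/languages.toml
```

    Helix picks this file up from the project root, and for this project it takes precedence over the global configuration.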

    It's a good solution to override my default config, but now I have an extra .helix directory that git wants to track. I could add it to the .gitignore, but that would also add it to everyone else's .gitignore, even if they don't use Helix (best editor in the world) yet.

    It turns out that there is a local-only equivalent to .gitignore, and it's .git/info/exclude. The syntax is the same as .gitignore, but it's not committed.
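    As a quick sanity check in a throwaway repository (the paths here are illustrative), appending the directory to .git/info/exclude makes git stop reporting it, and nothing gets committed or shared:

```shell
# Throwaway demo: .git/info/exclude ignores .helix/ in this clone only.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
mkdir -p .helix && echo 'stub' > .helix/languages.toml
git status --porcelain                 # shows: ?? .helix/
echo '.helix/' >> .git/info/exclude
git status --porcelain                 # shows nothing: locally ignored
```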

    I can't believe I didn't need this earlier in my life.


      Thibault Martin: TIL that You can filter Helix pickers

      news.movim.eu / PlanetGnome • 2 days ago

    Helix has a system of pickers: pop-up windows to open files, or to browse diagnostics coming from a language server.

    The diagnostics picker displays data in columns:

    • severity
    • source
    • code
    • message

    Sometimes it can get very crowded, especially when you have plenty of hints but few actual errors. I didn't know it, but Helix supports filtering in pickers!

    By typing %severity WARN I only get warnings. I can even shorten it to %se (and not %s, since source also starts with an s). The full syntax is well documented in the pickers documentation.


      Andy Wingo: the value of a performance oracle

      news.movim.eu / PlanetGnome • 2 days ago • 5 minutes

    Over on his excellent blog, Matt Keeter posts some results from having ported a bytecode virtual machine to tail-calling style. He finds that his tail-calling interpreter written in Rust beats his switch-based interpreter, and even beats hand-coded assembly on some platforms.

    He also compares tail-calling versus switch-based interpreters on WebAssembly, and concludes that performance of tail-calling interpreters in Wasm is terrible:

    1.2× slower on Firefox, 3.7× slower on Chrome, and 4.6× slower in wasmtime. I guess patterns which generate good assembly don't map well to the WASM stack machine, and the JITs aren't smart enough to lower it to optimal machine code.

    In this article, I would like to argue the opposite: patterns that generate good assembly map just fine to the Wasm stack machine, and the underperformance of V8, SpiderMonkey, and Wasmtime is an accident.

    some numbers

    I re-ran Matt’s experiment locally on my x86-64 machine (AMD Ryzen Threadripper PRO 5955WX). I tested three toolchains:

    • Compiled natively via cargo / rustc

    • Compiled to WebAssembly, then run with Wasmtime

    • Compiled to WebAssembly, then run with Wastrel

    For each of these toolchains, I tested Raven as implemented in Rust in both “switch-based” and “tail-calling” modes. Additionally, Matt has a Raven implementation written directly in assembly; I test this as well, for the native toolchain. All results use nightly/git toolchains from 7 April 2026.

    My results confirm Matt’s for the native and wasmtime toolchains, but wastrel puts them in context:

    Bar charts showing native, wasmtime, and wastrel scenarios testing tail-calling versus switch implementations; wasmtime slows down for tail-calling, whereas wastrel speeds up.

    We can read this chart from left to right: a switch-based interpreter written in Rust is 1.5× slower than a tail-calling interpreter, and the tail-calling interpreter just about reaches the speed of hand-written assembler. (Testing on AArch64, Matt even sees the tail-calling interpreter beating his hand-written assembler.)

    Then moving to WebAssembly run using Wasmtime, we see that Wasmtime takes 4.3× as much time to run the switch-based interpreter, compared to the fastest run from the hand-written assembler, and worse, actually shows 6.5× overhead for the tail-calling interpreter. Hence Matt’s conclusions: there must be something wrong with WebAssembly.

    But if we compare to Wastrel, we see a different story: Wastrel runs the basic interpreter with 2.4× overhead, and the tail-calling interpreter improves on this marginally, with a 2.3× overhead. Now, granted, two-point-whatever-× is not one; Matt’s Raven VM still runs slower in Wasm than when compiled natively. Still, a tail-calling interpreter is inherently a pretty good idea.

    where does the time go

    When I think about it, there’s no reason that the switch-based interpreter should be slower when compiled via Wastrel than when compiled via rustc. Memory accesses via Wasm should actually be cheaper due to 32-bit pointers, and all the rest of it should be pretty much the same. I looked at the assembly that Wastrel produces, and I see most of the patterns that I would expect.

    I do see, however, that Wastrel repeatedly reloads a struct memory value, containing the address (and size) of main memory. I need to figure out a way to keep this value in registers. I don’t know what’s up with the other Wasm implementations here; for Wastrel, I get 98% of time spent in the single interpreter function, and surely this is bread-and-butter for an optimizing compiler such as Cranelift. I tried pre-compilation in Wasmtime but it didn’t help. It could be that there is a different Wasmtime configuration that allows for higher performance.

    Things are more nuanced for the tail-calling VM. When compiling natively, Matt is careful to use a preserve_none calling convention for the opcode-implementing functions, which allows LLVM to allocate more registers to function parameters; this is just as well, as it seems that his opcodes have around 9 parameters. Wastrel currently uses GCC’s default calling convention, which only has 6 registers for non-floating-point arguments on x86-64, leaving three values to be passed via global variables (described here); this obviously will be slower than the native build. Perhaps Wastrel should add the equivalent annotation to tail-calling functions.

    On the one hand, Cranelift (and V8) are a bit more constrained than Wastrel by their function-at-a-time compilation model that privileges latency over throughput; and as they allow Wasm modules to be instantiated at run-time, functions are effectively closures, in which the “instance” is an additional hidden dynamic parameter. On the other hand, these compilers get to choose an ABI; last I looked into it, SpiderMonkey used the equivalent of preserve_none , which would allow it to allocate more registers to function parameters. But it doesn’t: you only get 6 register arguments on x86-64, and only 8 on AArch64. Something to fix, perhaps, in the Wasm engines, but also something to keep in mind when making tail-calling virtual machines: there are only so many registers available for VM state.

    the value of time

    Well friends, you know us compiler types: we walk a line between collegial and catty. In that regard, I won’t deny that I was delighted when I saw the Wastrel numbers coming in better than Wasmtime! Of course, most of the credit goes to GCC; Wastrel is a relatively small wrapper on top.

    But my message is not about the relative worth of different Wasm implementations. Rather, it is that performance oracles are a public good: a fast implementation of a particular algorithm is of use to everyone who uses that algorithm, whether they use that implementation or not.

    This happens in two ways. Firstly, faster implementations advance the state of the art, and through competition-driven convergence will in time result in better performance for all implementations. Someone in Google will see these benchmarks, turn them into an OKR, and golf their way to a faster web and also hopefully a bonus.

    Secondly, there is a dialectic between the state of the art and our collective imagination of what is possible, and advancing one will eventually ratchet the other forward. We can forgive the conclusion that “patterns which generate good assembly don’t map well to the WASM stack machine” as long as Wasm implementations fall short; but having shown that good performance is possible, our toolkit of applicable patterns in source languages also expands to new horizons.

    Well, that is all for today. Until next time, happy hacking!


      Michael Meeks: 2026-04-06 Monday

      news.movim.eu / PlanetGnome • 2 days ago

    • Up early, out for a run with J. poked at mail, and web bits. hmm. Contemplated the latest barrage of oddity from TDF.
    • Set to cutting out cardboard pieces to position tools in the workshop with H. spent quite some time re-arranging the workshop to try to fit everything in. Binned un-needed junk, moved things around happily. Mended an old light - a wedding present from a friend.
    • Relaxed with J. and watched Upload in the evening, tired.

      Jussi Pakkanen: Sorting performance rabbit hole

      news.movim.eu / PlanetGnome • 3 days ago • 2 minutes

    In an earlier blog post we found out that Pystd's simple sorting algorithm implementations were 5-10% slower than their libstdc++ counterparts. The obvious follow-up nerd snipe is to ask "can we make the Pystd implementation faster than libstdc++?"

    For all tests below the data set used was 10 million consecutive 64 bit integers shuffled in a random order. The order was the same for all algorithms.

    Stable sort

    It turns out that the answer for stable sorting is "yes, surprisingly easily". I made a few obvious tweaks (whose details I don't even remember any more) and got the runtime down to 0.86 seconds. This is approximately 5% faster than std::stable_sort. Done. Onwards to unstable sort.

    Unstable sort

    This one was not, as they say, a picnic. I suspect that stdlib developers have spent more time optimizing std::sort than std::stable_sort simply because it is used a lot more.

    After all the improvements I could think of were done, Pystd's implementation was consistently 5-10% slower. At this point I started cheating and examined how libstdc++'s implementation worked, to see if there were any optimization ideas to steal. Indeed there were, but they did not help.

    Pystd's insertion sort moves elements by pairwise swaps. Libstdc++ does it by moving the last item to a temporary, shifting the array elements onwards and then moving the stored item to its final location. I implemented that. It made things slower.

    Libstdc++'s moves use memmove instead of copying (at least according to code comments). I implemented that. It made things slower.

    Then I implemented shell sort to see if it made things faster. It didn't. It made them a lot slower.

    Then I reworked the way pivot selection is done and realized that if you do it in a specific way, some elements move to their correct partitions as a side effect of median selection. I implemented that and it did not make things faster. It did not make them slower, either, but the end result should be more resistant against bad pivot selection so I left it in.

    At some point the implementation grew a bug which only appeared with very large data sets. For debugging purposes I reduced the limit where introsort switches from quicksort to insertion sort from 16 to 8. I got the bug fixed, but the change made sorting a lot slower. As it should.

    But this raises a question, namely would increasing the limit from 16 to 32 make things faster? It turns out that it did. A lot. Out of all perf improvements I implemented, this was the one that yielded the biggest improvement. By a lot. Going to 64 elements made it even faster, but that made other algorithms using insertion sort slower, so 32 it is. For now at least.

    After a few final tweaks I managed to finally beat libstdc++. By how much, you ask? Pystd's best observed time was 0.754 seconds while libstdc++'s was 0.755 seconds. And it happened only once. But that's enough for me.