

      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 3 months ago • 3 minutes

    Open Forms is now 0.4.0 - and the GUI Builder is here

    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, the chip failed, Wi-Fi gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

    Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." And you'd be completely wrong. To me it felt normal, until my sister put it this way: "who even thought JSON for such a basic thing was a good idea, who'd even write one?" She was right. I knew it too, so fixing it was always on the roadmap, and 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.

    Open forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

    If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you like editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.
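For a sense of what the builder produces, a config in roughly this shape is plausible; the field names below are illustrative guesses, not the actual Open Forms schema:

```json
{
  "title": "Booth feedback",
  "fields": [
    { "type": "text", "label": "Name", "required": false },
    { "type": "choice", "label": "How did you hear about us?",
      "options": ["Talk", "Friend", "Social media"] },
    { "type": "multiline", "label": "Comments", "required": false }
  ]
}
```

Whether you write a file like this by hand or let the builder generate it, the app consumes the same format.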

    Also, thanks to Felipe and everyone else who shared great ideas about improving maintainability. JSON might become technical debt in the future, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

    See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
    • Maybe, just maybe, an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub


      Jussi Pakkanen: Simple sort implementations vs production quality ones

      news.movim.eu / PlanetGnome • 13:49 • 2 minutes

    One of the most optimized algorithms in any standard library is sorting. It is used everywhere so it must be fast. Thousands upon thousands of developer hours have been sunk into inventing new algorithms and making sort implementations faster. Pystd has a different design philosophy where fast compilation times and readability of the implementation have higher priority than absolute performance. Perf still very much matters – it has to be fast, but not at the cost of 10x compilation time.

    This leads to the natural question of how much slower such an implementation would be compared to a production quality one. Could it even be faster? (Spoilers: no) The only way to find out is to run performance benchmarks on actual code.

    To keep things simple there is only one test set: sorting 10'000'000 consecutive 64 bit integers that have been shuffled into a random order, which is the same for all algorithms. This is not an exhaustive test by any means, but you have to start somewhere. All tests used GCC 15.2 with -O2 optimization. Pystd code was not thoroughly hand optimized; I only fixed (some of) the obvious hotspots.

    Stable sort

    Pystd uses mergesort for stable sorting. The way the C++ standard specifies stable sort means that most implementations probably use it as well; I did not dive into the code to find out. Pystd's merge sort implementation consists of ~220 lines of code. It can be read on this page.
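As a rough illustration of the algorithm (a minimal Python sketch for readability, not Pystd's actual C++ code, which sorts in place), the core of a stable merge sort is short:

```python
def merge_sort(a):
    """Stable merge sort: equal elements keep their original order."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # '<=' rather than '<' is what makes the sort stable
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    # one side is exhausted; the other is already sorted
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

A production implementation adds insertion sort for tiny runs and avoids the allocations, but the control flow is essentially this.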

    Stdlibc++ can do the sort in 0.9 seconds whereas Pystd takes 0.94 seconds. Getting to within 5% with such a simple implementation is actually quite astonishing, even considering all the usual caveats, such as that it might completely fall over with a different input data distribution.

    Regular sort

    Both stdlibc++ and Pystd use introsort. Pystd's implementation is ~150 lines of code, plus a further ~100 lines for the heapsort fallback. Code for introsort is here, and heapsort is here.
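The idea behind introsort is easy to sketch: quicksort until the recursion depth exceeds about 2·log2(n), then fall back to heapsort so the worst case stays O(n log n). A toy Python version of that idea (not Pystd's implementation, which partitions in place):

```python
import heapq
import math
import random

def introsort(a):
    """Quicksort with a depth limit; past the limit, bail out to heapsort."""
    def heapsort(xs):
        heapq.heapify(xs)
        return [heapq.heappop(xs) for _ in range(len(xs))]

    def sort(xs, depth):
        if len(xs) <= 1:
            return xs
        if depth == 0:
            # pathological pivot choices: guarantee O(n log n) anyway
            return heapsort(xs)
        pivot = random.choice(xs)
        lo = [x for x in xs if x < pivot]
        eq = [x for x in xs if x == pivot]
        hi = [x for x in xs if x > pivot]
        return sort(lo, depth - 1) + eq + sort(hi, depth - 1)

    limit = 2 * max(1, int(math.log2(max(len(a), 2))))
    return sort(list(a), limit)
```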

    Stdlibc++ gets the sort done in 0.76 seconds whereas Pystd takes 0.82 seconds. This makes it approximately 8% slower. It's not great, but getting within 10% with a few evenings' work is still a pretty good result. Especially since, and I'm speculating here, std::sort has seen a lot more optimization work than std::stable_sort because it is used more.

    For heavy duty number crunching this would be way too slow. But for moderate data set sizes the performance difference might be insignificant for many use cases.

    Note that all of these are faster (note: did not measure) than libc's qsort, because it requires an indirect function call on every comparison, i.e. the comparison method cannot be inlined.

    Where does the time go?

    Valgrind will tell you that quite easily.

    This picture shows quite clearly why big O notation can be misleading. Both quicksort (the inner loop of introsort) and heapsort have "the same" average time complexity but every call to heapsort takes approximately 4.5 times as long.


      Jakub Steiner: Friday Sketches (part 2)

      news.movim.eu / PlanetGnome • 0:00

    Two years have passed since I last shared my Friday app icon sketches, but the sketching itself hasn't stopped.

    For me, it's the best way to figure out the right metaphors before we move to final pixels. These sketches are just one part of the GNOME Design Team's wider effort to keep our icons consistent and meaningful—it is an endeavor that’s been going on for years.

    If you design a GNOME app following the GNOME Design Guidelines , feel free to request an icon to be made for you. If you are serious and apply for inclusion in GNOME Circle , you are way more likely to get a designer's attention.

    Ai Assistant Aria 1 Articulate Bazaar 1 Bazaar 2 Bazaar 3 Bazaar 4 Bazaar Bouncer Carburetor 1 Carburetor 2 Carburetor 3 Carburetor Censor 1 Censor 2 Censor CoBang Constrict 1 Constrict 2 Constrict Deer 1 Deer 2 Deer Dev Toolbox 1 Dev Toolbox 2 Dev Toolbox Digitakt 2 Displaytune Drafting 1 Drafting 2 Drafting Drum Machine 1 Drum Machine Elfin 1 Elfin Exercise Timer Field Monitor 1 Field Monitor 2 Field Monitor 3 Field Monitor Gamepad Mirror Gelly Gems GNOME OS Installer 1 GNOME OS Installer 2 GNOME OS Installer Gnurd Gradia Identity 1 Identity 2 Identity Letters 1 Letters M8 Gate Mango Juice Concepts Memories 1 Memories 2 Memories 3 Memories 4 Memories Mercator 1 Mercator Meshtastic Chat Millisecond 1 Millisecond 2 Millisecond Mini Screenshot 1 Mini Screenshot 2 Mini Screenshot Mixtape 1 Mixtape 2 Mixtape 3 Mixtape 4 Mixtape Motivation Moviola 1 Moviola 2 Moviola Mutter Viewer Nucleus Passwords Pavucontrol Poliedros Push Reflection 1 Reflection 2 Reflection 3 Reflection Aria Rissole 1 Rissole 2 Rissole Rotor SSH Studio 1 SSH Studio Scriptorium Scrummy 1 Scrummy 2 Scrummy 3 Scrummy 4 Scrummy 5 Scrummy 6 Scrummy Serial Console 1 Serial Console Shaper 1 Shaper 2 Shaper 3 Shaper Sitra 1 Sitra Sitra 1 Sitra 2 Sitra Solitaire 1 Solitaire 2 Solitaire 3 Solitaire Tablets Tabs Template Tomodoro 1 Tomodoro 2 Tomodoro Twofun Typesetter 1 Typesetter 2 Typesetter Typester Typewriter Vocalis 1 Vocalis Wardrobe 1 Wardrobe 2 Wardrobe Web Apps eSIM

    Previously


      Colin Walters: LLMs and core software: human driven

      news.movim.eu / PlanetGnome • 1 day ago • 5 minutes

    It’s clear LLMs are one of the biggest changes in technology ever. The rate of progress is astounding: recently, due to a configuration mistake, I accidentally used Claude Sonnet 3.5 (released ~2 years ago) instead of Opus 4.6 for a task, looked at the output, and thought “what is this garbage?”

    But now, daily: Opus 4.6 is able to generate reasonable PoC-level Rust code for complex tasks for me. It’s not perfect – it’s a combination of exhausting and exhilarating to find the 10% of absolutely bonkers/broken code that still makes it past subagents.

    So yes I use LLMs every day, but I will be clear: if I could push a button to “un-invent” them I absolutely would because I think the long term issues in larger society (not being able to trust any media, and many of the things from Dario’s recent blog etc.) will outweigh the benefits.

    But since we can’t un-invent them: here’s my opinion on how they should be used. As a baseline, I agree with a lot from this doc from Oxide about LLMs . What I want to talk about is especially around some of the norms/tools that I see as important for LLM use, following principles similar to those.

    On framing: there’s “core” software vs “bespoke” software. An entirely new capability, of course, is for e.g. a nontechnical restaurant owner to use an LLM to generate (“vibe code”) a website (hopefully excepting online ordering and payments!). I’m not overly concerned about this.

    Whereas “core” software is what organizations/businesses provide and maintain for others. I work for a company (Red Hat) that produces a lot of it. I am sure no one would actually want to run an operating system, cluster filesystem, web browser, monitoring system etc. that was primarily “vibe coded”.

    And while I respect people and groups that are trying to entirely ban LLM use, I don’t think that’s viable for at least my space.

    Hence the subject of this blog is my perspective on how LLMs should be used for “core” software: not vibe coding, but using LLMs responsibly and intelligently – and always under human control and review.

    Agents should amplify and be controlled by humans

    I think most of the industry would agree we can’t give responsibility to LLMs. That means they must be overseen by humans. If they’re overseen by a human, then I think they should be amplifying what that human thinks/does as a baseline – intersected with the constraints of the task of course.

    On “amplification”: everyone using an LLM to generate content should inject their own system prompt (e.g. AGENTS.md) or equivalent. Here’s mine – notice I turn off all the emoji etc. and try hard to tune down bulleted lists, because that’s not my style. This is a truly baseline thing to do.

    Now most LLM generated content targeted for core software is still going to need review, but just ensuring that the baseline matches what the human does helps ensure alignment.

    Pull request reviews

    Let’s focus on a very classic problem: pull request reviews. Many projects have wired up a flow such that when a PR comes in, it gets reviewed by a model automatically. Many projects and tools pitch this. We use one on some of my projects.

    But I want to get away from this because in my experience these reviews are a combination of:

    • Extremely insightful and correct things (there’s some amazing fine-tuning and tool use that must have happened to find some issues pointed out by some of these)
    • Annoying nitpicks that no one cares about (not handling spaces in a filename in a shell script used for tests)
    • Broken stuff like getting confused by things that happened after its training cutoff (e.g. Gemini especially seems to get confused by referencing the current date, and also is unaware of newer Rust features, etc)

    In practice, we just want the first of course.

    How I think it should work:

    • A pull request comes in
    • It gets auto-assigned to a human on the team for review
    • A human contributing to that project is running their own agents (wherever: could be local or in the cloud) using their own configuration (but of course still honoring the project’s default development setup and the project’s AGENTS.md etc)
    • A new containerized/sandboxed agent may be spawned automatically, or perhaps the human needs to click a button to do so – or perhaps the human sees the PR come in and thinks “this one needs a deeper review, didn’t we hit a perf issue with the database before?” and adds that to a prompt for the agent.
    • The agent prepares a draft review that only the human can see.
    • The human reviews/edits the draft PR review, and has the opportunity to remove confabulations, add their own content etc. And to send the agent back to look more closely at some code (i.e. this part can be a loop)
    • When the human is happy they click the “submit review” button.
    • Goal: it is 100% clear what parts are LLM generated vs human generated for the reader.

    I wrote this agent skill to try to make this work well, and if you search you can see it in action in a few places, though I haven’t truly tried to scale this up.

    I think the above matches the vision of LLMs amplifying humans.

    Code Generation

    There’s no doubt that LLMs can be amazing code generators, and I use them every day for that. But for any “core” software I work on, I absolutely review all of the output – not just superficially, and changes to core algorithms very closely.

    At least in my experience the reality is still there’s that percentage of the time when the agent decided to reimplement base64 encoding for no reason, or disable the tests claiming “the environment didn’t support it” etc.

    And to me it’s still a baseline for “core” software to require another human review to merge (per above!) with their own customized LLM assisting them (ideally a different model, etc).

    FOSS vs closed

    Of course, my position here is biased a bit by working on FOSS – I still very much believe in that, and working in a FOSS context can be quite different than working in a “closed environment” where a company/organization may reasonably want to (and be able to) apply uniform rules across a codebase.

    While for sure LLMs allow organizations to create their own Linux kernel filesystems or bespoke Kubernetes forks or virtual machine runtime or whatever – it’s not clear to me that it is a good idea for most to do so. I think shared (FOSS) infrastructure that is productized by various companies, provided as a service and maintained by human experts in that problem domain still makes sense. And how we develop that matters a lot.


      Alberto Ruiz: Booting with Rust: Chapter 3

      news.movim.eu / PlanetGnome • 1 day ago • 5 minutes

    In Chapter 1 I gave the context for this project and in Chapter 2 I showed the bare minimum: an ELF that Open Firmware loads, a firmware service call, and an infinite loop.

    That was July 2024. Since then, the project has gone from that infinite loop to a bootloader that actually boots Linux kernels. This post covers the journey.

    The filesystem problem

    The Boot Loader Specification expects BLS snippets in a FAT filesystem under loaders/entries/. So the bootloader needs to parse partition tables, mount FAT, traverse directories, and read files. All #![no_std], all big-endian PowerPC.

    I tried writing my own minimal FAT32 implementation, then integrating simple-fatfs and fatfs . None worked well in a freestanding big-endian environment.

    Hadris

    The breakthrough was hadris, a no_std Rust crate supporting FAT12/16/32 and ISO9660. It needed some work to get going on PowerPC, though. I submitted fixes upstream for:

    • thiserror pulling in std : default features were not disabled, preventing no_std builds.
    • Endianness bug : the FAT table code read cluster entries as native-endian u32 . On x86 that’s invisible; on big-endian PowerPC it produced garbage cluster chains.
    • Performance : every cluster lookup hit the firmware’s block I/O separately. I implemented a 4MiB readahead cache for the FAT table, made the window size parametric at build time, and improved read_to_vec() to coalesce contiguous fragments into a single I/O. This made kernel loading practical.

    All patches were merged upstream.
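The endianness bug is easy to reproduce in miniature. FAT stores cluster entries as little-endian u32 on disk, so code that reads them with native byte order works by accident on x86 and produces garbage on big-endian PowerPC. An illustrative sketch (not the hadris code):

```python
import struct

# A FAT cluster entry as it sits on disk: little-endian u32.
on_disk = struct.pack("<I", 0x1234)

# Explicit little-endian decode: correct on every host.
correct = struct.unpack("<I", on_disk)[0]

# What a naive "native" read does on a big-endian machine:
wrong = struct.unpack(">I", on_disk)[0]

assert correct == 0x1234
assert wrong == 0x34120000  # a garbage cluster number
```

Explicit-endian types (or explicit decode calls like the `"<I"` above) make the byte order part of the type, so the bug cannot compile in by accident.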

    Disk I/O

    Hadris expects Read + Seek traits. I wrote a PROMDisk adapter that forwards to OF’s read and seek client calls, and a Partition wrapper that restricts I/O to a byte range. The filesystem code has no idea it’s talking to Open Firmware.

    Partition tables: GPT, MBR, and CHRP

    PowerVM with modern disks uses GPT (via the gpt-parser crate): a PReP partition for the bootloader and an ESP for kernels and BLS entries.

    Installation media uses MBR. I wrote a small mbr-parser subcrate using explicit-endian types so little-endian LBA fields decode correctly on big-endian hosts. It recognizes FAT32, FAT16, EFI ESP, and CHRP (type 0x96) partitions.

    The CHRP type is what CD/DVD boot uses on PowerPC. For ISO9660 I integrated hadris-iso with the same Read + Seek pattern.

    Boot strategy? Try GPT first, fall back to MBR, then try raw ISO9660 on the whole device (CD-ROM). This covers disk, USB, and optical media.

    The firmware allocator wall

    This cost me a lot of time.

    Open Firmware provides claim and release for memory allocation. My initial approach was to implement Rust’s GlobalAlloc by calling claim for every allocation. This worked fine until I started doing real work: parsing partitions, mounting filesystems, building vectors, sorting strings. The allocation count went through the roof and the firmware started crashing.

    It turns out SLOF has a limited number of tracked allocations. Once you exhaust that internal table, claim either fails or silently corrupts state. There is no documented limit; you discover it when things break.

    The fix was to claim a single large region at startup (1/4 of physical RAM, clamped to 16-512 MB) and implement a free-list allocator on top of it with block splitting and coalescing. Getting this right was painful: the allocator handles arbitrary alignment, coalesces adjacent free blocks, and does all this without itself allocating. Early versions had coalescing bugs that caused crashes which were extremely hard to debug – no debugger, no backtrace, just writing strings to the OF console on a 32-bit big-endian target.
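The bookkeeping of such a free-list allocator is simple to sketch (a toy Python model for illustration only; the real allocator is no_std Rust, manages raw memory, and also handles arbitrary alignment, which this sketch omits):

```python
class FreeListAllocator:
    """Toy free-list allocator over one pre-claimed region.
    Splits blocks on alloc, coalesces adjacent free blocks on free.
    Integer offsets stand in for pointers."""

    def __init__(self, size):
        self.free = [(0, size)]   # sorted list of (offset, length)
        self.used = {}            # offset -> length

    def alloc(self, n):
        for i, (off, length) in enumerate(self.free):
            if length >= n:
                if length > n:
                    # split: hand out the front, keep the remainder free
                    self.free[i] = (off + n, length - n)
                else:
                    del self.free[i]
                self.used[off] = n
                return off
        return None               # out of memory

    def dealloc(self, off):
        n = self.used.pop(off)
        self.free.append((off, n))
        self.free.sort()
        # coalesce adjacent free blocks so the region doesn't fragment
        merged = [self.free[0]]
        for o, l in self.free[1:]:
            prev_off, prev_len = merged[-1]
            if prev_off + prev_len == o:
                merged[-1] = (prev_off, prev_len + l)
            else:
                merged.append((o, l))
        self.free = merged
```

The hard part in the real thing is exactly what the toy hides: doing all of this on raw bytes, with alignment, without the allocator itself allocating.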

    And the kernel boots!

    March 7, 2026. The commit message says it all: “And the kernel boots!”

    The sequence:

    1. BLS discovery : walk loaders/entries/*.conf , parse into BLSEntry structs, filter by architecture ( ppc64le ), sort by version using rpmvercmp .

    2. ELF loading : parse the kernel ELF, iterate PT_LOAD segments, claim a contiguous region, copy segments to their virtual address offsets, zero BSS.

    3. Initrd : claim memory, load the initramfs.

    4. Bootargs : set /chosen/bootargs via setprop .

    5. Jump : inline assembly trampoline – r3=initrd address, r4=initrd size, r5=OF client interface, branch to kernel:

    core::arch::asm!(
        "mr 7, 3",   // save of_client
        "mr 0, 4",   // r0 = kernel_entry
        "mr 3, 5",   // r3 = initrd_addr
        "mr 4, 6",   // r4 = initrd_size
        "mr 5, 7",   // r5 = of_client
        "mtctr 0",
        "bctr",
        in("r3") of_client,
        in("r4") kernel_entry,
        in("r5") initrd_addr as usize,
        in("r6") initrd_size as usize,
        options(nostack, noreturn)
    )
    

    One gotcha: do NOT close stdout/stdin before jumping. On some firmware, closing them corrupts /chosen and the kernel hits a machine check. We also skip calling exit or release – the kernel gets its memory map from the device tree and avoids claimed regions naturally.

    The boot menu

    I implemented a GRUB-style interactive menu:

    • Countdown : boots the default after 5 seconds unless interrupted.
    • Arrow/PgUp/PgDn/Home/End navigation .
    • ESC : type an entry number directly.
    • e : edit the kernel command line with cursor navigation and word jumping (Ctrl+arrows).

    This runs on the OF console with ANSI escape sequences. Terminal size comes from OF’s Forth interpret service ( #columns / #lines ), with serial forced to 80×24 because SLOF reports nonsensical values.

    Secure boot (initial, untested)

    IBM POWER has its own secure boot: the ibm,secure-boot device tree property (0=disabled, 1=audit, 2=enforce, 3=enforce+OS). The Linux kernel uses an appended signature format – PKCS#7 signed data appended to the kernel file, same format GRUB2 uses on IEEE 1275.

    I wrote an appended-sig crate that parses the appended signature layout, extracts an RSA key from a DER X.509 certificate (compiled in via include_bytes! ), and verifies the signature (SHA-256/SHA-512) using the RustCrypto crates, all no_std .

    The unit tests pass, including an end-to-end sign-and-verify test. But I have not tested this on real firmware yet. It needs a PowerVM LPAR with secure boot enforced and properly signed kernels, which QEMU/SLOF cannot emulate. High on my list.

    The ieee1275-rs crate

    The crate has grown well beyond Chapter 2. It now provides: claim / release , the custom heap allocator, device tree access ( finddevice , getprop , instance-to-package ), block I/O, console I/O with read_stdin , a Forth interpret interface, milliseconds for timing, and a GlobalAlloc implementation so Vec and String just work.

    Published on crates.io at github.com/rust-osdev/ieee1275-rs .

    What’s next

    I would like to test the Secure Boot feature in an end-to-end setup, but I have not gotten around to requesting access to a PowerVM LPAR. Beyond that, I want to refine the menu. Another idea would be to support the equivalent of a Unified Kernel Image using ELF. Who knows – if anybody finds this interesting, let me know!

    The source is at the powerpc-bootloader repository . Contributions welcome, especially from anyone with POWER hardware access.


      Jakub Steiner: GNOME 50 Wallpapers

      news.movim.eu / PlanetGnome • 1 day ago • 1 minute

    GNOME 50 just got released! To celebrate, I thought it’d be fun to look into the background (ding) of the newest additions to the collection.

    While the general aesthetic remains consistent, you might be surprised to see the default shifting from the long-standing triangular theme to hexagons .

    Default Wallpaper in GNOME 50

    Well, maybe not that surprised if you’ve been following the gnome-backgrounds repo closely during the development cycle. We saw a rounded hexagon design surface back in 2024, but it was pulled after being deemed a bit too "flat" despite various lighting and color iterations. We’ve actually seen other hex designs pop up in 2020 and 2022, but they never quite got the ultimate spot as the default. Until now.

    Hexagon iteration 1 Hexagon iteration 2 Hexagon iteration 3 Hexagon iteration 4

    Beyond the geometry shift of the default, the Symbolics wallpaper has also received its latest makeover. Truth be told, it hasn’t historically been a fan favorite. I rarely see it in the wild in "show off your desktop" threads. Let's see if this new incarnation does any better.

    Similarly, the glass chip wallpaper has undergone a bit of a makeover as well. I'll also mention a… let's say less original design that caters to the dark theme folks out there. While every wallpaper in GNOME features a light and dark variant, Tubes comes with a dark and darker variant. I hope to see more of those "where did you get that wallpaper?" threads on Reddit in 2026.


      Emmanuele Bassi: Let’s talk about Moonforge

      news.movim.eu / PlanetGnome • 2 days ago • 4 minutes

    Last week, Igalia finally announced Moonforge , a project we’ve been working on for basically all of 2025. It’s been quite the rollercoaster, and the announcement hit various news outlets, so I guess now is as good a time as any to talk a bit about what Moonforge is, its goal, and its constraints.

    Of course, as soon as somebody announces a new Linux-based OS, folks immediately think it’s a new general-purpose Linux distribution, as that’s the square-shaped hole where everything OS-related ends up. So, first things first, let’s get a couple of things out of the way about Moonforge:

    • Moonforge is not a general purpose Linux distribution
    • Moonforge is not an embedded Linux distribution

    What is Moonforge

    Moonforge is a set of feature-based, well-maintained layers for Yocto that allow you to assemble your own OS for embedded devices or single-application environments, with specific emphasis on immutable, read-only root filesystem OS images that are easy to deploy and update through tight integration with CI/CD pipelines.

    Why?

    Creating a whole new OS image out of whole cloth is not as hard as it used to be; on the desktop (and devices where you control the hardware), you can reasonably get away with using existing Linux distributions, filing off the serial numbers, and removing any extant packaging mechanism; or you can rely on the containerised tech stack , and boot into it.

    When it comes to embedded platforms, on the other hand, you’re still very much working on bespoke, artisanal, locally sourced, organic operating systems. A good number of device manufacturers coalesced their BSPs around the Yocto Project and OpenEmbedded , which simplifies adaptations, but you’re still supposed to build the thing mostly as a one off.

    While Yocto has improved leaps and bounds over the past 15 years, putting together an OS image, especially when it comes to bundling features while keeping the overall size of the base image down, is still an exercise in artisanal knowledge.

    A little detour: Poky

    Twenty years ago, I moved to London to work for this little consultancy called OpenedHand. One of the projects that OpenedHand was working on was taking OpenEmbedded and providing a good set of defaults and layers, in order to create a “reference distribution” that would help people getting started with their own project. That reference was called Poky .

    We had a beaver mascot before it was cool

    poky-beaver.jpeg

    These days, Poky exists as part of the Yocto Project, and it’s still the reference distribution for it, but since it’s part of Yocto, it has to abide by the basic constraint of the project: you still need to set up your OS using shell scripts and by copy-pasting layers and recipes into your own repository. The Yocto project is working on a setup tool to simplify those steps, but there are alternatives…

    Another little detour: Kas

    One alternative is kas , a tool that allows you to generate the local.conf configuration file used by bitbake through various YAML fragments exported by each layer you’re interested in, as well as additional fragments that can be used to set up customised environments.

    Another feature of kas is that it can spin up the build environment inside a container, which enormously reduces setup time. It avoids inadvertently contaminating the build, and it makes it very easy to run the build on CI/CD pipelines that already rely on containers.
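For a flavor of what such a file looks like, here is a minimal kas project file; the repo, branch, and layer names are generic Yocto placeholders, not Moonforge's actual fragments:

```yaml
# kas project file (illustrative; names and branches are placeholders)
header:
  version: 14
machine: qemux86-64
distro: poky
target: core-image-minimal
repos:
  poky:
    url: https://git.yoctoproject.org/poky
    branch: scarthgap
    layers:
      meta:
      meta-poky:
```

kas fetches the listed repositories, composes the layer configuration from files like this, and then drives bitbake against the generated configuration.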

    What Moonforge provides

    Moonforge lets you create a new OS in minutes, selecting a series of features you care about from various available layers .

    Each layer provides a single feature, like:

    • support for a specific architecture or device ( QEMU x86_64, RaspberryPi)
    • containerisation (through Docker or Podman)
    • A/B updates (through RAUC , systemd-sysupdate, and more)
    • graphical session, using Weston
    • a WPE environment

    Every layer comes with its own kas fragment, which describes what the layer needs to add to the project configuration in order to function.

    Since every layer is isolated, we can reason about their dependencies and interactions, and we can combine them into a final, custom product.

    Through various tools, including kas, we can set up a Moonforge project that generates and validates OS images as the result of a CI/CD pipeline on platforms like GitLab, GitHub, and Bitbucket; OS updates are also generated as part of that pipeline, as are comprehensive CVE reports and Software Bills of Materials (SBOM), through custom Yocto recipes.

    More importantly, Moonforge can act both as a reference when it comes to hardware enablement and support for BSPs; and as a reference when building applications that need to interact with specific features coming from a board.

    While this is the beginning of the project, it’s already fairly usable; we are planning a lot more in this space, so keep an eye out on the repository .

    Trying Moonforge out

    If you want to check out Moonforge, I will point you in the direction of its tutorials , as well as the meta-derivative repository, which should give you a good overview on how Moonforge works, and how you can use it.


      Lucas Baudin: Improving Signatures in Papers: Malika's Outreachy Internship

      news.movim.eu / PlanetGnome • 5 days ago • 1 minute

    Last week marked the end of Malika's internship with Papers, working on signatures, which I had the pleasure of mentoring. After a post about the first phase of Outreachy, here is the sequel to the story.

    Nowadays, people expect to be able to fill and sign PDF documents. We previously worked on features to insert text into documents, and signatures were the next thing that needed improvement.

    There is actually some ambiguity when speaking about signatures in PDFs: there are cryptographic signatures that guarantee that a certificate owner approved a document (now denoted by "digital" signatures) and there are also signatures that are just drawings on the document. These latter ones of course do not guarantee any authenticity but are more or less accepted in various situations, depending on the country. Moreover, getting a proper certificate to digitally sign documents may be complicated or costly (with the notable exception of a few countries providing them to their residents such as Spain).

    Papers lacked any support for this second category (which I will call "visual" signatures from now on). On the other hand, digital signing was implemented a few releases ago, but it heavily relies on the Firefox certificate database 1 and in particular there is no way to manage personal certificates within Papers.

    During her three-month internship, Malika implemented a new visual signatures management dialog and the corresponding UI to insert them, including nice details such as image processing to import signature pictures properly. She also contributed to the poppler PDF rendering library to compress signature data.

    Then she looked into digital signatures and improved the insertion dialog, letting users choose visual signatures for them as well. If all goes well, all of this should be merged before Papers 51!

    New signature dialog

    Malika also implemented a prototype that allows users to import certificates and also deal with multiple NSS databases. While this needs more testing and code review 2 , it should significantly simplify digital signing.

    I would like to thank everyone who made this internship possible, and especially everyone who took the time to do calls and advise us during the internship. And of course, thanks to Malika for all the work she put into her internship!

    1

    or on NSS command line tools.

    2

    we don't have enough NSS experts, so help is very welcome.


      This Week in GNOME: #240 Big Reworks

      news.movim.eu / PlanetGnome • 6 days ago • 3 minutes

    Update on what happened across the GNOME project in the week from March 06 to March 13.

    GNOME Core Apps and Libraries

    Files

    Providing a simple and integrated way of managing your files and browsing your file system.

    Peter Eisenmann announces

    For version 50, Files (aka Nautilus) has received many bug fixes, tiny niceties, and big reworks. The most prominent are:

    • Faster thumbnail and icon loading
    • Pop-out property dialogs for free-floating windows
    • Reworked batch rename mechanism and highlights for replaced text
    • Shorter file operation descriptions in sidebar
    • Support for multiple simultaneous file type search filters
    • Case-insensitive pathbar completions
    • Dedicated dialog for grid view captions
    • Reduced memory usage
    • Internal modernizations including usage of Blueprint and glycin
    • Increased test coverage (23% 📈 37%)

    A big thank you to all contributing coders and translators! 🙌

    Document Viewer (Papers)

    View, search or annotate documents in many different formats.

    lbaudin says

    Malika’s Outreachy internship just ended! If all goes well, her work on improving signatures in Papers should land during next cycle. Read more about it here .

    Libadwaita

    Building blocks for modern GNOME apps using GTK4.

    Alice (she/her) 🏳️‍⚧️🏳️‍🌈 says

    I released libadwaita 1.9! Read the accompanying blog post to see what’s new

    Third Party Projects

    Haydn Trowell says

    Typesetter, the minimalist Typst editor, now speaks more languages. With the latest update, you can now use it in Chinese, French, Spanish, Turkish, and German. Thanks to Dawn Chan, Philippe Charlanes, XanderLeaDaren, Roger Weissenbrunner, Sabri Ünal, and Sebastian Kern for their time and effort!

    Get in on Flathub: https://flathub.org/apps/net.trowell.typesetter

    If you want to help bring Typesetter to your language, translations can be contributed via Weblate ( https://translate.codeberg.org/engage/typesetter/ ).

    Anton Isaiev announces

    I am incredibly excited to share the latest news about RustConn, covering the massive journey from version 0.9.4 to 0.9.15! This release cycle focused on making the app’s internal architecture as robust as its features. During this time, we closed dozens of feature requests and fixed numerous critical bugs.

    Here are the most important improvements from the recent updates:

    • Flawless Flatpak Experience: I completely resolved issues with importing Remmina configurations inside the sandbox and fixed specific SSH password prompt display bugs in environments like KDE.
    • Memory-Level Security: I introduced strict zeroing of Bitwarden master passwords in memory immediately after use. Additionally, I completely dropped the external sshpass dependency to enhance overall security.
    • Advanced Connections: The native SPICE client is now enabled by default. For RDP sessions, I added a convenient “Quick Actions” menu (one-click access to Task Manager, PowerShell, etc.), and for VNC, I introduced flexible encoding options.
    • Code & UI Cleanup: I completed a major refactoring of the UI modules (some became 5x lighter!), which eliminated text-clipping issues in dialogs and significantly improved application performance.

    I want to express a huge thank you to everyone who uses RustConn and takes the time to provide feedback! Your positive reviews and comments are the main thing that motivates me to work on the project every single day. At the same time, your bug reports and feature ideas are exactly what make these releases possible. Thank you for being such an amazing community!

    https://github.com/totoshko88/RustConn https://flathub.org/en/apps/io.github.totoshko88.RustConn


    Mikhail Kostin announces

    Vinyl is a new (one more :D) music player, built in Rust with relm4 . The first stable version is already available on Flathub and provides these features:

    • Simple, user-friendly interface inspired by Amberol.
    • Basic media controls.
    • Lyrics (.lrc) support.
    • MPRIS support for controlling Vinyl from other applications.
    • Saves the playlist and the track/position that was playing before the app closed.
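For reference, the .lrc lyrics format mentioned above is a simple text format that pairs timestamps with lyric lines, so a player can highlight each line as the song plays. A minimal sketch (the song and lyrics are made up for illustration):

```
[ti:Example Song]
[ar:Example Artist]
[00:12.00]First line of the lyrics
[00:17.20]Second line, shown at 17.2 seconds
[00:21.50]And so on...
```

Optional tags like `[ti:]` (title) and `[ar:]` (artist) carry metadata; each `[mm:ss.xx]` timestamp marks when the following line should be displayed.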


    Gir.Core

    Gir.Core is a project which aims to provide C# bindings for different GObject based libraries.

    Marcel Tiede reports

    GirCore released new C# bindings in version 0.8.0-preview.1 . It includes new GTK composite template support and adds bindings for GdkWayland-4.0.

    Miscellaneous

    GNOME OS

    The GNOME operating system, development and testing platform

    Valentin David announces

    GNOME OS now has kmscon enabled by default. Kmscon is a KMS/DRM userspace terminal that replaces the Linux virtual terminals (the ones from ctrl-alt-f#). It is a lot more configurable. So next time you try to debug GNOME Shell from a virtual terminal and the font is too small, press “ctrl +”.
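Beyond the runtime "ctrl +" shortcut, kmscon can also be configured persistently through its configuration file. A minimal sketch, assuming a standard kmscon build that reads /etc/kmscon/kmscon.conf (the exact options available may vary by build and version):

```ini
# /etc/kmscon/kmscon.conf — hypothetical example values
# Use a larger, readable font for debugging sessions
font-name=Monospace
font-size=14

# Keyboard layout for the terminal
xkb-layout=us
```

Options generally mirror kmscon's command-line flags, so `kmscon --help` on a GNOME OS install is a good way to see what your build supports.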

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!