
      Aryan Kaushik: Open Forms is now 0.4.0 - and the GUI Builder is here

      news.movim.eu / PlanetGnome • 3 months ago • 3 minutes

    Open Forms is now 0.4.0 - and the GUI Builder is here

    A quick recap for the newcomers

    Ever been to a conference where you set up a booth or tried to collect quick feedback and experienced the joy of:

    • Captive portal logout
    • Timeouts
    • Flaky Wi-Fi drivers on Linux devices
    • Poor bandwidth or dead zones

    Meme showcasing wifi fails when using forms

    This is exactly what happened while setting up a booth at GUADEC. The Wi-Fi on the Linux tablet worked, we logged into the captive portal, the chip failed, Wi-Fi gone. Restart. Repeat.

    Meme showing a person giving their child a book on 'Wifi drivers on linux' as something to cry about

    We eventually worked around it with a phone hotspot, but that locked the phone to the booth. A one-off inconvenience? Maybe. But at any conference, summit, or community event, at least one of these happens reliably.

    So I looked for a native, offline form collection tool. Nothing existed without a web dependency. So I built one.

    Open Forms is a native GNOME app that collects form inputs locally, stores responses in CSV, works completely offline, and never touches an external service. Your data stays on your device. Full stop.

    Open Forms pages

    What's new in 0.4.0 - the GUI Form Builder

    The original version shipped with one acknowledged limitation: you had to write JSON configs by hand to define your forms.

    Now, I know what you're thinking. "Writing JSON to set up a form? That's totally normal and not at all a terrible first impression for non-technical users." And you'd be completely wrong. To me it was normal, until my sister put it this way: "Who even thought JSON for such a basic thing is a good idea? Who'd even write one?" She was right. I knew it, so fixing this was always on the roadmap, and 0.4.0 finally does.

    Open Forms now ships a full visual form builder.

    Design a form entirely from the UI - add fields, set labels, reorder things, tweak options, and hit Save. That's it. The builder writes a standard JSON config to disk, same schema as always, so nothing downstream changes.

    It also works as an editor. Open an existing config, click Edit, and the whole form loads up ready to tweak. Save goes back to the original file. No more JSON editing required.
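Since the builder and hand edits round-trip through the same file, the config is just JSON that any tool can read and write. A minimal sketch (the field names below are hypothetical illustrations, not the actual Open Forms schema):

```python
import json

# Hypothetical form config -- field names are illustrative,
# not the real Open Forms schema.
form = {
    "title": "Booth feedback",
    "fields": [
        {"label": "Name", "type": "text"},
        {"label": "Rating", "type": "choice", "options": ["1", "2", "3", "4", "5"]},
    ],
}

# The GUI builder writes a file like this; hand-editing stays possible.
with open("feedback-form.json", "w") as f:
    json.dump(form, f, indent=2)

# Reading it back yields the same structure, so builder edits and
# hand edits round-trip cleanly.
with open("feedback-form.json") as f:
    loaded = json.load(f)

assert loaded == form
print(loaded["fields"][0]["label"])  # → Name
```

Because the builder emits the same schema it reads, nothing downstream (CSV collection, existing configs) needs to change.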

    Open forms builder page

    Libadwaita is genuinely great

    The builder needed to work well on both a regular desktop and a Linux phone without me maintaining two separate layouts or sprinkling breakpoints everywhere. Libadwaita just... handles that.

    The result is that Open Forms feels native on GNOME and equally at home on a Linux phone, and I genuinely didn't have to think hard about either. That's the kind of toolkit win that's hard to overstate when you're building something solo over weekends.


    The JSON schema is unchanged

    If you already have configs, they work exactly as before. The builder is purely additive: it reads and writes the same format. If you like editing JSON directly, nothing stops you. I'm not going to judge, but my sister might.

    Thanks also to Felipe and everyone else who shared great ideas about improving maintainability. JSON might become technical debt in the future, and I appreciate the insights. Let's see how it goes.

    Install

    Snap Store

    snap install open-forms
    

    Flatpak / Build from source

    See the GitHub repository for build instructions. There is also a Flatpak release available.

    What's next

    • A11y improvements
    • Maybe, just maybe, an optional sync feature
    • Hosting on Flathub - if you've been through that process and have advice, please reach out

    Open Forms is still a small, focused project doing one thing. If you've ever dealt with Wi-Fi pain while collecting data at an event, give it a try. Bug reports, feature requests, and feedback are all very welcome.

    And if you find it useful - a star on GitHub goes a long way for a solo project. 🙂

    Open Forms on GitHub


      This Week in GNOME: #243 Delayed Trains

      news.movim.eu / PlanetGnome • 1 day ago • 8 minutes

    Update on what happened across the GNOME project in the week from March 27 to April 03.

    GNOME Core Apps and Libraries

    Maps

    Maps gives you quick access to maps all across the world.

    mlundblad says

    Now Maps shows delays for public transit journeys (when there’s a realtime GTFS-RT feed available for the affected journey)

    Public transit delays in Maps (screenshot)

    Glycin

    Sandboxed and extendable image loading and editing.

    Sophie (she/her) reports

    After four weeks of work, glycin now supports compiled-in loaders. The main benefit of this is that glycin should now work on other operating systems like FreeBSD, Windows, or macOS.

    Glycin uses Linux-exclusive technologies to sandbox image operations: the image processing happens in a separate, isolated process. It would be very hard, if not impossible, to replicate this technology on other operating systems. Therefore, glycin now supports building loaders into it directly. This still provides a huge benefit compared to traditional image loaders, since almost all of the code is written in safe Rust. The feature of ‘builtin’ loaders can, in theory, be combined with ‘external’ loaders.

    The glycin crate now builds with external loaders for Linux, and automatically uses builtin loaders for all other operating systems. That means that libglycin should work on other operating systems as well. So far, the CI only contains builds cross compiled for x86_64-pc-windows-gnu and tested on Wine. Further testing, feedback, and fixes are very welcome.

    Image loaders written in C/C++, like those for HEIF, AVIF, SVG, and JPEG XL, are currently not supported for use without the sandbox. However, AVIF and JPEG XL already have Rust-based implementations, and rsvg might move away from libxml2, potentially allowing safe builtin loaders for these formats in the future.

    If you want, you can support my work financially on various platforms.

    GNOME Circle Apps and Libraries

    Pika Backup

    Keep your data safe.

    Sophie (she/her) reports

    On March 31, we observed Trans Day of Visibility 🏳️‍⚧️, World Backup Day, and, fittingly, the release of Pika Backup 0.8. After two years of work, this release not only brings many small improvements, but also a rework of the code base that dates back to 2018. This will greatly help keep Pika Backup stable and maintainable for another eight years.

    You can support the development on Open Collective or support my work via various other platforms.

    Big thanks to everyone who makes Pika Backup possible, especially BorgBackup, our donors, and translators.

    Pika Backup 0.8 (screenshot)

    Third Party Projects

    Antonio Zugaldia announces

    Speed of Sound, voice typing for the Linux desktop, is now available on Flathub!

    Main features:

    • Offline, on-device transcription using Whisper. No data leaves your machine.
    • Multiple activation options: click the in-app button or use a global keyboard shortcut.
    • Types the result directly into any focused application using Portals for wide desktop support (X11, Wayland).
    • Multi-language support with switchable primary and secondary languages on the fly.
    • Works out of the box with the built-in Whisper Tiny model. Download additional models from within the app to improve accuracy.
    • Optional text polishing with LLMs, with support for a custom context and vocabulary.
    • Supports self-hosted services like vLLM, Ollama, and llama.cpp (cloud services supported but not required).
    • Built with the fantastic Java GI bindings; come hang out on #java-gi:matrix.org.

    Get it from https://flathub.org/en/apps/io.speedofsound.SpeedOfSound . Learn more at https://www.speedofsound.io .

    Speed of Sound demo (screenshot)

    Ronnie Nissan reports

    This week I released Embellish v1.0.0. This is a major rewrite of the app from GJS to Vala, using all the experience I gained making GTK apps over the past few years. The app now uses view models for the fonts ListBox and a GridView for the icons. Not only is the app more performant now, the code is much nicer and easier to maintain and hack on.

    You can get Embellish from Flatpak

    Or you can contribute to its development and translation on GitHub

    Embellish v1.0.0 icons page (screenshot)

    Embellish v1.0.0 preview dialog (screenshot)

    Daniel Wood reports

    Design, 2D computer aided design (CAD) for GNOME sees a new release, highlights include:

    • Polyline Trim (TR)
    • Polyline Extend (EX)
    • Chamfer Command (CHA)
    • Fillet Command (F)
    • Inferred direction for Arc Command (A)
    • Diameter input for Circle Command (C)
    • Close option for Line Command (L)
    • Close and Undo options for Polyline Command (PL)
    • Multiple copies with Copy Command (CO)
    • Show angle in Distance Command (DI)
    • Performance improvements when panning
    • Nested elements with Hatch Command (H)
    • Consistent Toast message format
    • Plus many fixes!

    Design is available from Flathub:

    https://flathub.org/apps/details/io.github.dubstar_04.design

    Design drawing (screenshot)

    Cleo Menezes Jr. reports

    Serigy has reached version 2, evolving into a focused, minimal clipboard manager. The release brings substantial improvements across functionality, performance, and user experience.

    The new version introduces automatic expiration of old clipboard items, incognito mode for privacy, and a grid view that brings clarity to your slots. Advanced features are now accessible through context menus and tooltips, while global shortcuts let you summon Serigy instantly from anywhere.

    Several bug fixes have improved stability and reliability, and the UI is significantly more responsive. The application now persists window size across sessions, and Wayland clipboard detection has been improved.

    Serigy 2 also refines its design. The app now supports Brazilian Portuguese, Russian, and Spanish (Chile).

    Get it on Flathub. Follow the development.

    Serigy 2 (screenshot)

    Wildcard

    Test your regular expressions.

    says

    Wildcard 0.3.5 released, bringing matching of regex groups, a sidebar that now shows overall match and group information, and a new quick reference dialog showing common regular expression use cases! You can download the latest release from Flathub!

    Wildcard 0.3.5 (screenshot)

    Files

    Providing a simple and integrated way of managing your files and browsing your file system.

    Romain says

    I have created a Nautilus extension that adds an “Open in” context menu for installed IDEs, allowing you to easily open directories and files in them.

    It works with any IDE marked as such in their desktop entry. That includes IDEs in development containers such as Toolbx if you create a desktop file for it on the host system.

    To install, download the latest version and follow the instructions in the README file.

    Metadata Cleaner

    View and clean metadata in files.

    GeopJr 🏳️‍⚧️🏳️‍🌈 says

    Metadata Cleaner is back, now with more adaptive layouts, bug fixes and features!

    Grab the latest release from Flathub!

    Metadata Cleaner (screenshot)

    Fractal

    Matrix messaging app for GNOME written in Rust.

    Kévin Commaille reports

    Things have been fairly quiet since the Jason AI takeover, but here comes Fractal 14.beta.

    • Sending files & location is properly disabled while editing/replying, as it doesn’t work anyway.
    • Call rooms are identified with a camera icon in the sidebar and show a banner to warn that other users might not read messages in these rooms.
    • While we still support signing in via SSO, we have dropped support for identity providers, to simplify our code and have an experience closer to signing in with OAuth 2.0.
    • Map markers now use a darker variant of the accent color to have a better contrast with the map underneath.
    • Many small behind-the-scenes changes, mostly through dependency updates; we have also removed a few dependencies. Small improvements to the technical docs as well.

    As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

    It is available to install via Flathub Beta; see the instructions in our README.

    As the version implies, there might be a slight risk of regressions, but it should be mostly stable. If all goes well the next step is the release candidate!

    We are very excited to see several new contributors opening MRs lately to take care of their pet peeves with Fractal, which will benefit everyone in the end. If you have a little bit of time on your hands, you can try to join them by fixing one of our newcomer issues.

    Flood It

    Flood the board

    tfuxu reports

    Flood It 2.0 has been released! It now comes with a simple game explanation dialog, the ability to replay a recently played board, full translation support, and Ctrl+1…6 keyboard shortcuts for the color buttons.

    It also contains many under-the-hood improvements, like the transition from Gotk4 to Puregotk, a runtime update to GNOME 50, custom seed support, and much more.

    Check it out on Flathub!

    Flood It 2.0 game rules (screenshot)

    Flood It 2.0 play again (screenshot)

    Bouncer

    Bouncer is an application to help you choose the correct firewall zone for wireless connections.

    justinrdonnelly announces

    Bouncer 50 was released this week, using the GNOME 50 runtime. It has bug fixes related to NetworkManager restarts and autostart status. It also includes translations for Italian and Polish. Check it out on Flathub!

    GNOME Websites

    Guillaume Bernard says

    GNOME Damned Lies has seen a few UX improvements! For the release of GNOME 50, I added a specific tag for Damned Lies to track the changes and link them to existing GNOME cycles. You can see all the changes I already spoke about, like merge request support, background refresh of statistics, etc. (see: https://gitlab.gnome.org/Infrastructure/damned-lies/-/releases/gnome_50 ).

    After that, I am working towards GNOME 50.1. Why follow the GNOME release calendar? Because it provides pace, and pace is important while developing. After more than 3 years of fixing technical debt, refactoring, and upgrading existing code, you can see the pace of changes has increased a lot. At the beginning of GNOME 50, we had more than 120 open issues in the Damned Lies tracker; it’s down to 76 at the time of writing these lines.

    So what’s new this week? I worked a lot on long-standing UX-related issues and can proudly announce a few changes in Damned Lies:

    • Better consistency in many strings (‘Release’ vs ‘Release Set’, past tense in action history that previously used the infinitive form).
    • Administrators now see the modules maintained in the site backend.
    • String freeze notifications now expose the affected versions and are far more stable when detecting string freeze breaks.
    • You now have anchors in your team pages for each language your team is working on.
    • i18n coordinators are now identified by a mini badge in their profile, helping any user to reach them more easily.
    • i18n coordinators can take action in any workflow without being a member of the team they act on.
    • Users can remove their own accounts.

    That’s all for this week! 😃

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Michael Meeks: TDF ejects its core developers

      news.movim.eu / PlanetGnome • 1 day ago • 6 minutes

    For a rather more polished write-up, complete with pretty pictures, please see TDF ejects its core developers. Here is a more personal take. My feeling is that this action has been planned by the TDF rump board's majority for many months, if not years. While we have tried to avoid this outcome, it has eventually been forced on us.

    Trends in TDF board membership

    There are many great ways to contribute to FLOSS projects, and coding is only one of them - let me underline that. However, coding is the primary production that drives many of the other valuable contributions: translation, marketing, etc. We have been blessed to have many excellent developers around LibreOffice, but coders' representation on the board has been declining. This means losing a valuable part of the board's perspective on the complexity of the problem. The elected board is (typically) ten people - seven full board members and three deputies as spares if needed. Here is how that looks:

    Composition of TDF boards over time

    Another way of looking at board composition is to look at board members' affiliations over time. Of course affiliations change - sometimes during a board term, but this is the same graph broken down by (rough) affiliation:

    Affiliation of TDF boards over time
    What is easy to see is the huge drop in corporate affiliation - along with all the business experience that brings. The added '2026' is for the current 'rump' board which continues to over-stay its term and has also lost several of its most popular developer members - including Eike - recently of RedHat (because of a person tragedy) and Bjoern. In 'Interested' I include those with a business interest in LibreOffice who are not part of a larger corporate, one: Laszlo is the last coder on the board.

    One of the major surprises of the 2024 election is the 'TDF' chunk, in which I bucket paid TDF staff and those closely related to them. The current chair of the TDF board (Eliane), who manages the Executive Director (ED), is curiously related to a staff member who is managed by the ED - arguably an extremely poor governance practice. Having three TDF-affiliated directors is also in contradiction of the statutes.

    It is also worth noting that for over two years, no Collaboran or any of our partners have been on the TDF board. It was hoped that this would give ample time and space to address any of the issues left from previous boards.

    Meritocracy - do-ers decide

    TDF is defined as a meritocracy in its statutes. Why is that? The experience we had from the OpenOffice project was that those doing the work were often excluded from decision making. That made it hard to get teams to scale, to make quick decisions, and to let leaders grow in their areas; meritocracy also gives an incentive to contribute more, among many other benefits.

    Some claim that the sole manifestation of the statute's requirement for meritocracy is a flat entry / membership criteria (as every other organization has). This seems to me to be near to the root of the problems here. Those used to functioning FLOSS projects find it hard to understand why you wouldn't at least listen to those who are working hardest to improve things in whatever area. These days some at TDF seem to emphasize equality instead.

    It is interesting then to see the appointed Membership Committee ejecting people who have contributed so very much over so many years, without any thanks or apology. We built a quick tool to count those contributions. This excludes the long-departed Sun Microsystems release engineers who committed many other people's patches for them, and it struggles to 'see' into the CVS era where branches were flattened; but, as far as git and gitdm-alias files can easily tell, this is the picture of the top committers to the largest 'core' code repository over all time.

    Name                     Commits   Last commit   Affiliation
    Caolán McNamara           37,556                 Collabora
    Stephan Bergmann          21,732                 Collabora
    Noel Grandin              20,851                 Collabora
    Miklos Vajna              10,466                 Collabora
    Tor Lillqvist              9,233                 Collabora
    Michael Stahl              8,742                 Collabora
    Kohei Yoshida              5,655                 Collabora
    Eike Rathke                5,398                 Volunteer/RedHat
    Markus Mohrhard            5,230                 Volunteer
    Frank Schönheit            5,025   2011          Sun/Oracle
    Michael Weghorn            4,956                 TDF
    Mike Kaganski              4,864                 Collabora
    Andrea Gelmini             4,582                 Volunteer
    Xisco Fauli                4,215                 TDF
    Julien Nabet               4,031                 Volunteer
    Tomaž Vajngerl             3,797                 Collabora
    David Tardon               3,648   2021          RedHat
    Luboš Luňák                3,201                 Collabora
    Hans-Joachim Lankenau      3,007   2011          Sun/Oracle
    Ocke Janssen               2,852   2011          Sun/Oracle
    Oliver Specht              2,699                 Sun/Oracle
    Jan Holesovsky             2,689                 Collabora
    Mathias Bauer              2,580   2011          Sun/Oracle
    Olivier Hallot             2,561                 TDF
    Michael Meeks              2,553                 Collabora
    Bjoern Michaelsen          2,503                 Volunteer/Canonical
    Norbert Thiebaud           2,176   2017          Volunteer
    Thomas Arnhold             2,176   2014          Volunteer
    Andras Timar               2,099                 Collabora
    Philipp Lohmann            2,096   2011          Sun/Oracle
    It is a humbling privilege for me to serve in such a dedicated team of people who have contributed so much. Take just one example - Caolán has worked from StarDivision to Sun to Oracle to RedHat to Collabora: 37,000 commits in ~25 years, ~four per day sustained, every day. Skimming his commits you can quickly see that this is far more than a job - over 6,000 of those were at the weekend, and of course commits don't show the reviews, mentoring, love, care, and more. That is just one contributor - but the passion scales across the rest of the team.

    Why remove individuals ?

    While writing this, a response from TDF showed up. While there are things to welcome in it, this speculative claim about individual contributors seems to be at the core of the matter:

    "people made decisions in the interest of their employers rather than in the interest of The Document Foundation."

    Really!? The primary privilege that members of TDF have is voting for their representatives in elections, and this right is earned only by contribution. Elections are secret ballots. So it seems the most plausible reason for disenfranchising so many is an unhealthy fear of the electorate. Is it possible that the board majority want to avoid accountability for their actions at the next election (which is already delayed without adequate explanation), like this:

    The amazing elastic election timeline

    I have no idea how our staff voted in past elections - but I have to assume they did this with integrity and for the best for TDF as they saw it at the time. It seems that a more plausible reason to remove such long term contributors is electoral gerrymandering.

    Some thank yous

    After 15+ years of service to LibreOffice, it is unfortunate to be ejected. It is possible to imagine a counter-factual world where this might actually be necessary. But even then, to do so with no thank-you or apology is unconscionable. It is great to see the team making up for that by publicly thanking their colleagues as they are kicked out. I found it deeply encouraging to remember and celebrate all the fantastic work that has been contributed; let me add my own big thank you to everyone!

    Where we are now

    Well, much more can be said; perhaps I'll update this later with more details as they emerge. For now we're re-focusing on making Collabora Office great, getting our gerrit and CI humming smoothly, and starting to dung-out bits of the code-base we are not using. If you're interested in getting involved, have a wave in #cool-dev:matrix.org and join in - we welcome anyone. Thanks for reading and trying to understand this tangled topic!


      GNOME Shell and Mutter Development: What is new in GNOME Kiosk 50

      news.movim.eu / PlanetGnome • 3 days ago • 3 minutes

    GNOME Kiosk, the lightweight, specialized compositor, continues to evolve in GNOME 50, adding new configuration options and improving accessibility.

    Window configuration

    User configuration file monitoring

    The user configuration file gets reloaded when it changes on disk, so that it is not necessary to restart the session.

    New placement options

    New configuration options to constrain windows to monitors or regions on screen have been added:

    • lock-on-monitor : lock a window to a monitor.
    • lock-on-monitor-area : lock to an area relative to a monitor.
    • lock-on-area : lock to an absolute area.

    These options are intended to replicate the legacy “Zaphod” mode from X11, where windows could be tied to a specific monitor. They even go further, allowing windows to be locked to a specific area on screen.

    The window/monitor association also holds when a monitor is disconnected. Take, for example, a multi-monitor setup where each monitor shows a different timetable. If one of the monitors is disconnected (for whatever reason), the timetable shown on that monitor should not be moved to a remaining monitor. The lock-on-monitor option prevents that.

    Initial map behavior was tightened

    Clients can resize or change their state before the window is mapped, so the size, position, and fullscreen state set from the configuration could be skipped. Kiosk now makes sure to apply the configured size, position, and fullscreen state on first map whenever the initial configuration was not applied reliably.

    Auto-fullscreen heuristics were adjusted

    • Only normal windows are considered when checking whether another window already covers the monitor (avoids false positives from e.g. xwaylandvideobridge).
    • The current window is excluded when scanning “other” fullscreen-sized windows (fixes Firefox restoring monitor-sized geometry).
    • Maximized or fullscreen windows are no longer treated as non-resizable, so toggling fullscreen still works when the client had already maximized.

    Compositor behavior and command-line options

    New command line options have been added:

    • --no-cursor : hides the pointer.
    • --force-animations : forces animations to be enabled.
    • --enable-vt-switch : restores VT switching with the keyboard.

    The --no-cursor option can be used to hide the pointer cursor entirely for setups where user input does not involve a pointing device (it is similar to the -nocursor option in Xorg).

    Animations can now be disabled using the desktop settings, and will also be automatically disabled when the backend reports no hardware-accelerated rendering, for performance reasons. The --force-animations option can be used to forcibly enable animations in that case, similar to GNOME Shell.

    The native keybindings, which include the VT switching keyboard shortcuts, are now disabled by default for kiosk hardening. Applications that rely on the user being able to switch to another console VT on Linux, such as Anaconda, will need to explicitly re-enable VT switching using --enable-vt-switch in their session.

    These options need to be passed on the command line when starting gnome-kiosk, which implies updating the systemd unit files or, better, creating custom ones (taking the units provided with the GNOME Kiosk sessions as examples).
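As a sketch of that approach (the unit name and path below are hypothetical; use the actual unit your kiosk session starts), a systemd drop-in can override the command line without editing the shipped unit:

```ini
# Hypothetical drop-in, e.g. ~/.config/systemd/user/gnome-kiosk.service.d/options.conf
# The unit name is illustrative; check what your GNOME Kiosk session provides.
[Service]
# Clear the original ExecStart, then set it again with the extra options.
ExecStart=
ExecStart=/usr/bin/gnome-kiosk --no-cursor --force-animations
```

Clearing ExecStart= first is required by systemd before redefining it in a drop-in.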

    Accessibility

    Accessibility panel

    An example of an accessibility panel is now included, to control the platform accessibility settings with a GUI. It is a simple Python application using GTK4.

    (The gsettings options are also documented in the CONFIG.md file.)

    Screen magnifier

    Desktop magnification is now implemented, using the same settings as the rest of the GNOME desktop (namely screen-magnifier-enabled , mag-factor , see the CONFIG.md file for details).

    It can be enabled from the accessibility panel or from the keyboard shortcuts through the gnome-settings-daemon’s “mediakeys” plugin.

    Accessibility settings

    The default systemd session units now start the gnome-settings-daemon accessibility plugin so that Orca (the screen reader) can be enabled through the dedicated keyboard shortcut.

    Notifications

    • A new, optional notification daemon implements org.freedesktop.Notifications and org.gtk.Notifications using GTK 4 and libadwaita.
    • A small utility to send notifications via org.gtk.Notifications is also provided.

    Input sources

    GNOME Kiosk was ported to Mutter’s new keymap API, which allows remote desktop servers to mirror the keyboard layout used on the client side.

    Session files and systemd

    • X-GDM-SessionRegister is now set to false in kiosk sessions as GNOME Kiosk does not register the session itself (unlike GNOME Shell). That fixes a hang when terminating the session.
    • Script session: systemd is no longer instructed to restart the session when the script exits, so that users can log out of the script session when the script terminates.

      Matthew Garrett: Self hosting as much of my online presence as practical

      news.movim.eu / PlanetGnome • 3 days ago • 4 minutes

    Because I am bad at giving up on things, I’ve been running my own email server for over 20 years. Some of that time it’s been a PC at the end of a DSL line, some of that time it’s been a Mac Mini in a data centre, and some of that time it’s been a hosted VM. Last year I decided to bring it in house, and since then I’ve been gradually consolidating as much of the rest of my online presence as possible on it. I mentioned this on Mastodon and a couple of people asked for more details, so here we are.

    First: my ISP doesn’t guarantee a static IPv4 unless I’m on a business plan and that seems like it’d cost a bunch more, so I’m doing what I described here : running a Wireguard link between a box that sits in a cupboard in my living room and the smallest OVH instance I can, with an additional IP address allocated to the VM and NATted over the VPN link. The practical outcome of this is that my home IP address is irrelevant and can change as much as it wants - my DNS points at the OVH IP, and traffic to that all ends up hitting my server.
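As a sketch of the VPS side of that arrangement (all addresses, keys, and the extra public IP here are hypothetical placeholders, not my real setup), wg-quick can install the NAT rule when the tunnel comes up:

```ini
# /etc/wireguard/wg0.conf on the VPS -- 203.0.113.10 stands in for the
# additional public IP; 10.0.0.0/24 is the tunnel network. All placeholders.
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
# DNAT traffic hitting the extra public IP to the home box over the tunnel.
# Requires net.ipv4.ip_forward=1; return-path routing on the home box not shown.
PostUp = iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.0.2

[Peer]
# The box in the living-room cupboard
PublicKey = <home-box-public-key>
AllowedIPs = 10.0.0.2/32
```

With DNS pointing at 203.0.113.10, the home IP can change freely; only the tunnel endpoint needs to re-establish.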

    The server itself is pretty uninteresting. It’s a refurbished HP EliteDesk which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I found under a pile of laptops in my office. We’re not talking rackmount Xeon levels of performance, but it’s entirely adequate for everything I’m doing here.

    So. Let’s talk about the services I’m hosting.

    Web

    This one’s trivial. I’m not really hosting much of a website right now, but what there is is served via Apache with a Let’s Encrypt certificate. Nothing interesting at all here, other than the proxying that’s going to be relevant later.

    Email

    Inbound email is easy enough. I’m running Postfix with a pretty stock configuration, and my MX records point at me. The same Let’s Encrypt certificate is there for TLS delivery. I’m using Dovecot as an IMAP server (again with the same cert). You can find plenty of guides on setting this up.
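A sketch of the TLS-relevant part of such a stock Postfix setup (the hostname and certificate paths are illustrative, not mine), reusing the same Let's Encrypt certificate:

```ini
# /etc/postfix/main.cf excerpt -- hostname and cert paths are illustrative
myhostname = mail.example.com
mydestination = $myhostname, example.com, localhost

# Reuse the Let's Encrypt certificate the web server already serves
smtpd_tls_cert_file = /etc/letsencrypt/live/example.com/fullchain.pem
smtpd_tls_key_file  = /etc/letsencrypt/live/example.com/privkey.pem
smtpd_tls_security_level = may   # offer STARTTLS to inbound senders
smtp_tls_security_level = may    # use TLS when relaying outbound
```

Dovecot can point at the same certificate files, so one renewal covers web, SMTP, and IMAP.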

    Outbound email? That’s harder. I’m on a residential IP address, so if I send email directly nobody’s going to deliver it. Going via my OVH address isn’t going to be much better. I have a Google Workspace, so in the end I just made use of Google’s SMTP relay service. There are various commercial alternatives available; I just chose this one because it didn’t cost me anything more than I’m already paying.

    Blog

    My blog is largely static content generated by Hugo . Comments are Remark42 running in a Docker container. If you don’t want to handle even that level of dynamic content you can use a third party comment provider like Disqus .

    Mastodon

    I’m deploying Mastodon pretty much along the lines of the upstream compose file. Apache is proxying /api/v1/streaming to the websocket provided by the streaming container and / to the actual Mastodon service. The only thing I tripped over for a while was the need to set the “X-Forwarded-Proto” header: without it you get stuck in a redirect loop, with Mastodon receiving a request over http (because TLS termination is done by the Apache proxy) and redirecting to https, except that’s where we just came from.
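A sketch of that proxy configuration (the hostname is a placeholder, and the ports assume the upstream compose defaults of 3000 for the web service and 4000 for streaming):

```apacheconf
# Hypothetical vhost excerpt -- needs mod_proxy, mod_proxy_http,
# mod_proxy_wstunnel, and mod_headers enabled.
<VirtualHost *:443>
    ServerName social.example.com

    # TLS terminates here, so tell Mastodon the client used HTTPS;
    # without this header you get the http -> https redirect loop.
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPreserveHost On

    # Websocket streaming endpoint, then everything else to the web service.
    ProxyPass /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```

The more specific ProxyPass for the streaming path must come before the catch-all, since Apache matches them in order.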

    Mastodon is easily the heaviest part of all of this, using around 5GB of RAM and 60GB of disk for an instance with 3 users. This is more a point of principle than an especially good idea.

    Bluesky

    I’m arguably cheating here. Bluesky’s federation model is quite different from Mastodon’s: while running a Mastodon service implies running the webview and other infrastructure associated with it, Bluesky has split that into multiple parts. User data is stored on Personal Data Servers, then aggregated from those by Relays, and then displayed on Appviews. Third parties can run any of these, but a user’s actual posts are stored on a PDS. There are various reasons to run the others, for instance to implement alternative moderation policies, but if all you want is to ensure that you have control over your data, running a PDS is sufficient. I followed these instructions, other than using Apache as the frontend proxy rather than nginx, and it’s all been working fine since then. In terms of ensuring that my data remains under my control, it’s sufficient.

    Backups

    I’m using borgmatic, backing up to a local Synology NAS and also to my parents’ home (where I have another HP EliteDesk set up with an equivalent OVH IPv4 fronting setup). At some point I’ll check that I’m actually able to restore them.

    Conclusion

    Most of what I post is now stored on a system that’s happily living under a TV, but is available to the rest of the world just as visibly as if I used a hosted provider. Is this necessary? No. Does it improve my life? In no practical way. Does it generate additional complexity? Absolutely. Should you do it? Oh good heavens no. But you can, and once it’s working it largely just keeps working, and there’s a certain sense of comfort in knowing that my online presence is carefully contained in a small box making a gentle whirring noise.


      Andy Wingo: wastrelly wabbits

      news.movim.eu / PlanetGnome • 3 days ago • 11 minutes

    Good day! Today (tonight), some notes on the last couple months of Wastrel, my ahead-of-time WebAssembly compiler.

    Back in the beginning of February, I showed Wastrel running programs that use garbage collection, using an embedded copy of the Whippet collector, specialized to the types present in the Wasm program. But the two synthetic GC-using programs I tested on were just ported microbenchmarks, and didn’t reflect the output of any real toolchain.

    In this cycle I worked on compiling the output from the Hoot Scheme-to-Wasm compiler. There were some interesting challenges!

    bignums

    When I originally wrote the Hoot compiler, it targeted the browser, which already has a bignum implementation in the form of BigInt, which I worked on back in the day. Hoot-generated Wasm files use host bigints via externref (though wrapped in structs to allow for hashing and identity).

    In Wastrel, then, I implemented the imports that implement bignum operations: addition, multiplication, and so on. I did so using mini-gmp, a stripped-down implementation of the workhorse GNU multi-precision library. At some point, if bignums become important, this gives me the option to link to the full GMP instead.

    Bignums were the first managed data type in Wastrel that wasn’t defined as part of the Wasm module itself, instead hiding behind externref, so I had to add a facility to allocate type codes to these “host” data types. More types will come in time: weak maps, ephemerons, and so on.

    I think bignums would be a great proposal for the Wasm standard, similar to stringref ideally (sniff!), possibly in an attenuated form.

    exception handling

    Hoot used to emit a pre-standardization form of exception handling, and hadn’t gotten around to updating to the newer version that was standardized last July. I updated Hoot to emit the newer kind of exceptions, as it was easier to implement them in Wastrel that way.

    Some of the problems Chris Fallin contended with in Wasmtime don’t apply in the Wastrel case: since the set of instances is known at compile-time, we can statically allocate type codes for exception tags. Also, I didn’t really have to do the back-end: I can just use setjmp and longjmp.

    This whole paragraph was meant to be a bit of an aside in which I briefly mentioned why just using setjmp was fine. Indeed, because Wastrel never re-uses a temporary, relying entirely on GCC to “re-use” the register / stack slot on our behalf, I had thought that I didn’t need to worry about the “volatile problem”. From the C99 specification:

    [...] values of objects of automatic storage duration that are local to the function containing the invocation of the corresponding setjmp macro that do not have volatile-qualified type and have been changed between the setjmp invocation and longjmp call are indeterminate.

    My thought was, though I might set a value between setjmp and longjmp, that would only be the case for values whose lifetime did not reach the longjmp (i.e., whose last possible use was before the jump). Wastrel didn’t introduce any such cases, so I was good.

    However, I forgot about local.set: mutations of locals (ahem, objects of automatic storage duration) in the source Wasm file could run afoul of this rule. So, because of writing this blog post, I went back and did an analysis pass on each function to determine the set of locals which are mutated inside a try_block. Thank you, rubber duck readers!
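    As an illustration of the rule (not Wastrel's actual output), here is a minimal C sketch: a local that is mutated between setjmp and longjmp must be volatile-qualified for its value to be well-defined after the jump.

    ```c
    #include <setjmp.h>

    static jmp_buf env;

    static void do_throw(void) { longjmp(env, 1); }

    static int run(void) {
        // Without the volatile qualifier, C99 says x's value after the
        // longjmp is indeterminate, because it changed after setjmp.
        volatile int x = 0;
        if (setjmp(env) == 0) {
            x = 42;      // mutation between setjmp and longjmp
            do_throw();  // control returns to setjmp, which yields 1
        }
        return x;        // well-defined (42) only because x is volatile
    }
    ```

    Drop the volatile and a compiler is free to keep x in a register that the longjmp clobbers, which is exactly the bug the analysis pass guards against.
    
    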

    bugs

    Oh my goodness there were many bugs. Lacunae, if we are being generous; things not implemented quite right, which resulted in errors either when generating C or when compiling the C. The type-preserving translation strategy does seem to have borne fruit, in that I have spent very little time in GDB: once things compile, they work.

    coevolution

    Sometimes Hoot would use a browser facility where it was convenient, but for which in a better world we would just do our own thing. Such was the case for the number->string operation on floating-point numbers: we did something awful but expedient.

    I didn’t have this facility in Wastrel, so instead we moved to do float-to-string conversions in Scheme. This turns out to have been a good test for bignums too; the algorithm we use is a bit dated and relies on bignums to do its thing. The move to Scheme also allows for printing floating-point numbers in other radices.

    There are a few more Hoot patches that were inspired by Wastrel, about which more later; it has been good for both to work on the two at the same time.

    tail calls

    My plan for Wasm’s return_call and friends was to use the new musttail annotation for calls, which has been in clang for a while and was recently added to GCC. I was careful to limit the number of function parameters such that no call should require stack allocation, and therefore a compiler should have no reason to reject any particular tail call.

    However, there were bugs. Funny ones, at first: attributes applying to a preceding label instead of the following call, or the need to insert if (1) before the tail call. More dire ones, in which tail callers inlined into their callees would cause the tail calls to fail, worked around with judicious application of noinline. Thanks to GCC’s Andrew Pinski for help debugging these and other issues; with GCC things are fine now.

    I did have to change the code I emitted to return “top types only”: if you have a function returning type T, you can tail-call a function returning U if U is a subtype of T, but there is no nice way to encode this into the C type system. Instead, we return the top type of T (or U, it’s the same), e.g. anyref, and insert downcasts at call sites to recover the precise types. Not so nice, but it’s what we got.

    Trying tail calls on clang, I ran into a funny restriction: clang not only requires that return types match, but requires that tail caller and tail callee have the same parameters as well. I can see why they did this (it requires no stack shuffling and thus such a tail call is always possible, even with 500 arguments), but it’s not the design point that I need. Fortunately there are discussions about moving to a different constraint.
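    As a sketch of the annotation (an assumed shape, not Wastrel's actual output), a musttail call in C looks like this; the macro guard is mine, so that compilers without the attribute still accept the code:

    ```c
    // GCC 15+ and clang support the musttail statement attribute; older
    // compilers fall back to an ordinary (hopefully optimized) call.
    #if defined(__clang__) || (defined(__GNUC__) && __GNUC__ >= 15)
    #define MUSTTAIL __attribute__((musttail))
    #else
    #define MUSTTAIL
    #endif

    // A self-call with identical parameters, so it satisfies even clang's
    // stricter same-signature requirement described above.
    static long count_up(long n, long acc) {
        if (n == 0) return acc;
        MUSTTAIL return count_up(n - 1, acc + 1);
    }
    ```

    When the attribute is honored, the call is guaranteed to reuse the caller's frame, so arbitrarily deep recursion runs in constant stack.
    
    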

    scale

    I spent way more time than I had planned on improving the speed of Wastrel itself. My initial idea was to just emit one big C file, which would give GCC the maximum opportunity to just go and do its thing: it can see everything, everything is static, there are loads of always_inline helpers that should compile away to single instructions, that sort of thing. But this doesn’t scale, in a few ways.

    In the first obvious way, consider whitequark’s llvm.wasm. This is all of LLVM in one 70 megabyte Wasm file. Wastrel made a huuuuuuge C file, then GCC chugged on it forever; 80 minutes at -O1, and I wasn’t aiming for -O1.

    I realized that in many ways, GCC wasn’t designed to be a compiler target. The shape of code that a Wasm-to-C compiler like Wastrel emits is different from what one would write by hand. I even ran into a segfault compiling with -Wall, because GCC accidentally recursed instead of iterated in the -Winfinite-recursion pass.

    So, I dealt with this in a few ways. After many hours spent pleading and bargaining with different -O options, I bit the bullet and made Wastrel emit multiple C files. It computes a DAG forest of all the functions in a module, where edges are direct calls, and goes through that forest, greedily consuming (and possibly splitting) subtrees until we have “enough” code to split out a partition, as measured by number of Wasm instructions. They say that -flto makes this a fine approach, but one never knows when a translation unit boundary will turn out to be important. I compute needed symbol visibilities as much as I can so as to declare functions that don’t escape their compilation unit as static; who knows if this is of value. Anyway, this partitioning introduced no performance regression in my limited tests so far, and compiles are much much much faster.

    scale, bis

    A brief observation: Wastrel used to emit indented code, because it could, and what does it matter, anyway. However, consider Wasm’s br_table: it takes an array of n labels and an integer operand, and will branch to the nth label, or the last if the operand is out of range. To set up a label in Wasm, you make a block, of which there are a handful of kinds; the label is visible in the block, and for n labels, the br_table will be the most nested expression in the n nested blocks.

    Now consider that block indentation is proportional to n. This means the file size of an indented C file is quadratic in the number of branch targets of the br_table.

    Yes, this actually bit me; there are br_table instances with tens of thousands of targets. No, Wastrel does not indent any more.

    scale, ter

    Right now, the long pole in Wastrel is the compile-to-C phase; the C-to-native phase parallelises very well and is less of an issue. So, one might think: OK, you have partitioned the functions in this Wasm module into a number of files, why not emit the files in parallel?

    I gave this a go. It did not speed up C generation. From my cursory investigations, I think this is because the bottleneck is garbage collection in Wastrel itself; Wastrel is written in Guile, and Guile still uses the Boehm-Demers-Weiser collector, which does not parallelize well for multiple mutators. It’s terrible, but I ripped out the parallelization and things are fine. Someone on Mastodon suggested fork; they’re not wrong, but also not Right either. I’ll just keep this as a nice test case for the Guile-on-Whippet branch I want to poke later this year.

    scale, quater

    Finally, I had another realization: GCC was having trouble compiling the C that Wastrel emitted, because Hoot had emitted bad WebAssembly. Not bad as in “invalid”; rather, “not good”.

    There were two cases in which Hoot emitted ginormous (technical term) functions. One, for an odd debugging feature: Hoot does a CPS transform on its code, and allocates return continuations on a stack. This is a gnarly technique but it gets us delimited continuations and all that goodness even before stack switching has landed, so it’s here for now. It also gives us a reified return stack of funcref values, which lets us print Scheme-level backtraces.

    Or it would, if we could associate data with a funcref. Unfortunately func is not a subtype of eq, so we can’t. Unless... we pass the funcref out to the embedder (e.g. JavaScript), and the embedder checks the funcref for equality (e.g. using ===); then we can map a funcref to an index, and use that index to map to other properties.

    How to pass that funcref/index map to the host? When I initially wrote Hoot, I didn’t want to just, you know, put the funcrefs of interest into a table and let the index of a function’s slot be the value in the key-value mapping; that would be useless memory usage. Instead, we emitted functions that took an integer, and which would return a funcref. Yes, these used br_table, and yes, there could be tens of thousands of cases, depending on what you were compiling.

    Then to map the integer index to, say, a function name, likewise I didn’t want a table; that would force eager allocation of all strings. Instead I emitted a function with a br_table whose branches would return string.const values.

    Except, of course, stringref didn’t become a thing, and so instead we would end up lowering to allocate string constants as globals.

    Except, of course, Wasm’s idea of what a “constant” is is quite restricted, so we have a pass that moves non-constant global initializers to the “start” function. This results in an enormous start function. The straightforward solution was to partition global initializations into separate functions, called by the start function.

    For the funcref debugging, the solution was more intricate: firstly, we represent the funcref-to-index mapping just as a table. It’s fine. Then, for the side table mapping indices to function names and sources, we emit DWARF, and attach a special attribute to each “introspectable” function. In this way, reading the DWARF sequentially, we reconstruct a mapping from index to DWARF entry, and thus to a byte range in the Wasm code section, and thus to source information in the .debug_line section. It sounds gnarly, but Guile already used DWARF as its own debugging representation; switching to emit it in Hoot was not a huge deal, and as we only need to consume the DWARF that we emit, we only needed some 400 lines of JS for the web/node run-time support code.

    This switch to data instead of code removed the last really long pole from the GCC part of Wastrel’s pipeline. What’s more, Wastrel can now implement the code_name and code_source imports for Hoot programs ahead of time: it can parse the DWARF at compile-time, and generate functions that look up functions by address in a sorted array to return their names and source locations. As of today, this works!
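    The address lookup at the end can be sketched as a binary search over a table sorted by start address. The table contents below are made up for illustration; Wastrel's generated code will differ in the details:

    ```c
    #include <stddef.h>

    typedef struct { size_t start; const char *name; } FuncInfo;

    // Hypothetical table, sorted by start address, as might be generated
    // from the DWARF at compile time.
    static const FuncInfo table[] = {
        { 0x100, "init" }, { 0x200, "main_loop" }, { 0x400, "finish" },
    };

    // Binary search for the last entry whose start address is <= addr;
    // returns NULL for addresses before the first function.
    static const char *func_name(size_t addr) {
        size_t lo = 0, hi = sizeof table / sizeof table[0];
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (table[mid].start <= addr) lo = mid + 1; else hi = mid;
        }
        return lo ? table[lo - 1].name : NULL;
    }
    ```
    
    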

    fin

    There are still a few things that Hoot wants from a host that Wastrel has stubbed out: weak refs and so on. I’ll get to this soon; my goal is a proper Scheme REPL. Today’s note is a waypoint on the journey. Until next time, happy hacking!


      Thibault Martin: TIL that Sveltia is a good CMS for Astro

      news.movim.eu / PlanetGnome • 4 days ago • 3 minutes

    This website is built with the static site generator Astro. All my content is written in markdown and uploaded to a git repository. Once the content is merged into the main branch, Cloudflare deploys it publicly. The process to publish involves:

    1. Creating a new markdown file.
    2. Filling it with thoughts.
    3. Pushing it to a new branch.
    4. Waiting for CI to check my content respects some rules.
    5. Pressing the merge button.

    This is pretty involved and of course requires access to a computer. This goes directly against the goal I’ve set for myself to reduce friction to publish.

    I wanted a simple solution to write and publish short posts directly from mobile, without hosting an additional service.

    Such an app is called a git-based headless CMS. Decap CMS is the most frequently cited solution for git-based content management, but it has two show-stoppers for me:

    1. It’s not mobile friendly (an open issue since 2017), although there are community workarounds.
    2. It’s not entirely client-side: you need to host a serverless script, e.g. on a Cloudflare Worker, to complete authentication.

    Because my website is completely static, it’s easy to take it off GitHub and Cloudflare and move it elsewhere. I want the CMS solution I choose to be purely client-side, so it doesn’t get in the way of moving elsewhere.

    It turns out that Sveltia, an API-compatible and self-proclaimed successor to Decap, is a good fit for this job, with a few caveats.

    Sveltia is a mobile-friendly Progressive Web App (PWA) that doesn’t require a backend. It's a static app that can be added to my static website. It has a simple configuration file to describe what fields each post expects (title, publication date, body, etc).

    Once the configuration and authentication are done, I have access to a lightweight PWA that lets me create new posts.

    The authentication is straightforward for technical people. I need to paste a GitHub Personal Access Token (PAT) in the login page, and that's it. Sveltia will fetch the existing content and display it.

    The PWA itself is also easy to deploy: I need to add a page served under the /admin route that imports the app. I could just import it from a third party CDN, but there’s also an npm package for it. That lets me serve the JavaScript as a first party instead, while easily staying up to date.

    I installed it with

    $ pnpm add @sveltia/cms
    

    I then created an Astro page under src/pages/admin/index.astro with the following content

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1.0" />
        <title>Content Manager – ergaster.org</title>
        <script>
        import { init } from "@sveltia/cms";
        init();
        </script>
      </head>
      <body></body>
    </html>
    

    I also created the config file under public/admin/config.yml with Sveltia Entry Collections matching my Astro content collections. The setup is straightforward and well documented.
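    For illustration, a minimal config.yml for one entry collection might look like this. The repo name, branch, and field names are placeholders, not my actual setup; Sveltia reads the same schema as Decap:

    ```yaml
    # Hypothetical public/admin/config.yml sketch
    backend:
      name: github
      repo: example/my-website   # placeholder owner/repo
      branch: drafts             # push new content to a drafts branch
    media_folder: public/images
    collections:
      - name: notes              # matches an Astro content collection
        label: Notes
        folder: src/content/notes
        create: true
        fields:
          - { name: title, label: Title, widget: string }
          - { name: pubDate, label: Publication date, widget: datetime }
          - { name: body, label: Body, widget: markdown }
    ```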

    Sveltia has a few caveats though:

    1. It can only work on a single branch, and not create a new branch per post. According to the maintainer, it should be possible to create new branches with “Editorial Workflow” by Q2 or Q3 this year.
    2. It pushes content directly to its target branch, including drafts. I still want to run CI checks before merging my content, so I’ve created a drafts branch and configured Sveltia to push content there. Once the CI checks have passed, I merge the branch manually from the GitHub mobile app.
    3. Having a single target branch also means I can only have one draft coming from Sveltia at a time. If I edited two drafts concurrently on the drafts branch, they would both be published the next time I merged drafts into main.
    4. It’s clunky to rename a picture uploaded via Sveltia.

    Those are not deal breakers for me. The maintainer seems responsive, and the Editorial Workflow feature coming in Q2 or Q3 will fix the remaining clunkiness.


      Thibault Martin: Gear review: Garmin Forerunner 165

      news.movim.eu / PlanetGnome • 5 days ago • 5 minutes

    Last year I bought one of those rock solid, simple, sturdy Casio watches that are supposed to last you a long time. I still love it with all my heart and sometimes wear it, but my primary watch is now the Garmin Forerunner 165.

    A dozen years ago I got into running and installed an app on my phone to track my progress. I kept pushing harder to beat my previous records, and eventually injured myself badly. I used to think that the quantified self was the root of all evil and the reason why I overtrained. It was actually ego, and it turns out that the quantified self, with coaching to make sense of the data, can yield amazing results.

    [!success] I got a Garmin Forerunner 165 and I am very happy with it.

    I recommend caution nonetheless: the watch is a great tool to gather metrics about how you run, but it is a terrible replacement for a coach. Working with a professional will give you much better results.

    Why I got it

    In my mid 30s I realized that my metabolism wasn't what it used to be. I started gaining weight and felt uneasy in my body. After two kids and today's geopolitics, my mental health started to degrade too.

    I've decided to get back into running to help with both aspects. Exercising makes the body burn calories and produce endorphins. Both are useful to feel better. Yes, it's only in my mid 30s that I realized that exercising was a physiological need. A need I had neglected for too long.

    But the last time I got into running I injured myself badly. I'm the least competitive person against others, but I'm very competitive against myself. This means I'm subject to overtraining. I needed something to keep me on track.

    Several friends had Garmin watches and told me that their watches actually prevented them from overtraining. That was my cue: I would buy one, and use it to get back in shape. Even better: the model I wanted, the Forerunner 165, could last 11 days on a single charge. It means I could wear it at night to follow my sleep patterns, and it would vibrate gently on my wrist to wake me up silently. This promised to be a low maintenance and useful watch.

    [!info] In summary

    I bought the Forerunner 165 to:

    • Get back into running.
    • Prevent overtraining.
    • Follow my sleep patterns.
    • Not babysit the watch or be constantly nagged by it.

    First impressions

    I bought the watch at the end of August 2025, and I started running immediately with it. In addition to the watch, I also bought a pair of Merrell Trail Glove 7: "barefoot" shoes that have such a thin sole that you have to land on the front of the foot and not on the heel.

    I installed the Garmin Connect app on my phone, and I was delighted to see that I didn't need any subscription to start a coaching plan. I enrolled in a Garmin Coach program for beginners and started following it. At the end of each run, the watch would ask me how I felt, and each run was more painful than the last. I thought it was just muscles building up, so I kept following the program. And this is how I injured myself.

    I went to a physiotherapist, and we started a specific training plan for barefoot shoes. Of course I quit the Garmin Coach one. He taught me that barefoot shoes require a higher cadence (number of steps per minute) than regular running shoes. I could configure the watch to keep track of my cadence during my runs. A gauge would tell me if I ran too few or too many steps per minute.

    After the physiotherapy sessions ended, I could keep running normally. As of writing, I go running three times a week. Each session is between 6 and 12km long.

    What I like

    I didn't want a smart watch because I don't want it to pester me, and I don't want to charge it every day. On those two fronts the Forerunner delivered. I don't receive any of my phone notifications on my watch, and it doesn't pester me with anything during the day. I use my watch when I need it, not when it needs me.

    As for the battery, it is fantastic for this type of watch. With 3 runs a week, my watch lasts 9 to 10 days on a single charge. I keep my watch on at night so it monitors my sleep too. And charging is fast: I can charge it from 10 to 100% in about 1.5 hours.

    I can confidently go to sleep with it and know it will still have plenty of battery to wake me up the next day with a gentle vibration on my wrist. I hate alarms that scream at you in the morning. This gentle nudge is infinitely better.

    I have pathologically bad sleep, and the watch does a good job of tracking it. It even helped me detect sleep apnea, which doctors later confirmed. The watch gives you several metrics for the night: how long you've slept, your average and resting heart rate, your average and lowest respiration, and more. It can also measure your pulse oximetry, but that drains the battery about twice as fast. The watch also supports tracking naps.

    When it comes to exercising, I only use it for running. I can't say anything about its accuracy, but my physiotherapist seemed to believe that all the measures were plausible.

    The wristband has many holes, making it easy to adjust. It is comfortable to wear, even during exercise when the wrist can swell and sweat a bit. It is also slightly elastic, so it can stretch a bit for extra comfort.

    It is possible to use the watch only to track how you run, or to configure workouts in the app depending on your objectives. During my physiotherapy training I would make it track my cadence, but you can track a lot more metrics. You can also configure several steps, e.g. warm-up, light run, fast run, series, etc.

    There are also built-in, free Garmin Coach programs depending on your needs, but I can't say I have a positive experience with them. If you're new to running I really recommend going to a professional coach or physiotherapist to get you started.

    At €230, the watch is not cheap, but seems fairly priced for the amount of value I get from it. I also don't expect to replace it anytime soon.

    What I don't like

    The watch has an odd recovery time metric that can be difficult to understand: apparently you can work out lightly to recover. To this day I'm not entirely sure what it does.

    Beyond that it's a good watch I can't complain about!

    Conclusion

    I’m very happy with my Forerunner 165. It’s important to bear in mind it’s just a tool, not something that can replace a human coach. If your knees or tendons hurt and the watch tells you to go running, don’t. Go see a professional.


      Thibault Martin: I realized that You don't care

      news.movim.eu / PlanetGnome • 5 days ago • 1 minute

    Quite a few of us maintain our own websites and publish our thoughts. We play in hard mode:

    • We need to build our website before even publishing our first post.
    • We don’t benefit from the network effect of bigger platforms to get eyeballs on our writing.
    • LLMs aggressively scrape the web and can serve our thoughts or expertise to their users without them visiting our websites.

    And on top of that, you don’t care.

    And I don’t expect you to care. Like the rest of us, you are flooded with information constantly. You’re fed so many words that you read the equivalent of whole books every day. How entitled would I be to expect you to care about my words when you have to filter through every story you’re bombarded with?

    So why do we keep the small web alive?

    I can’t speak for others, but I know why I maintain my website and why I publish my thoughts there. By increasing order of importance:

    1. I keep my web development skills reasonably up to date.
    2. I can shape my website to adapt to my content, and not the other way around.
    3. I have freedom of tone and vocabulary. I don’t have to censor words like "suicide" or "sex".
    4. I write long form posts that help me shape my thoughts, develop ideas, and receive feedback from my peers and readers.

    If you can afford to, I can only encourage you to write and publish your thoughts on your own platform, as long as you don’t expect others to care in return.