

      Jussi Pakkanen: C++ compiler daemon testing tool

      news.movim.eu / PlanetGnome • 10 February • 1 minute

    In an earlier blog post I wrote about a potential way of speeding up C++ compilations (or those of any language that has a big up-front cost). The basic idea is to have a process that reads in all stdlib header code and is then suspended. Compilations are done by sending the actual source file plus flags to this process, which then forks and resumes compilation. Basically this is a way to persist the state of the compiler without writing (or executing) a single line of serialization code.

    The obvious follow-up question is: what is the speedup of this scheme? That is difficult to say without actually implementing the system. There are way too many variables and uncertainties to make any sort of reasonable estimate.

    So I implemented it.

    Not in an actual compiler, heavens no, I don't have the chops for that. Instead I implemented a completely fake compiler that does the same steps a real compiler would need to take. It spawns the daemon process. It creates a Unix domain socket. It communicates with the running daemon. It produces output files. The main difference is that it does not actually do any compilation, instead it just sleeps to simulate work. Since all sleep durations are parameters, it is easy to test the "real" effect of various schemes.
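
    To make the shape of the trick concrete, here is a minimal sketch in C of the fork-on-request daemon pattern the fake compiler exercises. This is hypothetical code, not the code from the repo linked below; the socket path is invented and the sleeps stand in for the parsing and compilation work. The point of the fork is that each child starts with the parent's memory, fully parsed headers included, at zero serialization cost.

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <sys/wait.h>

    int main(void) {
        sleep(4); /* stand-in for parsing all stdlib headers, paid once */

        int sock = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/compiler-daemon.sock", sizeof addr.sun_path - 1);
        unlink(addr.sun_path);
        bind(sock, (struct sockaddr *)&addr, sizeof addr);
        listen(sock, 16);

        for (;;) {
            int client = accept(sock, NULL, NULL);
            if (client < 0)
                continue;
            if (fork() == 0) {
                /* Child: inherits the fully parsed compiler state for free. */
                char request[4096]; /* source file path + flags */
                ssize_t n = read(client, request, sizeof request - 1);
                if (n > 0) {
                    sleep(1); /* stand-in for compiling only the user's code */
                    write(client, "ok\n", 3);
                }
                _exit(0);
            }
            close(client);
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ; /* reap finished children */
        }
    }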

    The code is in this GH repo.

    The default durations were handwavy estimates based on past experience. In past measurements, stdlib includes take by far the largest chunk of the total compilation time. Thus I estimated that compilation without this scheme would take 5 seconds per file whereas compilations with it would take 1 second. If you disagree with these assumptions, feel free to run the test yourself with your own time estimates.

    The end result was that on this laptop, which has 22 cores, a project with 100 source files took 26 seconds to compile without the daemon and 7 seconds with it. (That tracks the parameters: assuming a fully parallel build, ⌈100/22⌉ = 5 waves of 5-second jobs comes to roughly 25 seconds.) This means the daemon version finished in just over a quarter of the time of a "regular" build.

    Wouldn't you want your compilations to finish in a quarter of the time with zero code changes? I sure would.

    (In reality the speedup is probably less than that. How much? No idea. Someone's got to implement that to find out.)

      nibblestew.blogspot.com/2025/02/c-compiler-daemon-testing-tool.html


      Andy Wingo: baffled by generational garbage collection

      news.movim.eu / PlanetGnome • 9 February • 12 minutes

    Usually in this space I like to share interesting things that I find out; you might call it a research-epistle-publish loop. Today, though, I come not with answers, but with questions, or rather one question, but with fractal surface area: what is the value proposition of generational garbage collection?

    hypothesis

    The conventional wisdom is encapsulated in a 2004 Blackburn, Cheng, and McKinley paper, “Myths and Realities: The Performance Impact of Garbage Collection”, which compares whole-heap mark-sweep and copying collectors to their generational counterparts, using the Jikes RVM as a test harness. (It also examines a generational reference-counting collector, which is an interesting predecessor to the 2022 LXR work by Zhao, Blackburn, and McKinley.)

    The paper finds that generational collectors spend less time than their whole-heap counterparts for a given task. This is mainly due to less time spent collecting, because generational collectors avoid tracing/copying work for older objects that mostly stay in the same place in the live object graph.

    The paper also notes an improvement in mutator time under generational GC, but only for the generational mark-sweep collector, which it attributes to the locality and allocation-speed benefits of bump-pointer allocation in the nursery. For copying collectors, however, generational GC tends to slow down the mutator, probably because of the write barrier, but in the end lower collector times still lead to lower total times.

    So, I expected generational collectors to always exhibit lower wall-clock times than whole-heap collectors.

    test workbench

    In whippet, I have a garbage collector with an abstract API that specializes at compile-time to the mutator’s object and root-set representation and to the collector’s allocator, write barrier, and other interfaces. I embed it in whiffle, a simple Scheme-to-C compiler that can run some small multi-threaded benchmarks, for example the classic Gabriel benchmarks. We can then test those benchmarks against different collectors, mutator (thread) counts, and heap sizes. I expect that the generational parallel copying collector takes less time than the whole-heap parallel copying collector.

    results?

    So, I ran some benchmarks. Take the splay-tree benchmark, derived from Octane’s splay.js. I have a port to Scheme, and the results are... not good!

    [Figure: splay.scm, generational scalability, 8 mutators]

    In this graph the “pcc” series is the whole-heap copying collector, and “generational-pcc” is the generational counterpart, with a nursery sized such that after each collection, its size is 2 MB times the number of mutator threads active in the last collection. So, for this test with eight threads, on my 8-core Ryzen 7 7840U laptop, the nursery is 16 MB including the copy reserve, which happens to be the same size as the L3 on this CPU. New objects are kept in the nursery one cycle before being promoted to the old generation.

    There are also results for “mmc” and “generational-mmc” collectors, which use an Immix-derived algorithm that allows for bump-pointer allocation but doesn’t require a copy reserve. There, the generational collectors use a sticky mark-bit algorithm, which has very different performance characteristics, as promotion is in-place and the nursery is as large as the available heap size.

    The salient point is that at all heap sizes, and for these two very different configurations (mmc and pcc), generational collection takes more time than whole-heap collection. It’s not just the splay benchmark either; I see the same thing for the very different nboyer benchmark. What is the deal?

    I am honestly quite perplexed by this state of affairs. I wish I had a narrative to tie this together, but in lieu of that, voici some propositions and observations.

    “generational collection is good because bump-pointer allocation”

    Sometimes people say that the reason generational collection is good is because you get bump-pointer allocation, which has better locality and allocation speed. This is a misattribution: it’s bump-pointer allocators that have these benefits. You can have them in whole-heap copying collectors, or in whole-heap mark-compact or Immix collectors that bump-pointer allocate into the holes. True, you can also have them in generational collectors with a copying nursery but a freelist-based mark-sweep allocator. But equally, you can have generational collectors without bump-pointer allocation, as in free-list sticky-mark-bit collectors. To simplify this panorama to “generational collectors have good allocators” is incorrect.
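
    To see why, here is roughly what a bump-pointer fast path looks like, as a sketch rather than Whippet’s actual allocator (the names bump_region and bump_alloc are mine). Nothing in it is specific to a nursery: the region could just as well be a whole-heap copying space or an Immix hole.

    #include <stddef.h>
    #include <stdint.h>

    struct bump_region {
        uintptr_t hp;    /* next free address */
        uintptr_t limit; /* end of the region */
    };

    static inline void *bump_alloc(struct bump_region *r, size_t bytes) {
        size_t aligned = (bytes + 7) & ~(size_t)7; /* 8-byte alignment */
        if (r->hp + aligned > r->limit)
            return NULL;  /* out of space: caller triggers a collection */
        void *obj = (void *)r->hp;
        r->hp += aligned; /* the whole fast path: one add, one compare */
        return obj;
    }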

    “generational collection lowers pause times”

    It’s true, generational GC does lower median pause times:

    [Figure: nboyer.scm, size 5, generational, p50 pause times, 8 mutators]

    But because a major collection is usually slightly more work under generational GC than in a whole-heap system (because of, e.g., the need to reset remembered sets), the maximum pauses are just as big, and even a little bigger:

    [Figure: nboyer.scm, size 5, generational, p100 pause times, 8 mutators]

    I am not even sure that it is meaningful to compare median pause times between generational and non-generational collectors, given that the former perform possibly orders of magnitude more collections than the latter.

    Doing fewer whole-heap traces is good, though, and in the ideal case the less frequent major traces under generational collectors allow time for concurrent tracing, which is the true mitigation for long pause times.

    is it whiffle?

    Could it be that the test harness I am using is in some way unrepresentative? I don’t have more than one test harness for Whippet yet. I will start work on a second Whippet embedder within the next few weeks, so perhaps we will have an answer there. Still, there is ample time spent in GC pauses in these benchmarks, so surely as a GC workload Whiffle has some utility.

    One reason that Whiffle might be unrepresentative is that it is an ahead-of-time compiler, whereas nursery addresses are assigned at run-time. Whippet exposes the necessary information to allow a just-in-time compiler to specialize write barriers, for example the inline check that the field being mutated is not in the nursery, and an AOT compiler can’t encode this as an immediate. But it seems a small detail.

    Also, Whiffle doesn’t do much compiler-side work to elide write barriers. Could the cost of write barriers be over-represented in Whiffle, relative to a production language run-time?

    Relatedly, Whiffle is just a baseline compiler. It does some partial evaluation but no CFG-level optimization, no contification, no nice closure conversion, no specialization, and so on: is it not representative because it is not an optimizing compiler?

    is it something about the nursery size?

    How big should the nursery be? I have no idea.

    As a thought experiment, consider the case of a 1 kilobyte nursery. It is probably too small to allow the time for objects to die young, so the survival rate at each minor collection would be high. Above a certain survival rate, generational GC is probably a lose, because your program violates the weak generational hypothesis: it introduces a needless copy for all survivors, and a synchronization for each minor GC.

    On the other hand, a 1 GB nursery is probably not great either. It is plenty large enough to allow objects to die young, but the number of survivor objects in a space that large is such that pause times would not be very low, which is one of the things you would like in generational GC. Also, you lose out on locality: a significant fraction of the objects you traverse are probably out of cache and might even incur TLB misses.

    So there is probably a happy medium somewhere. My instinct is that for a copying nursery, you want to make it about as big as L3 cache, which on my 8-core laptop is 16 megabytes. Systems are different sizes though; in Whippet my current heuristic is to reserve 2 MB of nursery per core that was active in the previous cycle, so if only 4 threads are allocating, you would have an 8 MB nursery. Is this good? I don’t know.

    is it something about the benchmarks?

    I don’t have a very large set of benchmarks that run on Whiffle, and they might not be representative. I mean, they are microbenchmarks.

    One question I had was about heap sizes. If a benchmark’s maximum heap size fits in L3, which is the case for some of them, then probably generational GC is a wash, because whole-heap collection stays in cache. When I am looking at benchmarks that evaluate generational GC, I make sure to choose those that exceed L3 size by a good factor, for example the 8-mutator splay benchmark in which minimum heap size peaks at 300 MB, or the 8-mutator nboyer-5 which peaks at 1.6 GB.

    But then, should nursery size scale with total heap size? I don’t know!

    Incidentally, the way that I scale these benchmarks to multiple mutators is a bit odd: they are serial benchmarks, and I just run some number of threads at a time, and scale the heap size accordingly, assuming that the minimum size when there are 4 threads is four times the minimum size when there is just one thread. However, multithreaded programs are unreliable, in the sense that there is no heap size under which they fail and above which they succeed; I quote:

    "Consider 10 threads each of which has a local object graph that is usually 10 MB but briefly 100MB when calculating: usually when GC happens, total live object size is 10×10MB=100MB, but sometimes as much as 1 GB; there is a minimum heap size for which the program sometimes works, but also a minimum heap size at which it always works."

    is it the write barrier?

    A generational collector partitions objects into old and new sets, and a minor collection starts by visiting all old-to-new edges, called the “remembered set”. As the program runs, mutations to old objects might introduce new old-to-new edges. To maintain the remembered set in a generational collector, the mutator invokes write barriers: little bits of code that run when you mutate a field in an object. This is overhead relative to non-generational configurations, where the mutator doesn’t have to invoke collector code when it sets fields.
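
    For a contiguous nursery, the barrier can be as simple as the following sketch. This is hypothetical code, not Whippet’s (the names in_nursery, write_field, and remset are mine), and a production collector would at least make the remembered-set buffer per-thread; it only illustrates where the mutator-side cost comes from.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Toy layout: assume the nursery occupies one contiguous address range. */
    static uintptr_t nursery_lo, nursery_hi;

    static inline bool in_nursery(void *p) {
        return (uintptr_t)p >= nursery_lo && (uintptr_t)p < nursery_hi;
    }

    /* Remembered set: a log of field addresses holding old-to-new edges. */
    #define REMSET_MAX 4096
    static void **remset[REMSET_MAX];
    static size_t remset_len;

    static inline void write_field(void *obj, void **field, void *value) {
        /* Fast path: only a store of a nursery pointer into an old object
           creates an old-to-new edge that a minor GC must know about. */
        if (!in_nursery(obj) && value && in_nursery(value))
            if (remset_len < REMSET_MAX)
                remset[remset_len++] = field; /* slow path: log the edge */
        *field = value; /* the store itself */
    }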

    So, could it be that Whippet’s write barriers or remembered set are somehow so inefficient that my tests are unrepresentative of the state of the art?

    I used to use card-marking barriers, but I began to suspect they caused too much overhead during minor GC and introduced too much cache contention. I switched to precise field-logging barriers some months back for Whippet’s Immix-derived space, and we use the same kind of barrier in the generational copying (pcc) collector. I think this is state of the art. I need to see if I can find a configuration that allows me to measure the overhead of these barriers, independently of other components of a generational collector.

    is it something about the generational mechanism?

    A few months ago, my only generational collector used the sticky mark-bit algorithm, which is an unconventional configuration: its nursery is non-contiguous and non-moving, and can be as large as the heap. This is part of the reason I implemented generational support for the parallel copying collector: to have a different, more conventional collector to compare against. But generational collection loses on some of these benchmarks in both configurations!

    is it something about collecting more often?

    On one benchmark which repeatedly constructs some trees and then verifies them, I was seeing terrible results for generational GC, which I realized were because of cooperative safepoints: generational GC collects more often, so it requires that all threads reach safepoints more often, and the non-allocating verification phase wasn’t emitting any safepoints. I had to change the compiler to emit safepoints at regular intervals (in my case, on function entry), and it sped up the generational collector by a significant amount.

    This is one instance of a general observation, which is that any work that doesn’t depend on survivor size in a GC pause is more expensive with a generational collector, which runs more collections. Synchronization can be a cost. I had one bug in which tracing ephemerons did work proportional to the size of the whole heap, instead of the nursery; I had to specifically add generational support for the way Whippet deals with ephemerons during a collection to reduce this cost.

    is it something about collection frequency?

    Looking deeper at the data, I have partial answers for the splay benchmark, and they are annoying :)

    Splay doesn’t actually allocate all that much garbage. At a 2.5x heap, the stock parallel MMC collector (in-place, sticky mark bit) collects... one time. That’s all. Same for the generational MMC collector, because the first collection is always major. So at 2.5x we would expect the generational collector to be slightly slower. The benchmark is simply not very good – or perhaps the most generous interpretation is that it represents tasks that allocate 40 MB or so of long-lived data and not much garbage on top.

    Also at 2.5x heap, the whole-heap copying collector runs 9 times, and the generational copying collector does 293 minor collections and... 9 major collections. We are not reducing the number of major GCs. It means either the nursery is too small, so objects aren’t dying young when they could, or the benchmark itself doesn’t conform to the weak generational hypothesis.

    At a 1.5x heap, the copying collector doesn’t have enough space to run. For MMC, the non-generational variant collects 7 times, and generational MMC times out. Timing out indicates a bug, I think. Annoying!

    I tend to think that if I get results and there were fewer than, like, 5 major collections for a whole-heap collector, that indicates that the benchmark is probably inapplicable at that heap size, and I should somehow surface these anomalies in my analysis scripts.

    collecting more often redux

    Doing a similar exercise for nboyer at 2.5x heap with 8 threads (4GB for 1.6GB live data), I see that pcc did 20 major collections, whereas generational pcc lowered that to 8 major collections and 3471 minor collections. Could it be that there are still too many fixed costs associated with synchronizing for global stop-the-world minor collections? I am going to have to add some fine-grained tracing to find out.

    conclusion?

    I just don’t know! I want to believe that generational collection is an out-and-out win, but I haven’t yet been able to show that it is.

    I do have some homework to do. I need to find a way to test the overhead of my write barrier – probably using the MMC collector and making it only do major collections. I need to fix generational-mmc for splay and a 1.5x heap. And I need to do some fine-grained performance analysis for minor collections in large heaps.

    Enough for today. Feedback / reactions very welcome. Thanks for reading and happy hacking!

      wingolog.org/archives/2025/02/09/baffled-by-generational-garbage-collection


      Ross Burton: Fact-checking ChatGPT with Yocto Project release names

      news.movim.eu / PlanetGnome • 7 February • 2 minutes

    For fun, I decided to ask ChatGPT to "explain the themes in the Yocto Project release names". It went about as well as I expected, to be honest:

    The Yocto Project follows a thematic naming convention for its release names. The themes have evolved over time:

    So far so good.

    Types of Birds (Older Releases) Early Yocto Project releases were named after different bird species. Examples include:

    Incorrect, that was not one of the themes.

    Dizzy (a reference to Dizzy Gillespie, but also sounds like a bird name)

    A jazz musician that "sounds like a bird name"? One hell of a stretch, and not the reference.

    Fido (a breed of pigeon)

    From what I can tell, Fido is not a breed of pigeon. Also not the reference.

    Jethro (a reference to Jethro Tull, but also resembles a bird-like theme)

    Jethro Tull the rock band? Not the reference.

    I do wonder why it decided the theme was types of birds, then made up one bird breed and listed two musicians that "sound like" bird names.

    Continuing on...

    Muppets Characters (Mid-Series) The Yocto Project later transitioned to naming releases after characters from The Muppets, a popular puppet show created by Jim Henson. Examples include:

    No, it didn't.

    Krogoth (a lesser-known Muppet)

    Krogoth is so lesser-known that the Muppets Wiki doesn't mention it, so I think this is made-up.

    Morty (another Muppet character)

    According to the Muppets Wiki "Morty is Rick Moranis' agent. He is seen in Muppets Tonight episode 202 [in 1997]".

    Considering how incorrect everything else is, I actually think this is more likely to be a coincidence than an actual reference to reality. There have been a lot of characters over the decades of shows.

    Rocko (potentially a reference to Rocko from Rocko’s Modern Life, though not strictly Muppet-related)

    "not strictly"? A cartoon from the 1990s by some of the people that later gave us Spongebob Squarepants is not the reference for this release.

    Sci-Fi and Fantasy References (Current Theme) The most recent trend in Yocto release names leans toward science fiction and fantasy references. Examples include:

    Surprisingly, this is not the current theme.

    Langdale (a reference to a location, possibly inspired by British landscapes, but also fits a fantasy tone)

    Oh you're close, go on you can do it!

    Mickledore (a name with a Tolkien-esque or fantasy feel)

    I can only imagine why that is...

    Nanbield (continuing the theme of mysterious, fantasy-like names)

    I give up.

    Remember people: verify everything that LLMs say, as they are first-rate bullshit artists.

      www.burtonini.com/blog/2025/02/07/yocto-llm/


      Michael Meeks: 2025-02-07 Friday

      news.movim.eu / PlanetGnome • 7 February

    • Morning calls, contract review, updated blog: renumbering to accommodate the missed 31st January.
    • Started on the mail backlog.
      meeksfamily.uk/~michael/blog/2025-02-07.html


      Michael Meeks: 2025-02-06 Thursday

      news.movim.eu / PlanetGnome • 6 February

    • Tech planning call, worked on a statement of work, and intersecting contractuals through the afternoon - lots of sitting still in one place.
    • Published the next strip.
    • Dinner, and home group in the evening.
      meeksfamily.uk/~michael/blog/2025-02-06.html


      Jens Georg: Remote deploy to server using SSH and Gitlab

      news.movim.eu / PlanetGnome • 6 February • 1 minute

    These are my notes for setting up a “deploy-to-remote-webserver” workflow with ssh and an sftp-only chroot, in case someone might find it useful.

    Compiled from various bits and pieces around the internet.

    Setting up SSH

    • Add a group: groupadd -r remote-deploy. This will add a system group
    • Create a folder for authorized keys for users in that group: mkdir -p /etc/ssh/authorized
    • Modify sshd_config:
    AuthorizedKeysFile /etc/ssh/authorized/%u .ssh/authorized_keys
     
    Match Group remote-deploy
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no
        PasswordAuthentication no
    

    Setting up the chroot jail

    It is important that the path up until the home folder is owned by root:root. Below that, create a folder that is owned by the user and any supplementary group you might need to access (e.g. www-data)

    mkdir -p /path/to/chroot/jail/deploy-user
    chown -R root:root /path/to/chroot/jail/deploy-user
    chmod -R 755 /path/to/chroot/jail/deploy-user
    mkdir /path/to/chroot/jail/deploy-user/public
    chown deploy-user:www-data /path/to/chroot/jail/deploy-user/public
    chmod 755 /path/to/chroot/jail/deploy-user/public
    

    Setting up the individual user(s)

    • Add a user to the group created above: useradd -g remote-deploy -M -d /path/to/chroot/jail/deploy-user -s /usr/sbin/nologin deploy-user
    • Generate a new key pair and move the public key to the newly configured keystore:
    ssh-keygen -t ed25519 -f deploy-user
    mv deploy-user.pub /etc/ssh/authorized/deploy-user
    

    Setting up the gitlab pipeline

    • Obtain the host key from the remote host. To be able to make the variable masked in Gitlab later, it needs to be encoded: ssh-keyscan -q deploy-host.example.com 2>/dev/null | base64 -w0

    • Log into your gitlab instance, go to Settings->CI/CD->Variables

      • Click on “Add variable”

      • Set “Masked and Hidden” and “Protect variable”

      • Add a comment like “SSH host key for deploy host”

      • Set name to SSH_HOST_KEY

      • Paste output of the keyscan command

      • Create another variable with the same settings

      • Add a comment like “SSH private key for deploy user”

      • Set name to SSH_PRIVATE_KEY

      • Paste output of base64 -w0 deploy-user, where deploy-user is the private key generated above

    • Setting up ssh in the pipeline script

    mkdir ~/.ssh
    echo -n "$SSH_HOST_KEY" | base64 -d > ~/.ssh/known_hosts
    
    # If necessary, start ssh-agent
    eval $(ssh-agent)
    echo -n "$SSH_PRIVATE_KEY" | base64 -d | ssh-add -
    
      jensge.org/2025/02/06/remote-deploy-to-server-using-ssh-and-gitlab/


      Alberto Garcia: Keeping your system-wide configuration files intact after updating SteamOS

      news.movim.eu / PlanetGnome • 5 February • 2 minutes

    Introduction

    If you use SteamOS and you like to install third-party tools or modify the system-wide configuration, some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.


    As you all know, SteamOS uses an immutable root filesystem, and users are not expected to modify it because all changes are lost after an OS update.

    However, this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it’s a writable overlay, and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem, such as logs, cached data, etc).

    /etc contains important data that is specific to that particular machine, like the configuration of known network connections, the password of the main user, and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However, the update process also needs to make sure that other changes to /etc don’t conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.

    SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded [1].

    However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept.

    There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.

    [Figure: sample configuration file for the SteamOS updater]

    Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer:

    https://github.com/DeterminateSystems/nix-installer/blob/v0.34.0/src/planner/steam_deck.rs#L273

    As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!


    [1] A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup.
      blogs.igalia.com/berto/2025/02/05/keeping-your-system-wide-configuration-files-intact-after-updating-steamos/


      Felix Häcker: Shortwave 5.0

      news.movim.eu / PlanetGnome • 5 February • 1 minute

    You want background playback? You get background playback! Shortwave 5.0 is now available and finally continues playback when you close the window, resolving the “most popular” issue on GitLab!

    Shortwave uses the new Flatpak background portal for this, which means that the current playback status is now also displayed in the “Background Apps” menu.

    The recording feature has also been overhauled. I have addressed a lot of user feedback here; for example, you can now choose between three different modes:

    • Save All Tracks: Automatically save all recorded tracks
    • Decide for Each Track: Temporarily record tracks and save only the ones you want
    • Record Nothing: Stations are played without recording

    In addition to that the directory for saving recorded tracks can be customized, and users can now configure the minimum and maximum duration of recordings.

    There is a new dialog window with additional details and options for current or past played tracks. For example, you no longer need to worry about forgetting to save your favorite track when the recording is finished – you can now mark tracks directly during playback so that they are automatically saved when the recording is completed.

    You don’t even need to open Shortwave for this: thanks to the improved notifications, you can decide directly, when a new track gets played, whether you want to save it or not.

    Of course the release also includes the usual number of bug fixes and improvements. For example, the volume can now be changed using keyboard shortcuts.

    Enjoy!

    Get it on Flathub

      blogs.gnome.org/haeckerfelix/2025/02/05/shortwave-5-0/


      Flathub Blog: On the Go: Making it Easier to Find Linux Apps for Phones & Tablets

      news.movim.eu / PlanetGnome • 5 February • 2 minutes

    With apps made for different form factors, it can be hard to find what works for your specific device. For example, we know it can be a bit difficult to find great apps that are actually designed to be used on a mobile phone or tablet. To help solve this, we’re introducing a new collection: On the Go.

    On the go: Apps for your Linux phones and tablets

    As the premier source of apps for Linux, Flathub serves a wide range of people across a huge variety of hardware: from ultra powerful developer workstations to thin and light tablets; from handheld gaming consoles to a growing number of mobile phones. Generally any app on Flathub will work on a desktop or laptop with a large display, keyboard, and mouse or trackpad. However, devices with only touch input and smaller screen sizes have more constraints.

    Revealing the App Ecosystem

    Using existing data and open standards, we’re now highlighting apps on Flathub that report as being designed to work on these mobile form factors. This new On the Go collection uses existing device support data submitted by app developers in their MetaInfo, the same spec that is used to build those apps’ listings for Flathub and other app store clients. The collection is featured on the Flathub.org home page for all devices.

    [Figure: Foliate app adapting across a desktop, tablet, and phone]

    Many of these apps are adaptive across screen sizes and input methods; you might be surprised to know that your favorite app on your desktop will also work great on a Linux phone, tablet, or Steam Deck’s touch screen. We aim to help reveal just how rich and well-rounded the app ecosystem already is for these devices—and to give app developers another place for their apps to shine and be discovered.

    Developers: It’s Up to You

    As of this writing there are over 150 apps in the collection, but we expect there are cases where app developers have not provided the requisite device support data.

    If you’re the creator of an app that should work well on mobile form factors but isn’t featured in the collection, take a minute to double-check the documentation and your own app’s MetaInfo to ensure it’s accurate. Device support data can also be used by native app store clients across form factors to determine which apps are displayed or how they are ranked, so it’s a good idea to ensure it’s up to date regardless of what your app supports.

      docs.flathub.org/blog/on-the-go-linux-mobile-collection