

      Thibault Martin: TIL that htop can display more useful metrics

      news.movim.eu / PlanetGnome • 13 June • 1 minute

A program on my Raspberry Pi was reading data from disk, performing operations, and writing the result back to disk. It did so unusually slowly. The problem could be either that the CPU was too underpowered to perform the operations it needed, or that the disk was too slow during reads, writes, or both.

I asked colleagues for opinions, and one of them mentioned that htop could point me in the right direction. The time a CPU spends waiting for an I/O device such as a disk is known as I/O wait. If that wait time exceeds 10%, the CPU spends a lot of time waiting for data from the I/O device, so the disk is likely the bottleneck. If the wait time remains low, the CPU is likely the bottleneck.
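To make the metric concrete, here is a small C sketch of how that percentage is derived from two samples of the "cpu" line in /proc/stat; the sample numbers further below are invented for illustration:

```c
/* Each sample holds the first seven fields of the "cpu" line of
 * /proc/stat, in USER_HZ ticks:
 * user, nice, system, idle, iowait, irq, softirq. */
static double
iowait_percent (const unsigned long before[7],
                const unsigned long after[7])
{
  unsigned long d_total = 0;
  unsigned long d_iowait = after[4] - before[4];

  for (int i = 0; i < 7; i++)
    d_total += after[i] - before[i];

  /* iowait's share of all CPU time spent during the interval */
  return d_total ? 100.0 * d_iowait / d_total : 0.0;
}
```

With hypothetical samples { 100, 0, 50, 800, 40, 5, 5 } and { 110, 0, 60, 820, 90, 7, 13 }, the deltas sum to 100 ticks, of which 50 are iowait, i.e. a wa of 50%: a disk-bound interval by the 10% rule of thumb.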

By default, htop doesn’t show the wait time. By pressing F2 I can access htop’s configuration. There I can use the right arrow to move to the Display options, select Detailed CPU time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest), and press Space to enable it.

I can then press the left arrow to get back to the options menu, and move to Meters. Using the right arrow I can go to the rightmost column, select CPUs (1/1): all CPUs by pressing Enter, move it to one of the two columns, and press Enter when I’m done. With it still selected, I can press Enter to cycle through the different visualisations. The most useful to me is the [Text] one.

I can do the same with Disk IO to track the global read/write speed, and Blank to make the whole set-up more readable.

With htop configured like this, I can trigger my slow program again and see that the CPU is not waiting for the disk: all CPUs have a wa of 0%.

    If you know more useful tools I should know about when chasing bottlenecks, or if you think I got something wrong, please email me at thib@ergaster.org!


      This post is public

      ergaster.org /til/htop-is-super-useful/


      Lennart Poettering: ASG! 2025 CfP Closes Tomorrow!

      news.movim.eu / PlanetGnome • 12 June

    The All Systems Go! 2025 Call for Participation Closes Tomorrow!

The Call for Participation (CFP) for All Systems Go! 2025 will close tomorrow, on the 13th of June! We’d like to invite you to quickly submit your proposals for consideration to the CFP submission site!


      0pointer.net /blog/asg-2025-cfp-closes-tomorrow.html


      Andy Wingo: whippet in guile hacklog: evacuation

      news.movim.eu / PlanetGnome • 11 June • 1 minute

    Good evening, hackfolk. A quick note this evening to record a waypoint in my efforts to improve Guile’s memory manager.

    So, I got Guile running on top of the Whippet API. This API can be implemented by a number of concrete garbage collector implementations. The implementation backed by the Boehm collector is fine, as expected. The implementation that uses the bump-pointer-allocation-into-holes strategy is less good. The minor reason is heap sizing heuristics; I still get it wrong about when to grow the heap and when not to do so. But the major reason is that non-moving Immix collectors appear to have pathological fragmentation characteristics.

Fragmentation, for our purposes, is memory under the control of the GC which was free after the previous collection, but which the current cycle failed to use for allocation. I have the feeling that for the non-moving Immix-family collector implementations, fragmentation is much higher than for size-segregated freelist-based mark-sweep collectors. For an allocation of, say, 1024 bytes, the collector might have to scan over many smaller holes until it finds a hole that is big enough. This wastes free memory. Fragmented memory is not gone—it is still available for allocation!—but it won’t be allocatable until after the current cycle, when we visit all holes again. In Immix, fragmentation wastes allocatable memory during a cycle, hastening collection and causing more frequent whole-heap traversals.
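To picture why scanning holes wastes memory, here is a toy first-fit scan over a hole list in C; this is a deliberately simplified model for exposition, not Whippet's actual allocator:

```c
#include <stddef.h>

/* Scan holes left to right for the first one that fits `request` bytes.
 * Holes that are too small are skipped: that memory is still free, but
 * it is unusable for this allocation until the next collection rebuilds
 * the hole list. */
static int
first_fit (const unsigned hole_sizes[], size_t n_holes, unsigned request,
           unsigned *skipped_bytes)
{
  *skipped_bytes = 0;
  for (size_t i = 0; i < n_holes; i++)
    {
      if (hole_sizes[i] >= request)
        return (int) i;                 /* usable hole found */
      *skipped_bytes += hole_sizes[i];  /* too small: wasted for this cycle */
    }
  return -1;                            /* nothing fits: time to collect */
}
```

Against holes of 128, 256, 64, and 2048 bytes, a 1024-byte request skips 448 bytes of still-free memory before it succeeds; those bytes stay unusable until the next collection rebuilds the hole list.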

    The value proposition of Immix is that if there is too much fragmentation, you can just go into evacuating mode, and probably improve things. I still buy it. However I don’t think that non-moving Immix is a winner. I still need to do more science to know for sure. I need to fix Guile to support the stack-conservative, heap-precise version of the Immix-family collector which will allow for evacuation.

    So that’s where I’m at: a load of gnarly Guile refactors to allow for precise tracing of the heap. I probably have another couple weeks left until I can run some tests. Fingers crossed; we’ll see!


      wingolog.org /archives/2025/06/11/whippet-in-guile-hacklog-evacuation


      Alireza Shabani: Why GNOME’s Translation Platform Is Called “Damned Lies”

      news.movim.eu / PlanetGnome • 11 June • 2 minutes

Damned Lies is the name of GNOME’s web application for managing localization (l10n) across its projects. But why is it called that?

    Damned Lies about GNOME

    Screenshot of Gnome Damned Lies from Google search with the title: Damned Lies about GNOME

On the About page of GNOME’s localization site, the only explanation given for the name Damned Lies is a link to a Wikipedia article called “Lies, damned lies, and statistics”.

“Damned Lies” comes from the saying “Lies, damned lies, and statistics”, a 19th-century phrase used to describe the persuasive power of statistics to bolster weak arguments, as described on Wikipedia. One of its earliest known uses appeared in an 1891 letter to the National Observer, which categorised lies into three types:

    “Sir, —It has been wittily remarked that there are three kinds of falsehood: the first is a ‘fib,’ the second is a downright lie, and the third and most aggravated is statistics. It is on statistics and on the absence of statistics that the advocate of national pensions relies …”

To find out more, I asked in GNOME’s i18n Matrix room, and Alexandre Franke helped a lot. He said:

    Stats are indeed lies, in many ways.
    Like if GNOME 48 gets 100% translated in your language on Damned Lies, it doesn’t mean the version of GNOME 48 you have installed on your system is 100% translated, because the former is a real time stat for the branch and the latter is a snapshot (tarball) at a specific time.
    So 48.1 gets released while the translation is at 99%, and then the translators complete the work, but you won’t get the missing translations until 48.2 gets released.
    Works the other way around: the translation is at 100% at the time of the release, but then there’s a freeze exception and the stats go 99% while the released version is at 100%.
    Or you are looking at an old version of GNOME for which there won’t be any new release, which wasn’t fully translated by the time of the latest release, but then a translator decided that they wanted to see 100% because the incomplete translation was not looking as nice as they’d like, and you end up with Damned Lies telling you that version of GNOME was fully translated when it never was and never will be.
    All that to say that translators need to learn to work smart, at the right time, on the right modules, and not focus on the stats.

So there you have it: Damned Lies is a name that reminds us that numbers and statistics can be misleading, even on GNOME’s l10n web application.


      blogs.gnome.org /alirezash/2025/06/11/why-gnomes-translation-platform-is-called-damned-lies/


      Adrian Vovk: Introducing stronger dependencies on systemd

      news.movim.eu / PlanetGnome • 10 June • 5 minutes

    Doesn’t GNOME already depend on systemd?

Kinda… GNOME doesn’t have a formal, well-defined policy about systemd. The rule of thumb is that GNOME doesn’t strictly depend on systemd for critical desktop functionality, but individual features may break without it.

GNOME does strongly depend on logind, systemd’s session and seat management service. GNOME first introduced support for logind in 2011, then in 2015 ConsoleKit support was removed and logind became a requirement. However, logind can exist in isolation from systemd: the modern elogind service does just that, and even back in 2015 there were alternatives available. Some distributors chose to patch ConsoleKit support back into GNOME. This way, GNOME can run in environments without systemd, including the BSDs.

    While GNOME can run with other init systems, most upstream GNOME developers are not testing GNOME in these situations. Our automated testing infrastructure (i.e. GNOME OS) doesn’t test any non-systemd codepaths. And many modules that have non-systemd codepaths do so with the expectation that someone else will maintain them and fix them when they break.

    What’s changing?

    GNOME is about to gain a few strong dependencies on systemd, and this will make running GNOME harder in environments that don’t have systemd available.

Let’s start with the easier of the changes. GDM is gaining a dependency on systemd’s userdb infrastructure. GNOME and systemd do not support running more than one graphical session under the same user account, but GDM supports multi-seat configurations and Remote Login with RDP. This means that GDM may try to display multiple login screens at once, and thus multiple graphical sessions at once. At the moment, GDM relies on legacy behaviors and straight-up hacks to get this working, but this solution is incompatible with the modern dbus-broker, so we’re looking to clean this up. To that end, GDM now leverages systemd-userdb to dynamically allocate user accounts, and then runs each login screen as a unique user.

In the future, we plan to further depend on userdb by dropping the AccountsService daemon, which was designed to be a stop-gap measure for the lack of a rich user database. 15 years later, this “temporary” solution is still in use. Now that systemd’s userdb enables rich user records, we can start work on replacing AccountsService.

    Next, the bigger change. Since GNOME 3.34, gnome-session uses the systemd user instance to start and manage the various GNOME session services. When systemd is unavailable, gnome-session falls back to a builtin service manager. This builtin service manager uses .desktop files to start up the various GNOME session services, and then monitors them for failure. This code was initially implemented for GNOME 2.24, and is starting to show its age. It has received very minimal attention in the 17 years since it was first written. Really, there’s no reason to keep maintaining a bespoke and somewhat primitive service manager when we have systemd at our disposal. The only reason this code hasn’t completely bit rotted is the fact that GDM’s aforementioned hacks break systemd and so we rely on the builtin service manager to launch the login screen.

Well, that has now changed. The hacks in GDM are gone, and the login screen’s session is managed by systemd. This means that the builtin service manager will now be completely unused and untested. Moreover: we’d like to implement a session save/restore feature, but the builtin service manager interferes with that. For this reason, the code is being removed.

    So what should distros without systemd do?

    First, consider using GNOME with systemd. You’d be running in a configuration supported, endorsed, and understood by upstream. Failing that, though, you’ll need to implement replacements for more systemd components, similarly to what you have done with elogind and eudev.

To help you out, I’ve put a temporary alternate code path into GDM that makes it possible to run GDM without an implementation of userdb. When compiled against elogind, instead of trying to allocate dynamic users, GDM will look up and use the gdm-greeter user for the first login screen it spawns, gdm-greeter-2 for the second, and gdm-greeter-N for the Nth. GDM will behave similarly with the gnome-initial-setup[-N] users. You can statically allocate as many of these users as necessary, and GDM will work with them for now. It’s quite likely that this will be necessary for GNOME 49.

Next: you’ll need to deal with the removal of gnome-session’s builtin service manager. If you don’t have a service manager running in the user session, you’ll need to get one. Just like system services, GNOME session services now install systemd unit files, and you’ll have to replace these unit files with your own service manager’s definitions. Next, you’ll need to replace the “session leader” process: this is the main gnome-session binary that’s launched by GDM to kick off session startup. The upstream session leader just talks to systemd over D-Bus to upload its environment variables and then start a unit, so you’ll need to replace that with something that communicates with your service manager instead. Finally, you’ll probably need to replace “gnome-session-ctl”, which is a tiny helper binary that’s used to coordinate between the session leader, the main D-Bus service, and systemd. It is also quite likely that this will be needed for GNOME 49.
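For distributors replacing those unit files, the general shape of a systemd user unit for a session service looks roughly like this (an illustrative sketch only; the service name, bus name, and path are invented, not actual GNOME units):

```ini
# Illustrative user unit; not shipped by GNOME
[Unit]
Description=Example session service
# Tie the service's lifecycle to the graphical session
PartOf=graphical-session.target
After=graphical-session-pre.target

[Service]
Type=dbus
BusName=org.example.SessionService
ExecStart=/usr/libexec/example-session-service
Restart=on-failure

[Install]
WantedBy=graphical-session.target
```

A non-systemd service manager would need equivalent definitions expressing the same start ordering and session lifecycle.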

    Finally: You should implement the necessary infrastructure for the userdb Varlink API to function. Once AccountsService is dropped and GNOME starts to depend more on userdb, the alternate code path will be removed from GDM. This will happen in some future GNOME release (50 or later). By then, you’ll need at the very least:

    • An implementation of systemd-userdbd’s io.systemd.Multiplexer
    • If you have NSS, a bridge that exposes NSS-defined users through the userdb API.
    • A bridge that exposes userdb-defined users through your libc’s native user lookup APIs (such as getpwent ).
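For reference, the records served over the userdb API are JSON documents. A minimal user record might look roughly like the following (field names follow systemd's JSON User Records format; the values here are invented):

```json
{
    "userName": "gdm-greeter-2",
    "uid": 60512,
    "gid": 60512,
    "realName": "GDM greeter session 2",
    "homeDirectory": "/run/gdm/greeter",
    "shell": "/usr/sbin/nologin"
}
```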

    Apologies for the short timeline, but this blog post could only be published after I knew how exactly I’m splitting up gnome-session into separate launcher and main D-Bus service processes. Keep in mind that GNOME 48 will continue to receive security and bug fixes until GNOME 50 is released. Thus, if you cannot address these changes in time, you have the option of holding back the GNOME version. If you can’t do that, you might be able to get GNOME 49 running with gnome-session 48, though this is a configuration that won’t be tested or supported upstream so your mileage will vary (much like running GNOME on other init systems). Still, patching that scenario to work may buy you more time to upgrade to gnome-session 49.

    And that should be all for now!


      blogs.gnome.org /adrianvovk/2025/06/10/gnome-systemd-dependencies/


      GNOME Foundation News: GNOME Has a New Infrastructure Partner: Welcome AWS!

      news.movim.eu / PlanetGnome • 10 June • 4 minutes

    This post was contributed by Andrea Veri from the GNOME Foundation.

GNOME has historically hosted its infrastructure on premises. That changed with an AWS Open Source Credits program sponsorship, which has allowed our team of two SREs to migrate the majority of the workloads to the cloud and turn the existing OpenShift environment into a fully scalable and fault-tolerant one, thanks to the infrastructure provided by AWS. By moving to the cloud, we have dramatically reduced the maintenance burden, achieved lower latency for our users and contributors, and increased security through better access controls.

Our original infrastructure did not account for the exponential growth that GNOME has seen in its contributors and userbase over the past 4-5 years thanks to the introduction of GNOME Circle. GNOME Circle is composed of applications that are not part of core GNOME but are meant to extend the ecosystem without being bound to the stricter core policies and release schedules. Contributions to these projects also make contributors eligible for GNOME Foundation membership, and may eventually earn them direct commit access to GitLab if their contributions are consistent over a long period of time and they gain the community’s trust. GNOME recently migrated to GitLab, away from cgit and Bugzilla.

    In this post, we’d like to share some of the improvements we’ve made as a result of our migration to the cloud.

    A history of network and storage challenges

    In 2020, we documented our main architectural challenges:

    1. Our infrastructure was built on OpenShift in a hyperconverged setup, using OpenShift Data Foundations (ODF), running Ceph and Rook behind the scenes. Our control plane and workloads were also running on top of the same nodes.
    2. Because GNOME historically did not have an L3 network and generally had no plans to upgrade the underlying network equipment and/or invest time in refactoring it, we would have to run our gateway using a plain Linux VM with all the associated consequences.
    3. We also wanted to make use of an external Ceph cluster with slower storage, but this was not supported in ODF and required extra glue to make it work.
    4. No changes were planned on the networking equipment side to make links redundant. That meant a code upgrade on switches would have required full service downtime.
5. We had to work with Dell support for every broken hardware component, which added further toil.
    6. With the GNOME user and contributor base always increasing, we never really had a good way to scale our compute resources due to budget constraints.

    Cloud migration improvements

    In 2024, during a hardware refresh cycle, we started evaluating the idea of migrating to the public cloud. We have been participating in the AWS Open Source Credits program for many years and received sponsorship for a set of Amazon Simple Storage Service (S3) buckets that we use widely across GNOME services. Based on our previous experience with the program and the people running it, we decided to request sponsorship from AWS for the entire infrastructure, which was kindly accepted.

    I believe it’s crucial to understand how AWS resolved the architectural challenges we had as a small SRE team (just two engineers!). Most importantly, the move dramatically reduced the maintenance toil we had:

    1. Using AWS’s provided software-defined networking services, we no longer have to rely on an external team to apply changes to the underlying networking layout. This also gave us a way to use a redundant gateway and NAT without having to expose worker nodes to the internet.
    2. We now use AWS Elastic Load Balancing (ELB) instances (classic load balancers are the only type supported by OpenShift for now) as a traffic ingress for our OpenShift cluster. This reduces latency as we now operate within the same VPC instead of relying on an external load balancing provider. This also comes with the ability to have access to the security group APIs which we can use to dynamically add IP addresses. This is critical when we have individuals or organizations abusing specific GNOME services with thousands of queries per minute.
    3. We also use Amazon Elastic Block Store (EBS) and Amazon Elastic File System (EFS) via the OpenShift CSI driver. This allows us to avoid having to manage a Ceph cluster, which is a major win in terms of maintenance and operability.
    4. With AWS Graviton instances, we now have access to ARM64 machines, which we heavily leverage as they’re generally cheaper than their Intel counterparts.
    5. Given how extensively we use Amazon S3 across the infrastructure, we were able to reduce latency and costs due to the use of internal VPC S3 endpoints.
    6. We took advantage of AWS Identity and Access Management (IAM) to provide granular access to AWS services, giving us the possibility to allow individual contributors to manage a limited set of resources without requiring higher privileges.
    7. We now have complete hardware management abstraction, which is vital for a team of only two engineers who are trying to avoid any additional maintenance burden.

    Thank you, AWS!

    I’d like to thank AWS for their sponsorship and the massive opportunity they are giving to the GNOME Infrastructure to provide resilient, stable and highly available workloads to GNOME’s users and contributors across the globe.


      foundation.gnome.org /2025/06/10/gnome-has-a-new-infrastructure-partner-welcome-aws/


      Daniel García Moreno: Log Detective: Google Summer of Code 2025

      news.movim.eu / PlanetGnome • 9 June • 1 minute

I'm glad to say that I'll participate again in GSoC, as a mentor. This year we will try to improve the RPM packaging workflow using AI, as part of the openSUSE project.

So this summer I'll be mentoring an intern who will research how to integrate Log Detective with openSUSE tooling, improving the packager workflow for maintaining RPM packages.


    Log Detective

Log Detective is an initiative created by the Fedora project, with the goal of:

    "Train an AI model to understand RPM build logs and explain the failure in simple words, with recommendations how to fix it. You won't need to open the logs at all."


As a project promoted by Fedora, it's highly integrated with the build tools around that distribution and RPM packages. But RPM packages are used in a lot of different distributions, so this "expert" LLM will be helpful for everyone doing RPM, and everyone doing RPM should contribute to it.

This is open source, so if, at openSUSE, we want to have something similar to improve the OBS, we don't need to reimplement it; we can collaborate. And that's the idea of this GSoC project.

We want to use Log Detective, but also contribute failures from openSUSE to improve the training and the AI. This should benefit openSUSE, but it will also benefit Fedora and all other RPM-based distributions.

    The intern

The selected intern is Aazam Thakur. He studies at the University of Mumbai, India. He has experience using SUSE, having previously worked on SLES 15.6 during a summer mentorship at the Open Mainframe Project, doing RPM packaging.

I'm sure that he will achieve great things during these three months. The project looks very promising, and it's one of the areas where AI and LLMs will shine: digging into logs is always difficult, and if we train an LLM with a lot of data, it can be really useful for categorizing failures and giving a short description of what's happening.


      danigm.net /gsoc-2025.html


      Tanmay Patil: Acrostic Generator for GNOME Crossword Editor

      news.movim.eu / PlanetGnome • 8 June • 11 minutes

The experimental Acrostic Generator has finally landed in the Crossword editor and is currently tagged as BETA.
I’d classify this as one of the trickiest and most interesting projects I’ve worked on.
Here’s what an acrostic puzzle loaded inside the Crossword editor looks like:

    In my previous blog post (published about a year ago), I explained one part of the generator. Since then, there have been many improvements.
    I won’t go into detail about what an acrostic puzzle is, as I’ve covered that in multiple previous posts already.
    If you’re unfamiliar, please check out my earlier post for a brief idea.

Coming to the Acrostic Generator, I’ll begin by showing an illustration of the input and the corresponding output it generates. After that, I’ll walk through the implementation and the challenges I faced.

Let’s take the quote “CATS ALWAYS TAKE NAPS”, whose author is a “CAT”.

Here’s what the Acrostic Generator essentially does:

    It generates answers like “CATSPAW” , “ALASKAN” and “TYES” which, as you can probably guess from the color coding, are made up of letters from the original quote.

    Core Components

Before explaining how the Acrostic Generator works, I want to briefly explain some of the key components involved.
1. Word list
The word list is an important part of Crosswords. It provides APIs to efficiently search for words. Refer to the documentation to understand how it works.
2. IpuzCharset
The performance of the Acrostic Generator heavily depends on IpuzCharset, which is essentially a HashMap that stores characters and their frequencies.
We perform numerous ipuz_charset_add_text and ipuz_charset_remove_text operations on the QUOTE charset. I'd especially like to highlight ipuz_charset_remove_text, which used to be computationally very slow. Last year, the charset was rewritten in Rust by Federico. Compared to the earlier implementation in C using a GTree, the Rust version turned out to be considerably faster.
Here’s Federico’s blog post on rustifying libipuz’s charset.
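To illustrate what the charset does, here is a minimal C sketch of a letter-frequency map with all-or-nothing removal; this is a toy model for exposition, not libipuz's actual implementation:

```c
#include <ctype.h>

/* Toy charset: frequency counts for A-Z only. */
typedef struct { int counts[26]; } Charset;

static void
charset_add_text (Charset *cs, const char *text)
{
  for (; *text; text++)
    if (isalpha ((unsigned char) *text))
      cs->counts[toupper ((unsigned char) *text) - 'A']++;
}

/* Succeeds only if every letter of `text` is still available; on failure
 * the charset is left unchanged, which is what lets a generator backtrack. */
static int
charset_remove_text (Charset *cs, const char *text)
{
  Charset copy = *cs;

  for (; *text; text++)
    if (isalpha ((unsigned char) *text))
      {
        int idx = toupper ((unsigned char) *text) - 'A';
        if (--copy.counts[idx] < 0)
          return 0;           /* not enough of this letter: no change */
      }

  *cs = copy;
  return 1;
}
```

With the quote “CATS ALWAYS TAKE NAPS” loaded, removing “CATSPAW”, “ALASKAN”, and “TYES” in turn succeeds and consumes every letter, matching the illustration earlier in the post.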

    Why is ipuz_charset_remove_text latency so important? Let's consider the following example:

    QUOTE: "CARNEGIE VISITED PRINCETON AND TOLD WILSON WHAT HIS YOUNG MEN NEEDED WAS NOT A LAW SCHOOL BUT A LAKE TO ROW ON IN ADDITION TO BEING A SPORT THAT BUILT CHARACTER AND WOULD LET THE UNDERGRADUATES RELAX ROWING WOULD KEEP THEM FROM PLAYING FOOTBALL A ROUGHNECK SPORT CARNEGIE DETESTED"
    SOURCE: "DAVID HALBERSTAM THE AMATEURS"

In this case, the maximum number of ipuz_charset_remove_text operations required in the worst case would be:

    73205424239083486088110552395002236620343529838736721637033364389888000000

…which is a lot.

    Terminology

I’d also like you to take note of a few things.
1. Answers and Clues refer to the same thing: they are the solutions generated by the Acrostic Generator. I’ll be using the terms interchangeably throughout.
2. We’ve set two constants in the engine: MIN_WORD_SIZE = 3 and MAX_WORD_SIZE = 20. These make sure the answers are neither too short nor too long, and help stop the engine from running indefinitely.
3. Leading characters are the characters of the source; each one is the first letter of the corresponding answer.

    Setting up things

    Before running the engine, we need to set up some data structures to store the results.

typedef struct {
  /* Represents an answer */
  gunichar leading_char;
  const gchar *letters;
  guint word_length;

  /* Searching for the answer */
  gchar *filter;
  WordList *word_list;
  GArray *rand_offset;
} ClueEntry;

We use a ClueEntry structure to store the answer for each clue. It holds the leading character (from the source), the letters of the answer, the word length, and some additional word list information.
Oh wait, why do we need the word length if we are already storing the letters of the answer?
Let’s backtrack. Initially, I wrote the following brute-force recursive algorithm:

void
acrostic_generator_helper (AcrosticGenerator *self,
                           gchar nth_source_char)
{
  // Iterate from min_word_size to max_word_size for every answer
  for (word_length = min_word_size; word_length <= max_word_size; word_length++)
    {
      // Get the list of words starting with `nth_source_char`
      // and with length equal to word_length
      word_list = get_word_list (starting_letter = nth_source_char, word_length);

      // Iterate through the word list
      for (guint i = 0; i < word_list_get_n_items (word_list); i++)
        {
          word = word_list[i];

          // Check if the word is present in the quote charset
          if (ipuz_charset_remove_text (quote_charset, word))
            {
              // If present, move on to the next source char
              acrostic_generator_helper (self, nth_source_char + 1);
            }
        }
    }
}

The problem with this approach is that it is too slow. We were iterating from MIN_WORD_SIZE to MAX_WORD_SIZE and trying to find a solution for every possible size. Yes, this would work and we would eventually find a solution, but it would take a lot of time. Also, many of the answers for the initial source characters would end up having a length equal to MIN_WORD_SIZE.
To quantify this: compared to the latest approach (which I’ll discuss shortly), we would be performing roughly 20 times the current number (7.3 × 10⁷³) of ipuz_charset_remove_text operations.

    To fix this, we added randomness by calculating and assigning random lengths to clue answers before running the engine.
    To generate these random lengths, we break a number equal to the length of the quote string into n parts (where n is the number of source characters), each part having a random value.

static gboolean
generate_random_lengths (GArray *clues,
                         guint number,
                         guint min_word_size,
                         guint max_word_size)
{
  if ((clues->len * max_word_size) < number)
    return FALSE;

  guint sum = 0;

  for (guint i = 0; i < clues->len; i++)
    {
      ClueEntry *clue_entry;
      guint len;
      guint max_len = MAX (min_word_size,
                           MIN (max_word_size, number - sum));

      len = rand () % (max_len - min_word_size + 1) + min_word_size;
      sum += len;

      clue_entry = &(g_array_index (clues, ClueEntry, i));
      clue_entry->word_length = len;
    }

  return sum == number;
}

    I have been continuously researching ways to generate random lengths that help the generator find answers as quickly as possible.
What I concluded is that the Acrostic Generator performs best when the word lengths follow a right-skewed distribution.
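One way to get that right skew can be sketched as a simplified, GLib-free stand-in for generate_random_lengths: take the smaller of two uniform draws for each length. This is an illustrative sketch, not the shipped implementation, but the retry contract (return success only when the parts sum exactly to the target) mirrors the real code:

```c
#include <stdlib.h>

/* Split `total` into `n` parts, each in [min_size, max_size], biased toward
 * shorter parts (right-skewed) by taking the smaller of two uniform draws.
 * Returns 1 only when the parts sum exactly to `total`; callers retry
 * otherwise. */
static int
split_lengths (unsigned total, unsigned n,
               unsigned min_size, unsigned max_size,
               unsigned *out)
{
  unsigned sum = 0;

  if (n * max_size < total || n * min_size > total)
    return 0;

  for (unsigned i = 0; i < n; i++)
    {
      unsigned remaining = sum < total ? total - sum : 0;
      unsigned max_len = max_size < remaining ? max_size : remaining;
      unsigned span, a, b;

      if (max_len < min_size)
        max_len = min_size;

      span = max_len - min_size + 1;
      a = (unsigned) rand () % span;
      b = (unsigned) rand () % span;
      out[i] = min_size + (a < b ? a : b);  /* min of two draws: skews low */
      sum += out[i];
    }

  return sum == total;
}
```

Because the minimum of two uniform draws is right-skewed, short answers stay common without every early answer collapsing to MIN_WORD_SIZE.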

static void
fill_clue_entries (GArray *clues,
                   ClueScore *candidates,
                   WordListResource *resource)
{
  for (guint i = 0; i < clues->len; i++)
    {
      ClueEntry *clue_entry;

      clue_entry = &(g_array_index (clues, ClueEntry, i));

      // Generate a filter to get words whose starting letter is the
      // nth char of the source string.
      // For e.g. char = D, answer_len = 5:
      // filter = "D????"
      clue_entry->filter = generate_individual_filter (clue_entry->leading_char,
                                                       clue_entry->word_length);

      // Load all words whose starting letter equals the nth char of the source string
      clue_entry->word_list = word_list_new ();
      word_list_set_resource (clue_entry->word_list, resource);
      word_list_set_filter (clue_entry->word_list, clue_entry->filter, WORD_LIST_MATCH);

      candidates[i].index = i;
      candidates[i].score = clue_entry->word_length;

      // Randomise the word list, which is sorted by default
      clue_entry->rand_offset = generate_random_lookup (word_list_get_n_items (clue_entry->word_list));
    }
}

Now that we have random lengths, we fill up the ClueEntry data structure.
Here, we generate individual filters for each clue, which are used to set the filter on each word list. For example, the filters for the example illustrated above are C??????, A??????, and T???.
We also maintain a separate word list for each clue entry. Note that we do not store the huge word list individually for every clue. Instead, each word list object refers to the same memory-mapped word list resource.
Additionally, each clue entry contains a random offsets array, which stores a randomized order of indices. We use this to traverse the filtered word list in a random order. This randomness helps fix the problem where many answers for the initial source characters would otherwise end up with length equal to MIN_WORD_SIZE.
The advantage of pre-calculating all of this before running the engine is that the main engine loop only performs the heavy operations: ipuz_charset_remove_text and ipuz_charset_add_text.

static gboolean
acrostic_generator_helper (AcrosticGenerator *self,
                           GArray *clues,
                           guint index,
                           IpuzCharsetBuilder *remaining_letters,
                           ClueScore *candidates)
{
  ClueEntry *clue_entry;

  if (index == clues->len)
    return TRUE;

  clue_entry = &(g_array_index (clues, ClueEntry, candidates[index].index));

  for (guint i = 0; i < word_list_get_n_items (clue_entry->word_list); i++)
    {
      const gchar *word;

      g_atomic_int_inc (self->count);

      // Traverse based on random indices
      word = word_list_get_word (clue_entry->word_list,
                                 g_array_index (clue_entry->rand_offset, gushort, i));

      clue_entry->letters = word;

      if (ipuz_charset_builder_remove_text (remaining_letters, word + 1))
        {
          if (!add_or_skip_word (self, word) &&
              acrostic_generator_helper (self, clues, index + 1, remaining_letters, candidates))
            return TRUE;

          clean_up_word (self, word);
          ipuz_charset_builder_add_text (remaining_letters, word + 1);
          clue_entry->letters = NULL;
        }
    }

  clue_entry->letters = NULL;

  return FALSE;
}

    The approach is quite simple. As the code above shows, we call ipuz_charset_remove_text many times, so it was crucial to make that operation efficient.
    When all the characters in the charset have been used up and the index becomes equal to the number of clues, it means we have found a solution. At this point, we return, store the answers in an array, and continue our search for new answers until we receive a stop signal.
    We also maintain a skip list that is updated whenever we find a clue answer and is cleaned up during backtracking. This ensures there are no duplicate answers in the answers list.
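The duplicate check behind add_or_skip_word can be sketched with a simple array of used words. This is a hypothetical simplification; the names add_or_skip and clean_up and the fixed-size array are illustrative, not the generator's actual bookkeeping.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Sketch of duplicate-answer avoidance: a word is skipped if an earlier
 * clue already uses it; otherwise it is recorded. Simplified stand-in
 * for the real add_or_skip_word / clean_up_word. */
#define MAX_USED 64

typedef struct {
  const char *used[MAX_USED];
  size_t      n_used;
} SkipList;

/* Returns true (skip) when word is already used; otherwise records it. */
static bool
add_or_skip (SkipList *list, const char *word)
{
  for (size_t i = 0; i < list->n_used; i++)
    if (strcmp (list->used[i], word) == 0)
      return true;

  if (list->n_used < MAX_USED)
    list->used[list->n_used++] = word;
  return false;
}

/* Called while backtracking: drop the most recently added word. */
static void
clean_up (SkipList *list)
{
  if (list->n_used > 0)
    list->n_used--;
}
```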

    Performance Improvements

    I compared the performance of the acrostic generator using the current Rust charset implementation against the previous C GTree implementation, using the following quote and source strings with the same RNG seed for both:

    QUOTE: "To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment."
    SOURCE: "TBYIWTCTMYSEGA"
    Results:
    +----------------+-------------------+
    | Implementation | Time taken (sec)  |
    +----------------+-------------------+
    | C GTree        | 74.39             |
    | Rust HashMap   | 17.85             |
    +----------------+-------------------+

    The Rust HashMap implementation is nearly 4 times faster than the original C GTree version for the same random seed and traversal order.

    I have also been testing the generator to find small performance improvements. Here are some of them:

    1. When searching for answers, trying clues with longer word lengths first helps find solutions faster.
    2. We switched to nohash_hasher for the hashmap, because we are essentially storing {char: frequency} pairs. Trace reports showed that significant time was spent computing hashes with Rust's default SipHash implementation, which was unnecessary. MR
    3. Inside ipuz_charset_remove_text, instead of cloning the original data, we use a rollback mechanism that tracks all modifications and rolls them back in case of failure. MR
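The rollback idea in the last point can be sketched with a simple letter-frequency table. This is a hypothetical C simplification of the Rust charset, only meant to show the remove-then-roll-back pattern; counts are indexed 'a'..'z'.

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of remove-with-rollback: decrement each letter's count; if a
 * letter is exhausted, restore the counts already decremented and fail.
 * Simplified stand-in for the Rust charset's ipuz_charset_remove_text. */
static bool
charset_remove_text (int counts[26], const char *text)
{
  size_t removed = 0;

  for (const char *p = text; *p != '\0'; p++)
    {
      int idx = *p - 'a';

      if (counts[idx] == 0)
        {
          /* Roll back: re-add the letters removed so far. */
          for (size_t i = 0; i < removed; i++)
            counts[text[i] - 'a']++;
          return false;
        }

      counts[idx]--;
      removed++;
    }

  return true;
}
```

On failure the charset is left exactly as it was, so the caller never needs a defensive clone, which is the point of the optimization.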

    I also remember running the generator on some quote and source input back in the early days. It ran continuously for four hours and still couldn’t find a single solution. We even overflowed the gint counter that tracks the number of words tried. Now, the same generator can return 10 solutions in under 10 seconds. We’ve come a long way! 😀

    Crossword Editor

    Now that I’ve covered the engine, I’ll talk about the UI part.
    We started off by sketching potential designs on paper. @jrb came up with a good design, and we decided to move forward with it, making a few tweaks.

    First, we needed to display a list of the generated answers.

    For this, I implemented my own list model where each item stores a string for the answer and a boolean indicating whether the user wants to apply that answer.
    To allow the user to run and stop the generator, and then apply answers, we reused the compact version of the original autofill component used in normal crosswords. The answer list is updated whenever the slider is moved.

    We tried to reuse as much code as possible for acrostics, keeping most of it common between acrostics and normal crosswords.
    Here’s a quick demo of the acrostic editor in action:

    We also maintain a cute little histogram on the right side of the bottom panel to summarize clue lengths.

    You can also try out the Acrostic Generator using our CLI app, which I originally wrote to quickly test the engine. To use the binary, you’ll need to build Crosswords Editor locally. Example usage:

    $ ./_build/src/acrostic-generator -q "For most of history, Anonymous was a woman. I would venture to guess that Anon, who wrote so many poems without signing them, was often a woman. And it is for this reason that I would implore women to write all the more" -s "Virginia wolf"
    Starting acrostic generator. Press Ctrl+C to cancel.
    [ VASOTOMY ] [ IMFROMMISSOURI ] [ ROMANIANMONETARYUNIT ] [ GREATFEATSOFSTRENGTH ] [ ITHOUGHTWEHADADEAL ] [ NEWSSHOW ] [ INSTITUTION ] [ AWAYWITHWORDS ] [ WOOLSORTERSPNEUMONIA ] [ ONEWOMANSHOWS ] [ LOWMANONTHETOTEMPOLE ] [ FLOWOUT ]
    [ VALOROUSNESS ] [ IMMUNOSUPPRESSOR ] [ RIGHTEOUSINDIGNATION ] [ GATEWAYTOTHEWEST ] [ IWANTYOUTOWANTME ] [ NEWTONSLAWOFMOTION ] [ IMTOOOLDFORTHISSHIT ] [ ANYONEWHOHADAHEART ] [ WOWMOMENT ] [ OMERS ] [ LAWUNTOHIMSELF ] [ FORMATWAR ]

    Plans for the future

    To begin with, we’d really like to improve the overall design of the Acrostic Editor and make it more user friendly. Let us know if you have any design ideas; we’d love to hear your suggestions!
    I’ve also been thinking about different algorithms for generating answers in the Acrostic Generator. One idea is to use a divide-and-conquer approach, where we recursively split the quote until we find a set of sub-quotes that satisfies all the answer constraints.

    To conclude, here’s an acrostic for you all to solve, created using the Acrostic Editor ! You can load the file in Crosswords and start playing.

    Thanks for reading!



      medium.com/@txnmxy3/acrostic-generator-for-gnome-crossword-editor-6f0c41bf4c24


      Jordan Petridis: An update on the X11 GNOME Session Removal

      news.movim.eu / PlanetGnome • 8 June • 2 minutes

    A year and a half ago, shortly after the GNOME 45 release, I opened a pair of Pull Requests to deprecate and remove the X11 Session.

    A lot has happened since. The GNOME 48 release addressed all the remaining blocking issues, mainly accessibility regressions, but it was too late in the development cycle to drop the session as well.

    Now the time has come.

    We went ahead and disabled the X11 session by default, and from now on it needs to be explicitly enabled when building the affected modules (gnome-session, GDM, mutter/gnome-shell). This does not affect XWayland; it’s only about the X11/Xorg session and related functionality. GDM’s ability to launch other X11 sessions will also be preserved.

    Usually we release a single Alpha snapshot, but this time we released earlier snapshots (49.alpha.0), 3 weeks ahead of the normal schedule, to gather as much feedback and testing as possible. (There will be another snapshot alongside the complete GNOME 49 Alpha release.)

    If you are a distributor, please try not to change the default, or at least let us (or me directly) know why you’d still need to ship the X11 session.

    As I mentioned in the tracking issue, there are three possible scenarios.

    The most likely scenario is that all the X11 session code stays disabled by default for 49 with a planned removal for GNOME 50.

    The ideal scenario is that everything is perfect: there are no more issues or bugs, and we can go ahead and drop all the code before GNOME 49.beta.

    And the very unlikely scenario is that we discover some deal-breaking issue, revert the changes and postpone the whole thing.

    Having gathered feedback from our distribution partners, it now depends entirely on how well the early testing will go and what bugs will be uncovered.

    You can test GNOME OS Nightly with all the changes today. We found a couple of minor issues, but everything is fixed in the alpha.0 snapshot. Given how smoothly things are going so far, I believe there is a high likelihood there won’t be any further issues and we might be able to proceed with the ideal scenario.

    TLDR: The X11 session will be disabled by default in GNOME 49 and is scheduled for removal, either during this development cycle or, more likely, during the next one (GNOME 50). Release snapshots of 49.alpha.0 for some modules are already available. Go and try them out!

    Happy Pride month and Free Palestine ✊



      blogs.gnome.org/alatiera/2025/06/08/the-x11-session-removal/