

      Christian Hergert: Libdex 1.0

      news.movim.eu / PlanetGnome • 1 September • 6 minutes

    A couple of years ago I spent a great deal of time in the waiting room of an allergy clinic. So much so that I finally found the time to write a library meant to be a follow-up to the libgtask and libiris libraries I wrote nearly two decades ago. A lot has changed in Linux since then, and I felt that maybe this time I could get it “right”.

    This will be a multi-part series, but today let’s focus on terminology so we have a common language to communicate.

    Futures

    A future is a container that will eventually contain a result or an error.

    Programmers often use the words “future” and “promise” interchangeably. Libdex tries, when possible, to follow the academic nomenclature for futures. That is to say that a future is the interface and promise is a type of future.

    Futures exist in one of three states. The first state is pending. A future remains in this state until it has either rejected or resolved.

    The second state is resolved. A future reaches this state when it has successfully obtained a value.

    The third and final state is rejected. If there was a failure to obtain a value, the future will be in this state and contain a GError representing that failure.

    Promises and More

    A promise is a type of future that allows the creator to set the resolved value or error. This is a common type of future to use when you are integrating with things that are not yet integrated with Libdex.

    Other types of futures also exist.

    /* resolve to "true" */
    DexPromise *good = dex_promise_new ();
    dex_promise_resolve_boolean (good, TRUE);
    
    /* reject with error */
    DexPromise *bad = dex_promise_new ();
    dex_promise_reject (bad,
                        g_error_new (G_IO_ERROR,
                                     G_IO_ERROR_FAILED,
                                     "Failed"));
    

    Static Futures

    Sometimes you already know the result of a future up front. The DexStaticFuture is used in this case, and various constructors for DexFuture will help you create one.

    For example, Dex.Future.new_take_object() will create a static future for a GObject-derived instance.

    DexFuture *future = dex_future_new_for_int (123);
    

    Blocks

    One of the most commonly used types of futures in Libdex is a DexBlock.

    A DexBlock is a callback that is run to process the result of a future. The block itself is also a future, meaning that you can chain blocks together into robust processing groups.

    “Then” Blocks

    The first type of block is a “then” block, which is created using Dex.Future.then(). These blocks will only be run if the dependent future resolves with a value. Otherwise, the rejection of the dependent future is propagated to the block.

    static DexFuture *
    further_processing (DexFuture *future,
                        gpointer   user_data)
    {
      const GValue *result = dex_future_get_value (future, NULL);
    
      /* since future is completed at this point, you can also use
       * the simplified "await" API. Otherwise you'd get a rejection
       * for not being on a fiber. (More on that later).
       */
      g_autoptr(GObject) object = dex_await_object (dex_ref (future), NULL);
    
      return dex_ref (future);
    }
    

    “Catch” Blocks

    Since some futures may fail, there is value in being able to “catch” the failure and resolve it.

    Use Dex.Future.catch() to handle the result of a rejected future and resolve or reject it further.

    static DexFuture *
    catch_rejection (DexFuture *future,
                     gpointer   user_data)
    {
      g_autoptr(GError) error = NULL;
    
      dex_future_get_value (future, &error);
    
      if (g_error_matches (error, G_IO_ERROR, G_IO_ERROR_NOT_FOUND))
        return dex_future_new_true ();
    
      return dex_ref (future);
    }
    

    “Finally” Blocks

    There may be times when you want to handle completion of a future whether it resolved or rejected. For this case, use a “finally” block by calling Dex.Future.finally().
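
    For example, a minimal sketch of a “finally” handler, where log_completion is a hypothetical callback and the C spelling dex_future_finally() is assumed from the introspected name:

    static DexFuture *
    log_completion (DexFuture *future,
                    gpointer   user_data)
    {
      /* Runs whether the future resolved or rejected */
      g_message ("operation finished");

      return dex_ref (future);
    }

    future = dex_future_finally (future, log_completion, NULL, NULL);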

    Infinite Loops

    If you find you have a case where you want a DexBlock to loop indefinitely, you can use the _loop variants of the block APIs.

    See Dex.Future.then_loop(), Dex.Future.catch_loop(), or Dex.Future.finally_loop(). This is generally useful when your block’s callback will begin the next stage of work as the result of the callback.
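
    For example, a sketch of a “then” loop, where begin_work(), process_next(), and state are hypothetical:

    static DexFuture *
    next_stage (DexFuture *completed,
                gpointer   user_data)
    {
      /* Returning a new future continues the loop with its result */
      return process_next (user_data);
    }

    DexFuture *future = dex_future_then_loop (begin_work (), next_stage, state, NULL);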

    Future Sets

    A DexFutureSet is a type of future that is the composition of multiple futures. This is an extremely useful construct because it allows you to do work concurrently and then process the results in a sort of “reduce” phase.

    For example, you could race a request to a database server, a request to a cache server, and a timeout, and process whichever completes first.

    There are multiple types of future sets based on the type of problem you want to solve.

    Dex.Future.all() can be used to resolve when all dependent futures have resolved; otherwise, it will reject with an error once they have all completed.
    If you want to reject as soon as the first item rejects, Dex.Future.all_race() will get you that behavior.

    Other useful Dex.FutureSet constructors include Dex.Future.any() and Dex.Future.first().

    /* Either timeout or propagate result of cache/db query */
    return dex_future_first (dex_timeout_new_seconds (60),
                             dex_future_any (query_db_server (),
                                             query_cache_server (),
                                             NULL),
                             NULL);
    

    Cancellable

    Many programmers who use GTK and GIO are familiar with GCancellable. Libdex has something similar in the form of DexCancellable. However, in the Libdex case, DexCancellable is a future.

    It allows for convenient grouping with other futures to perform cancellation when the Dex.Cancellable.cancel() method is called.

    It can also integrate with GCancellable when created using Dex.Cancellable.new_from_cancellable().

    A DexCancellable will only ever reject.

    DexFuture *future = dex_cancellable_new ();
    dex_cancellable_cancel (DEX_CANCELLABLE (future));
    

    Timeouts

    A timeout may be represented as a future. In this case, the timeout will reject after a time period has passed.

    A DexTimeout will only ever reject.

    This future is implemented on top of GMainContext via API like g_timeout_add().

    DexFuture *future = dex_timeout_new_seconds (60);
    

    Unix Signals

    Libdex can represent Unix signals as a future. That is to say, the future will resolve to an integer containing the signal number when that signal has been raised.

    This is implemented using g_unix_signal_source_new() and comes with all the same restrictions.
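
    A minimal sketch, assuming the C constructor is dex_unix_signal_new(), following the introspected name:

    /* Resolves to SIGINT once that signal is raised */
    DexFuture *future = dex_unix_signal_new (SIGINT);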

    Delayed

    Sometimes you may run into a case where you want to gate the result of a future until a specific moment.

    For this case, DexDelayed allows you to wrap another future and decide when to “uncork” the result.

    DexFuture *delayed = dex_delayed_new (dex_future_new_true ());
    dex_delayed_release (DEX_DELAYED (delayed));
    

    Fibers

    Another type of future is a “fiber”.

    More care will be spent on fibers later on, but suffice it to say that the result of a fiber is easily consumable as a future via DexFiber.

    DexFuture *future = dex_scheduler_spawn (NULL, 0, my_fiber, state, state_free);
    
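    Here, my_fiber, state, and state_free are placeholders for your own fiber function, its data, and the data’s destroy notify. A minimal fiber might look like this (a sketch; inside a fiber you can await without blocking the thread’s main loop):

    static DexFuture *
    my_fiber (gpointer user_data)
    {
      /* Sleep for one second: the timeout rejects when it fires,
       * so the rejection is deliberately ignored here.
       */
      dex_await (dex_timeout_new_seconds (1), NULL);

      return dex_future_new_true ();
    }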

    Cancellation Propagation

    Futures within your application will inevitably depend on other futures.

    If all of the futures depending on a future have been released, the dependent future will have the opportunity to cancel itself. This allows for cascading cancellation so that unnecessary work may be elided.

    You can use Dex.Future.disown() to ensure that a future will continue to be run even if the dependent futures are released.
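
    For example, where copy_file_async() is a hypothetical function returning a DexFuture:

    /* Keep the copy running even if nobody awaits its result */
    dex_future_disown (copy_file_async (src, dest));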

    Schedulers

    Much of the processing Libdex performs needs to happen on the main loop of a thread. This is generally handled by a DexScheduler.

    The main thread of an application has the default scheduler, which is a DexMainScheduler.

    Libdex also has a managed thread pool of schedulers via the DexThreadPoolScheduler .

    Schedulers manage short tasks: executing DexBlocks when they are ready, finalizing objects on their owning thread, and running fibers.

    Schedulers integrate with the current thread’s GMainContext via a GSource, making it easy to use Libdex with GTK and Clutter-based applications.
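
    A minimal sketch, assuming the accessors dex_scheduler_get_default() and dex_thread_pool_scheduler_get_default(), and reusing the placeholder my_fiber from above:

    /* The scheduler attached to this thread's main context */
    DexScheduler *main_scheduler = dex_scheduler_get_default ();

    /* Spawn a fiber on the managed thread pool instead */
    DexFuture *future =
      dex_scheduler_spawn (dex_thread_pool_scheduler_get_default (),
                           0, my_fiber, NULL, NULL);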

    Channels

    DexChannel is a higher-level construct built on futures that allows passing work between producers and consumers. Channels are akin to Go channels in that they have a read side and a write side. However, they are much more focused on integrating well with DexFuture.
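
    A minimal sketch, assuming the constructors dex_channel_new(), dex_channel_send(), and dex_channel_receive(), following the introspected names:

    DexChannel *channel = dex_channel_new (0);

    /* Producer: send a future whose value a consumer will receive */
    dex_future_disown (dex_channel_send (channel, dex_future_new_for_int (123)));

    /* Consumer: a future that resolves when a value arrives */
    DexFuture *received = dex_channel_receive (channel);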

    You can find this article in the Libdex documentation under terminology.


      Jussi Pakkanen: We need to seriously think about what to do with C++ modules

      news.movim.eu / PlanetGnome • 31 August • 11 minutes

    Note: Everything that follows is purely my personal opinion as an individual. It should not be seen as any sort of policy of the Meson build system or any other person or organization. It is also not my intention to throw anyone involved in this work under a bus. Many people have worked to the best of their abilities on C++ modules, but that does not mean we can't analyze the current situation with a critical eye.

    The lead on this post is a bit pessimistic, so let's just get it out of the way.

    If C++ modules cannot show a 5× compilation time speedup (preferably 10×) on multiple existing open source code bases, modules should be killed and taken out of the standard. Without this speedup, pouring any more resources into modules is just feeding the sunk cost fallacy.

    That seems like a harsh thing to say for such a massive undertaking that promises to make things so much better. It is not something that you can just belt out and then mic drop yourself out. So let's examine the whole thing in unnecessarily deep detail. You might want to grab a cup of $beverage before continuing, this is going to take a while.

    What do we want?

    For the average developer the main visible advantages would be the following, ordered from the most important to the least.

    1. Much faster compilation times.
    If you look at old presentations and posts from back in the day when modules were voted in (approximately 2018-2019), this is the big talking point. This makes perfect sense, as the "header inclusion" way is an O(N²) algorithm and parsing C++ source code is slow. Splitting the code between source and header files is busywork one could do without. The core idea behind modules is that if you can store the "headery" bit in a preprocessed binary format that can be loaded from disk, things become massively faster.

    Then, little by little, build speed seems to fall by the wayside and the focus starts shifting towards "build isolation". This means avoiding bugs caused by things like macro leakage, weird namespace lookup issues and so on. Performance is still kind of there, but the numbers are a lot smaller, spoken aloud much more rarely and often omitted entirely. Now, getting rid of these sorts of bugs is fundamentally a good thing. However it might not be the most efficient use of resources. Compiler developer time is, sadly, a zero sum game so we should focus their skills and effort on things that provide the best results.

    Macro leakage and other related issues are icky but they are on average fairly rare. I have encountered a bug caused by them maybe once or twice a year. They are just not that common for the average developer. Things are probably different for people doing deep low level metaprogramming hackery, but they are a minuscule fraction of the total developer base. On the other hand slow build times are the bane of existence of every single C++ developer every single day. It is, without question, the narrowest bottleneck for developer productivity today and is the main issue modules were designed to solve. They don't seem to be doing that nowadays.

    How did we end up here in the first place?

    C++ modules were a C++ 20 feature. If a feature takes over five years of implementation work to get even somewhat working, you might ponder how it was accepted in the standard in the first place. As I was not there when it happened, I do not really know. However I have spoken to people who were present at the actual meetings where things were discussed and voted on. Their comments have been enlightening to say the least.

    Apparently there were people who knew about the implementation difficulty and other fundamental problems and were quite vocal that modules as specified are borderline unimplementable. They were shot down by a group of "higher up" people saying that "modules are such an important feature that we absolutely must have them in C++ 20".

    One person who was present told me: "that happened seven years ago [there is a fair bit of lead time in ISO standards] and [in practice] we still have nothing. In another seven years, if we are very lucky, we might have something that sort of works".

    The integration task from hell

    What sets modules apart from almost all other features is that they require very tight integration between compilers and build systems. This means coming up with schemes for things like what module files actually contain, how they are named, how they are organized in big projects, and how best to divide work between the different tools. Given that the ISO standard does not even acknowledge the fact that source code might reside in a file, none of this is in its purview. It is not in anybody's purview.

    The end result of all that is that everybody has gone into their own corner, done the bits that are easiest for them, and hoped for the best. To illustrate how bad things are, I have been in discussions with compiler developers about this. In those discussions various avenues were considered for how to get things actually working, but one compiler developer replied "we do not want to turn the compiler into a build system" to every single proposal, no matter what it was. The experience was not unlike talking to a brick wall. My guess is that the compiler team in question did not have the resources to change their implementation, so vetoing everything became the sensible approach for them (though not for the module world in general).

    The last time I looked into adding module support to Meson, things were so mind-bogglingly terrible that you needed to create, during compilation, additional compiler flags, store them in temp files, and pass them along to compilation commands. I wish I were kidding, but I am not. It's quite astounding that the module work started basically from Fortran modules, which are simple and work (in production, even), and ended up in its current state: a Kafkaesque nightmare of complexity which does not work.

    If we look at the whole thing from a project management viewpoint, the reason for this failure is fairly obvious. This is a big change across multiple isolated organizations. The only real way to get those done is to have a product owner who a) is extremely good at their job, b) is tasked with and paid to get the thing done properly, c) has sufficient stripes to give orders to the individual teams, and d) has no problem slapping people on metaphorical wrists if they try to weasel out of doing their part.

    Such a person does not exist in the modules space. It is arguable whether such a person could exist even in theory. Because of this modules can never become good, which is a reasonable bar to expect a foundational piece of technology to reach.

    The design that went backwards

    If there is one golden rule of software design, it is "Do not do a grand design up front". This is mirrored in the C++ committee's guideline of "standardize existing practice".

    C++ modules may be the grandest up-frontest design the computing world has ever seen. There were no implementations (one might argue there still aren't, but I digress), no test code, no prototypes, nothing. Merely a strong opinion of "we need this and we need it yesterday".

    For the benefit of future generations, one better way to approach the task would have gone something like this. First you implement enough in the compiler to be able to produce one module file and then consume it in a different compilation unit. Keep it as simple as possible. It's fine to only serialize a subset of functionality and error out if someone tries to go outside the lines. Then take a build system and make it run that. Then expand it to support a simple project, say one with ten source files that produces one executable. Implement features in the module file until you can compile the whole thing. Then measure the output. If you do not see performance increases, stop further development until you either find out why that is or can fix your code to work better. Now you update the API so that no part of the integration makes people's eyes bleed in horror. Then scale the prototype to handle a project with 100 sources. Measure again. Improve again. Then do a pair of 100-source projects, one that produces a library and one that produces an executable using the library. Measure again. Improve again. Then do 1000 sources in 10 subprojects. Repeat.

    If the gains are there, great, now you have base implementation that has been proven to work with real world code and which can be expanded to a full implementation. If the implementation can't be made fast and clean, that is a sign that there is a fundamental design flaw somewhere. Throw your code away and either start from scratch or declare the problem too difficult and work on something else instead.

    Hacking on an existing C++ compiler is really difficult, and it takes months of work to even get started. If someone wants to try to work on modules but does not want to dive into compiler development, I have implemented a "module playground", which consists of a fake C++ compiler, a fake build system and a fake module scanner, all in ~300 lines of Python.

    The promise of import std

    There is a second way of doing modules in an iterative fashion, and it is actually being pursued by C++ implementers, namely import std. This is a very good approach in several different ways. First of all, the most difficult part of modules is the way compilations must be ordered. For the standard library this is not an issue, because it has no dependencies and you can generate all of it in one go. The second thing is that most of the slowness of C++ development comes from the standard library. For reference, merely doing an #include <vector> brings in 27 000 lines of code, and that is a fairly small amount compared to many other common headers.

    What sort of an improvement can we expect from this on real world code bases? Implementations are still in flux, so let's estimate using information we have. The way import std is used depends on the compiler but roughly:

    1. Replace all #include statements for standard library headers with import std.
    2. Run the compiler in a special mode.
    3. The compiler parses the headers of the standard library and produces some sort of binary representation of them.
    4. The representation is written to disk.
    5. When compiling normally, add compiler flags that tell the compiler to load the file in question before processing the actual source code.

    If you are thinking "wait a minute, if we remove step #1, this is exactly how precompiled headers work", you are correct. Conceptually it is pretty much the same and I have been told (but have not verified myself) that in GCC at least module files are just repurposed precompiled headers with all the same limitations (e.g. you must use all the same compiler flags to use a module file as you did when you created it).

    Barring a major breakthrough in compiler data structure serialization, the expected speedup should be roughly equivalent to the speedup you get from precompiled headers. Which is to say, maybe 10-20% with Visual Studio and a few percentage points on Clang and GCC. OTOH if such a serialization improvement has occurred, it could probably be adapted to be usable in precompiled headers, too. Until someone provides verifiable measurements proving otherwise, we must assume that is the level of achievable improvement.

    For reference, here is a Reddit thread where people report improvements in the 10-20% range.

    But why 5×?

    A reasonable requirement for the speedup would be "better than can be achieved using currently available tools and technologies". As an experiment I wrote a custom standard library (deliberately not API compatible with the ISO one) whose main design goal was to be fast to compile. I then took an existing library, converted it to use the new library, and measured. The code compiled four times faster. In addition, the binary it produced was smaller and, unexpectedly, ran faster. Details can be found in this blog post.

    Given that 4× is already achievable (though, granted, only tested on one project, not proven in general), 5× seems like a reasonable target.

    But what's in it for you?

    The C++ standard committee has done a lot of great (and highly underappreciated) work to improve the language. On several occasions Herb Sutter has presented new functionality with "all you have to do is to recompile your code with a new compiler and the end result runs faster and is safer". It takes a ton of work to get these kinds of results, and it is exactly where you want to be.

    Modules are not there. In fact they are in the exact opposite corner.

    Using modules brings with it the following disadvantages:

    1. Need to rewrite (possibly refactor) your code.
    2. Loss of portability.
    3. Module binary files (with the exception of MSVC) are not portable, so you need to provide header files for libraries in any case.
    4. The project build setup becomes more complicated.
    5. Any toolchain version except the newest does not work (at the time of writing, Apple's module support is listed as "partial").

    In exchange for all this you, the regular developer-about-town, get the following advantages:

    1. Nothing.


      Debarshi Ray: Toolbx containers hit by CA certificates breakage

      news.movim.eu / PlanetGnome • 30 August • 2 minutes

    If you are using Toolbx 0.1.2 or newer, then some existing containers have been hit by a bug that breaks certificates from certificate authorities (or CAs) that are used for secure network communication. The bug prevents OpenSSL and other cryptography components from finding any certificates, which means that programs like pacman and DNF that use OpenSSL are unable to download any metadata or packages. For what it’s worth, GnuTLS and, possibly, Network Security Services (or NSS) are unaffected.

    This is a serious breakage because OpenSSL is widely used, and when it breaks the standard mechanisms for shipping updates, it can be difficult for users to find a way forward. So, without going into too many details, here’s how to diagnose if you are in this situation and how to repair your containers.

    Diagnosis

    Among the different operating system distributions that we regularly test Toolbx on, so far it seems that Arch Linux and Fedora containers are affected. However, if you suddenly start experiencing network errors related to CA certificates inside your Toolbx containers, then you might be affected too.

    Within Arch Linux containers, you will see that files like /etc/ca-certificates/extracted/ca-bundle.trust.crt, /etc/ca-certificates/extracted/edk2-cacerts.bin, /etc/ca-certificates/extracted/tls-ca-bundle.pem, etc. have been emptied out, and /etc/ca-certificates/extracted/java-cacerts.jks looks suspiciously truncated at 32 bytes.

    Within Fedora containers, you will see the same, but the file paths are slightly different. They are /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt, /etc/pki/ca-trust/extracted/edk2/cacerts.bin, /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem, /etc/pki/ca-trust/extracted/java/cacerts, etc.

    Workaround

    First, you need to disable something called remote p11-kit inside the containers.

    This requires getting rid of /etc/pkcs11/modules/p11-kit-trust.module from the containers:

    ⬢ $ sudo rm /etc/pkcs11/modules/p11-kit-trust.module

    … and the p11-kit-client.so PKCS #11 module.

    On Arch Linux:

    ⬢ $ sudo rm /usr/lib/pkcs11/p11-kit-client.so

    On Fedora 43 onwards, it’s provided by the p11-kit-client RPM, and on older releases it’s the p11-kit-server RPM:

    ⬢ $ sudo dnf remove p11-kit-client
    Package                 Arch   Version                 Repository           Size
    Removing:
     p11-kit-client         x86_64 0.25.5-9.fc43           ab42c14511ba47b   1.2 MiB
    
    Transaction Summary:
     Removing:           1 package
    
    After this operation, 1 MiB will be freed (install 0 B, remove 1 MiB).
    Is this ok [y/N]: y
    Running transaction
    [1/2] Prepare transaction                                                                                                                                                   100% |  66.0   B/s |   1.0   B |  00m00s
    [2/2] Removing p11-kit-client-0:0.25.5-9.fc43.x86_64                                                                                                                        100% |  16.0   B/s |   5.0   B |  00m00s
    >>> Running %triggerpostun scriptlet: systemd-0:257.7-1.fc43.x86_64                                                                                                                                                 
    >>> Finished %triggerpostun scriptlet: systemd-0:257.7-1.fc43.x86_64                                                                                                                                                
    >>> Scriptlet output:                                                                                                                                                                                               
    >>> Failed to connect to system scope bus via local transport: No such file or directory                                                                                                                            
    >>>                                                                                                                                                                                                                 
    >>> Running %triggerpostun scriptlet: systemd-0:257.7-1.fc43.x86_64                                                                                                                                                 
    >>> Finished %triggerpostun scriptlet: systemd-0:257.7-1.fc43.x86_64                                                                                                                                                
    >>> Scriptlet output:                                                                                                                                                                                               
    >>> Failed to connect to system scope bus via local transport: No such file or directory                                                                                                                            
    >>>                                                                                                                                                                                                                 
    Complete!

    Then, you need to restart the container, run update-ca-trust(8), and that’s all:

    ⬢ $
    logout
    $ podman stop <CONTAINER>
    <CONTAINER>
    $ toolbox enter <CONTAINER>
    ⬢ $ sudo update-ca-trust


      Gedit Technology blog: Donating to the gedit project

      news.movim.eu / PlanetGnome • 30 August

    A short news item to announce that you can now donate to the gedit text editor project!

    Donations will be used to continue the development and maintenance of the application and its plugins.

    If you like gedit, your support is much appreciated! For years to come, it will be useful for the project.

    Donate using Liberapay

    Thanks!


      Steven Deobald: So short, and thanks for all the flinch

      news.movim.eu / PlanetGnome • 29 August • 5 minutes

    As the board announced earlier today , I will be stepping down from the Executive Director role this week. It’s been an interesting four months. If you haven’t been following my work with the Foundation during that period, you can peruse the weekly Foundation Reports starting from May 3rd. You can also hear me in a few places at GUADEC 2025: the Day 1 Panel Discussion (where I was extremely sick), my Day 3 keynote , and the AGM from Day 3 .

    As Allan mentions in his post, he’s taken over the role of board President. This means he will also be taking over as acting Executive Director when I step down, as per GNOME Foundation bylaws , and picking up where I left off. I’m enthusiastic about this transition. I have enjoyed working closely with Allan over the past four months and, with him at the helm, my only disappointment is that I won’t get to continue working closely with him.

    It’s not only Allan, of course. In a few short months I’ve found it remarkable how hardworking the entire GNOME community is. You folks accomplish so much in such short spans of time, with limited resources. But even more surprising to me was just how warm and welcoming all the contributors are. A lot of folks have built a real home here: not just a professional network, but an international village. Language, cultural, and economic barriers are broken down to make GNOME such an inviting project and I’m honoured to have served GNOME as ED, even if only for a brief period.

    With that in mind, I would like to thank a number of hackers I’ve had the good fortune of speaking to over the past four months.

    First, those folks with non-development roles. Maria, Pietro, and other volunteer conference organizers. The unnamed contractors who balance the Foundation’s books, clear invoices, provide legal advice, consult, and run trainings. Karen, Deb, Loren, Amy, Stefano, Sumana, Michael D, and others who are long-time GNOME supporters and mentors. Andrea, who still contributes an inordinate amount of volunteer time to GNOME infrastructure and the elections process. Emmanuele, Alice, Michael C, Jordan, Chris, Brage, Sri, and others who quietly moderate GNOME’s social spaces to keep them safe. GNOME owes a great deal to all these folks. Their contributions largely go unnoticed. Everything that just magically works? That’s because someone is putting in a lot of time and effort to keep it that way.

    Second, the Design Team. I’ve had so many wonderful interactions with Jakub, Allan, Tobias, Sam H, Jamie, and Kramo that I’ve lost count. I’m in awe of how quickly you folks work and I’m grateful for every opportunity I’ve had to work with you, big or small. Jamie and Kramo, for your help outside the Design Team, in particular: thank you for keeping me honest, calling me out on my shit, and never giving up on me even if I say or do something stupid. 🙂 It means a lot.

    Thank you to GNOME’s partners, Advisory Board reps, and other folks who the community may not see on a regular basis. Jehan, thank you for everything you do for GIMP. Scott, it was a pleasure meeting you at GUADEC and learning more about SUSE! Jef, thank you for your continued advice and guidance. Jeremy and Michael: I really enjoyed our AdBoard day and I hope GNOME gets even more vertically integrated with upstreams like Debian and LibreOffice. Aaron, Mauro, Marco, and Jon: I’m optimistic you folks can bring Ubuntu development and core GNOME development closer together, and I’m grateful for all the time you’ve given me toward that goal. Matt, Alejandro, and Ousama: I genuinely believe the way the Linux desktop wins is by shipping pre-installed to your customers at Framework and Slimbook. Mirko and Tara: as a user, I’m grateful for STF continuing to finance the underpinnings of GNOME’s infrastructure, and I look forward to seeing what else STA can do. Aleix, Joseph, and Paul: KDE is lucky to have you folks and GNOME is lucky to have you as friends. Paul especially, thank you for all your fundraising advice based on your experience with KDE’s campaigns. Robin, Thib, and Jim, thank you for giving us a federated Slack alternative! I feel many people don’t appreciate how relentless you are in improving your product. Mila, Spot, Henri, Hannah, Chad, Rin, Esra’a, and others: thanks for being on our team, even if you work for proprietary software companies. 😉

    And, of course, the text hackers. Thanks to Alice, Jordan, Emmanuele, Martin, Neils, Adrian, Jeff, Alexandre, Georges, Hari Rana, Ignacy, Mazhar, Aaditya, Adrien, swick, Florian, Sophie, Aryan, Matthias C, Matthias K, Pablo, Philip W, Philip C, Dan, Cassidy, Julian, Federico, Carlos, Felix, Adam, Pan, Kolja, and so many others for your conversations not just about GNOME development but the entire process and ecosystem around it. My conversations with you guided my entire time as ED and I hope you continue to provide the board with the same strong feedback you’ve given me. Joseph G and Andy, I’m sorry we never managed to sync up!

    Last but not least, I’d like to thank the previous EDs over the past four years. Holly and Richard both had incredibly difficult situations to deal with. Richard, I’m very lucky to have met you in person and to get a real hug from you in June when I was struggling. I’ve said it before, but GNOME (and the board, in particular) is extremely lucky to have you as a friend and advisor.

    I have undoubtedly forgotten people. Thankfully, you can complain to me directly. 😉 If you like the work I’ve done at the Foundation, you can also speak to me about other roles in the space. I would love to continue working within the wider Free Software ecosystem and especially on the Linux desktop.

    I will be fully off-grid for two weeks mid-September so I can meditate a 10-Day “self course.” (Our proper 10-Day Vipassana course was cancelled due to the ongoing Nova Scotia wildfires.) But I’m back to the real world after that.

    I’ll be around on Matrix and Discourse as a GNOME Foundation Member. I’m on Mastodon. I have another blog . My name was a Googlewhack once. You can also reach me using the standard flastname@gnome.org email format. You can find me. My door is always open.


      Allan Day: Thanks and farewell to Steven Deobald

      news.movim.eu / PlanetGnome • 29 August • 2 minutes

    Steven Deobald has been in the post of GNOME Foundation Executive Director for the past four months, during which time he has made major contributions to both the Foundation and the wider GNOME project. Sadly, Steven will be leaving the Foundation this week. The Foundation Board is extremely grateful to Steven and wishes him the very best for his future endeavors.

    The Executive Director role is extremely diverse and it is hard to list all of Steven’s contributions since he has been in post. However, particular highlights include:

    • energetic engagement with the GNOME community, including weekly updates focused on the Foundation’s support of GNOME development, and attention to topics of importance to our contributors, such as Pride Month and Disability Pride
    • the creation of a new donations platform, which included a new website, a detailed evaluation of payment processors, and a strategy for distributing donations to GNOME development
    • a focus on partner outreach, including attending UN Open Source Week, adding postmarketOS to our Advisory Board, and the creation of a new Advisory Board Matrix channel, alongside many conversations with partner organisations
    • internal policy and documentation work, particularly around spending and finances
    • the addition of new tooling to augment policies and documentation, such as an internal Foundation Handbook and vault.gnome.org
    • assistance with the board, including recruiting a new treasurer and vice-treasurer

    We are extremely grateful to Steven for all this and more. Despite these many positive achievements, Steven and the board have come to the conclusion that Steven is not the right fit for the Executive Director role at this time. We are therefore bidding Steven a fond farewell.

    I know that some members of the GNOME community will be disappointed by this decision. I can assure everyone that it wasn’t one that we took lightly, and that we considered it from different perspectives.

    The good news is that Steven has left the Foundation with a strong platform on which to build, and we have an energetic and engaged board which is already working to fill in the gaps left by his departure. I’m confident that the Foundation can continue on the positive trajectory started by Steven, with a strong community-based executive taking the reins.

    To this end, the board held its regular annual meeting this week, and appointed new directors to key positions. I’ve taken over the president’s role from Rob McQueen, who has now joined Arun Raghavan as one of two Vice-Presidents. The Executive Committee has been expanded with the inclusion of Arun and Maria Majadas (who is our new Chair). We have also bolstered the Finance Committee, and are looking to create new groups for fundraising and communications.

    Steven has been very helpful in working on a smooth transition, and our staff are continuing to work as normal, so Foundation operations won’t be affected by these management changes. In the near future we’ll be pushing forward with the fundraising plans that Steven has set out, and are hopeful about being able to provide more financial support for the GNOME project as a result. If you want to help us with that, please get in touch.

    Should you have any questions or concerns, please feel free to reach out to president@gnome.org .

    On behalf of the GNOME Foundation Board of Directors,

    – Allan


      Michael Meeks: 2025-08-28 Thursday

      news.movim.eu / PlanetGnome • 28 August

    • Up early, mail chew, tech planning call, sync with Naomi & Anna, more admin.

      Ravgeet Dhillon: How to Implement AI in Your Small Business - A Complete 2025 Guide

      news.movim.eu / PlanetGnome • 28 August • 5 minutes

    Artificial intelligence isn’t just for tech giants anymore. In 2025, small businesses across every industry are leveraging AI to streamline operations, enhance customer experiences, and boost profitability. If you’re wondering how to implement AI in your small business, this comprehensive guide will walk you through every step of the process.

    Why Small Business AI Implementation Matters in 2025

    The AI revolution has reached a tipping point. According to recent studies, 73% of small businesses that have implemented AI report increased efficiency, while 67% see improved customer satisfaction within the first six months. More importantly, businesses using AI solutions show an average 15% increase in revenue growth compared to their non-AI counterparts.

    The competitive advantage is clear: Early AI adopters in the small business space are capturing market share while their competitors struggle with manual processes and outdated systems.

    Step 1: Identify AI Opportunities in Your Business

    Before diving into artificial intelligence consulting or purchasing AI tools, you need to identify where AI can make the biggest impact in your operations.

    Common AI Implementation Areas for Small Businesses:

    Customer Service & Support

    • Chatbots for 24/7 customer inquiries
    • Automated ticket routing and prioritization
    • Sentiment analysis for customer feedback

    Sales & Marketing

    • Lead scoring and qualification
    • Personalized email marketing campaigns
    • Predictive analytics for customer behavior

    Operations & Inventory

    • Demand forecasting and inventory optimization
    • Automated scheduling and resource allocation
    • Quality control through computer vision

    Financial Management

    • Automated bookkeeping and expense categorization
    • Fraud detection and risk assessment
    • Cash flow prediction and budgeting

    Action Item: Conduct an AI Readiness Assessment

    Create a simple spreadsheet listing your current business processes. For each process, ask:

    1. Is this task repetitive and rule-based?
    2. Do we have sufficient data to train an AI system?
    3. Would automation here save significant time or money?
    4. Could AI improve accuracy or customer experience?

    Processes scoring high on these criteria are prime candidates for AI implementation.

    Step 2: Choose the Right AI Solutions for Your Budget

    Not all AI solutions require massive investments. In 2025, small business AI options range from affordable SaaS tools to custom implementations.

    Budget-Friendly AI Tools ($50-500/month):

    Customer Relationship Management

    • HubSpot’s AI-powered lead scoring
    • Salesforce Einstein for small teams
    • Pipedrive’s sales automation features

    Marketing Automation

    • Mailchimp’s predictive analytics
    • Hootsuite’s AI content scheduling
    • Canva’s AI design assistant

    Business Operations

    • QuickBooks’ AI bookkeeping features
    • Calendly’s smart scheduling
    • Zapier’s workflow automation

    Mid-Range Solutions ($500-2,000/month):

    • Custom chatbot development
    • Advanced analytics dashboards
    • Inventory management AI systems
    • Employee scheduling optimization

    Enterprise-Level Implementations ($2,000+/month):

    • Custom machine learning models
    • Computer vision quality control systems
    • Predictive maintenance solutions
    • Advanced customer analytics platforms

    Step 3: Calculate Your AI Implementation ROI

    Smart small business owners measure twice and cut once. Before investing in AI, calculate your expected return on investment.

    ROI Calculation Framework:

    Time Savings Value

    • Hours saved per week × hourly wage × 52 weeks = Annual time savings value
    • Example: 10 hours/week × $25/hour × 52 = $13,000 annual savings

    Revenue Enhancement

    • Improved conversion rates × average customer value × annual customers
    • Example: 5% conversion improvement × $500 average sale × 1,000 customers = $25,000

    Cost Reductions

    • Reduced errors, fewer returns, lower customer service costs
    • Example: 20% reduction in customer service calls × $50 per call × 2,000 calls = $20,000

    Total Annual Benefit: $58,000
    Annual AI Investment: $12,000
    ROI: ($58,000 − $12,000) ÷ $12,000 × 100 ≈ 383%

    This framework helps justify AI investments to stakeholders and ensures you’re making data-driven decisions.

    Step 4: Implementation Best Practices

    Successful AI implementation follows proven methodologies. Here’s how to ensure your small business AI project succeeds:

    Phase 1: Planning and Preparation (Weeks 1-2)

    • Define clear objectives and success metrics
    • Assess data quality and availability
    • Identify key stakeholders and champions
    • Create an implementation timeline

    Phase 2: Pilot Testing (Weeks 3-6)

    • Start with a small, low-risk use case
    • Test with a limited user group
    • Gather feedback and iterate quickly
    • Document lessons learned

    Phase 3: Full Deployment (Weeks 7-10)

    • Roll out to all relevant users
    • Provide comprehensive training
    • Monitor performance metrics closely
    • Establish ongoing support processes

    Phase 4: Optimization and Scaling (Weeks 11+)

    • Analyze performance data
    • Fine-tune algorithms and processes
    • Identify additional AI opportunities
    • Plan next implementation phase

    Real-World Success Stories

    Case Study 1: Local Restaurant Chain
    Challenge: Managing inventory across 5 locations
    AI Solution: Predictive analytics for demand forecasting
    Results: 30% reduction in food waste, 15% increase in profit margins

    Case Study 2: Manufacturing Startup
    Challenge: Quality control consistency
    AI Solution: Computer vision inspection system
    Results: 95% defect detection accuracy, 50% reduction in manual inspection time

    Case Study 3: E-commerce Retailer
    Challenge: Customer service scalability
    AI Solution: Intelligent chatbot with human handoff
    Results: 80% of inquiries resolved automatically, 24/7 customer support coverage

    Common Implementation Pitfalls to Avoid

    Learning from others’ mistakes can save you time and money:

    1. Starting Too Big : Begin with simple, high-impact use cases
    2. Ignoring Data Quality : Clean, organized data is essential for AI success
    3. Skipping User Training : Even the best AI tools fail without proper adoption
    4. Unrealistic Expectations : AI is powerful but not magic—set realistic timelines
    5. Going It Alone : Partner with experienced AI consultants for complex implementations

    Getting Started: Your Next Steps

    Ready to implement AI in your small business? Here’s your action plan:

    Week 1:

    • Complete the AI readiness assessment
    • Research 3-5 relevant AI tools for your industry
    • Calculate potential ROI for your top opportunities

    Week 2:

    • Schedule demos with AI solution providers
    • Consult with artificial intelligence consulting experts
    • Create an implementation budget and timeline

    Week 3:

    • Select your first AI implementation
    • Begin pilot testing with a small team
    • Document your process for future scaling

    Why Partner with AI Consulting Experts

    While many AI tools are user-friendly, working with experienced artificial intelligence consulting professionals can accelerate your success and avoid costly mistakes. Professional AI consultants bring:

    • Industry-specific expertise and best practices
    • Technical knowledge for complex integrations
    • Objective vendor evaluation and selection
    • Ongoing support and optimization services
    • Strategic guidance for long-term AI adoption

    The investment in professional AI consulting often pays for itself through faster implementation, better tool selection, and avoiding common pitfalls.

    Conclusion: The Future is AI-Powered

    Small business AI implementation is no longer a luxury—it’s a necessity for staying competitive in 2025 and beyond. By following this step-by-step guide, calculating your ROI, and learning from real-world examples, you can successfully integrate AI into your operations and unlock significant growth opportunities.

    Remember, the goal isn’t to implement AI for its own sake, but to solve real business problems and create measurable value. Start small, measure results, and scale what works.

    Ready to explore how AI can transform your small business? The journey begins with a single step—and the competitive advantages await those bold enough to take it.


    Need help implementing AI in your small business? RavSam Web Solutions specializes in artificial intelligence consulting for small and medium businesses. Contact us for a free AI readiness assessment and discover how we can help you leverage AI for growth.

    Related Resources:

    • Small Business AI Tools Comparison Chart
    • AI Implementation Checklist Template
    • ROI Calculator Spreadsheet

      Victor Ma: When is an optimization not optimal?

      news.movim.eu / PlanetGnome • 28 August • 4 minutes

    In the past few weeks, I’ve been working on the MR for my lookahead algorithm. And now, it’s finally merged! There’s still some more work to be done—particularly around the testing code—but the core change is finished and live.

    Suboptimal code?

    While working on my MR, I noticed that one of my functions, word_set_remove_unique(), could be optimized. Here is the function:

    void
    word_set_remove_unique (WordSet *word_set1, WordSet *word_set2)
    {
      g_hash_table_foreach_steal (word_set1, word_not_in_set, word_set2);
    }

    /* Helper function */
    static gboolean
    word_not_in_set (gpointer key,
                     gpointer value,
                     gpointer user_data)
    {
      WordIndex *word_index = (WordIndex *) key;
      WordSet *word_set = (WordSet *) user_data;

      return !g_hash_table_contains (word_set, word_index);
    }
    

    word_set_remove_unique () takes two sets and removes any elements in the first set that don’t exist in the second set. So essentially, it performs a set intersection, in-place, on the first set.

    My lookahead function calls word_set_remove_unique () multiple times, in a loop. My lookahead function always passes in the same persisted word set— clue_matches_set —as the first set. The second set changes with each loop iteration. So essentially, my lookahead function uses word_set_remove_unique () to gradually refine clue_matches_set .

    Now, importantly, clue_matches_set is sometimes larger than the second word set—potentially several orders of magnitude larger. And because of that, I realized that there was an optimization I could make.

    An obvious optimization

    See, g_hash_table_foreach_steal () runs the given boolean function on each element of the given hash table, and it removes the element if the function returns TRUE . And my code always passes in word_set1 as the hash table (with word_set2 being passed in as user_data ). This means that the boolean function is always run on the elements of word_set1 ( clue_matches_set ).

    But word_set1 is sometimes smaller than word_set2 . This means that g_hash_table_foreach_steal () sometimes has to perform this sort of calculation:

    WordSet *word_set1; /* Contains 1000 elements. */
    WordSet *word_set2; /* Contains 10 elements. */

    for (word : word_set1)
      {
        if (!g_hash_table_contains (word_set2, word))
          g_hash_table_remove (word_set1, word);
      }
    

    This is clearly inefficient. The point of word_set_remove_unique () is to calculate the intersection of two sets. It’s only an implementation detail that it does this by removing the elements from the first set.

    The function could also work by removing the unique elements from the second set. And in the case where word_set1 is larger than word_set2 , that would make more sense. It could be the difference between calling g_hash_table_contains () (and potentially g_hash_table_remove () ) 10 times and calling it 1000 times.

    So, I thought, I can optimize word_set_remove_unique () by reimplementing it like this:

    1. Figure out which word set is larger.
    2. Run g_hash_table_foreach_steal () on the larger word set.
    3. If the second set was the larger set, then swap the pointers.

    /* Returns whether or not the pointers were swapped. */
    gboolean
    word_set_remove_unique (WordSet **word_set1_pp, WordSet **word_set2_pp)
    {
      WordSet *word_set1_p = *word_set1_pp;
      WordSet *word_set2_p = *word_set2_pp;

      if (g_hash_table_size (word_set1_p) <= g_hash_table_size (word_set2_p))
        {
          g_hash_table_foreach_steal (word_set1_p, word_not_in_set, word_set2_p);
          return FALSE;
        }
      else
        {
          g_hash_table_foreach_steal (word_set2_p, word_not_in_set, word_set1_p);
          *word_set1_pp = word_set2_p;
          *word_set2_pp = word_set1_p;
          return TRUE;
        }
    }
    

    Not so obvious?

    Well, I made the change, and…it’s actually slower.

    We have some profiling code that measures how long each frame takes to render. The output looks like this:

    update_all():
     Average time (13.621 ms)
     Longest time (45.903 ms)
     Total iterations (76)
     Total number of iterations longer than one frame (27)
     Total time spent in this function (1035.203f ms)
    
    word_list_find_intersection():
     Average time (0.774 ms)
     Longest time (4.651 ms)
     Total iterations (388)
     Total time spent in this function (300.163f ms)
    

    And when I compared the results of the optimized and unoptimized word_set_remove_unique () , it turned out that the “optimized” version performed either worse or about the same. That just goes to show the value of profiling!

    More testing needed

    So…that was going to be the blog post. An optimization that turned out to not be so optimal. But in writing this post, I reimplemented the optimization—I lost the original code, because I never committed it—and that is the code that you see. But when I tested this code, it seems like it is performing better than the unoptimized version. So maybe I messed up the implementation last time? In any case, more testing is needed!