

      Christian Hergert: Dedicated Threads with Futures

      news.movim.eu / PlanetGnome • 5 September

    You often need to integrate with blocking APIs that do not fit well into the async or future-based models of the GNOME ecosystem. In those cases, you may want to use a dedicated thread for blocking calls so that you do not disrupt main loops, timeouts, or fiber scheduling.

    This is ideal when doing things like interacting with libflatpak or even libgit2.

    Creating a Dedicated Thread

    Use the dex_thread_spawn() function to spawn a new thread. When the thread completes, the resulting future will either resolve or reject.

    typedef DexFuture *(*DexThreadFunc) (gpointer user_data);
    
    DexFuture *future = dex_thread_spawn ("[my-thread]", thread_func, thread_data,
                                          (GDestroyNotify)thread_data_free);
    

    Waiting for Future Completion

    Since dedicated threads do not have a Dex.Scheduler on them and are not a fiber, you may not await futures. Awaiting would suspend a fiber stack but there is no such fiber to suspend.

    To make integration easier, you may use dex_thread_wait_for() to wait for a future to complete. The mechanism used in this case is a mutex and condition variable which will be signaled when the dependent future completes.
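
    As a sketch of how these pieces fit together: the ThreadData type and its helper functions below are hypothetical, and dex_thread_wait_for() is assumed here to follow the common pattern of returning a boolean and taking a GError out-parameter.

    static DexFuture *
    thread_func (gpointer user_data)
    {
      ThreadData *thread_data = user_data;  /* hypothetical state type */
      g_autoptr(GError) error = NULL;

      /* Blocking calls are fine here; this dedicated thread has no
       * main loop, timeouts, or fibers to disrupt. */
      if (!thread_data_do_blocking_work (thread_data, &error))
        return dex_future_new_for_error (g_steal_pointer (&error));

      /* Block on a dependent future using the mutex/condition
       * variable mechanism described above. */
      if (!dex_thread_wait_for (thread_data_create_future (thread_data), &error))
        return dex_future_new_for_error (g_steal_pointer (&error));

      return dex_future_new_true ();
    }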


      Allan Day: Foundation Update, 2025-09-05

      news.movim.eu / PlanetGnome • 5 September • 4 minutes

    Steven’s weekly Foundation updates were fantastic and, since I’m filling in the ED role as President, I want to keep them going. I’m still mulling over the frequency and scheduling of the posts, but for now I’ll stick to weekly on Fridays.

    I’m trying to make this an update for the entire organisation, rather than for me personally, so a lot of this update is about others’ activities, rather than my own. Inevitably I can’t report on everything and will miss some items, so apologies to anyone if I don’t manage to cover your contributions. We can always squeeze omissions into a future update, so let me know if I missed anything.

    As many of you know, Steven departed his role as ED last week. Everyone on the board knows that this was a shock to many people, and that it has had an impact on the GNOME community. I think that, for now, there’s not a huge amount that I can say about that topic. However, what I will say is that its significance has not been lost on those of us who are working on the Foundation, and that we are committed to making good on our ambitions for the Foundation and for the GNOME Project more generally. With that said, here’s what we’ve been up to this week.

    Transition

    Transitioning the Foundation leadership has been a major focus for this week, due to Steven’s departure and new officers being appointed. Handing over all the accounts is quite a big task which has involved paperwork and calls, and this work will carry on for a while longer yet.

    I’d like to thank Steven for being exceptionally helpful during this process. Everyone on the Exec Committee is very appreciative.

    I’ve been handing the chair role over to Maria, which involved me dumping a lot of procedural information on her in a fairly uncontrolled manner. Sorry Maria!

    I’ve also started to participate in the management of our paid team members. Thanks to everyone for being accommodating during this process.

    There have also been a lot of messages and conversations with partners. I am trying to get around to everyone, but am still midway through the process. If there’s anyone out there who would like to say hi, please don’t be shy! My door is always open and I’d love to hear from anyone who has a relationship with the GNOME Foundation and would like to connect.

    Budget season

    The GNOME Foundation annual budget runs from October to September, so we are currently busy working on the budget that will come into effect in October this year. Deepa, our new treasurer, did an amazing job and provided the board with a draft budget ahead of time, and we had our first meeting to discuss it this week. There will be more meetings over September and we’ll provide a report on the new budget once it has been finalised.

    Sponsorships

    We got our first corporate sponsor through the new donations platform this week, which feels like a great milestone. Many thanks to Symless for your support! Our corporate supporter packages are a great way for businesses of all sizes to help sustain GNOME and the wider ecosystem, and these sponsorships make a huge difference. If anyone reading this post thinks that their employer might be interested in a corporate sponsorship package, please do encourage them to sign up. And we’re always happy to talk to potential donors.

    Framework

    One particular highlight from this week was a call between GNOME developers and members of the Framework team. This was something that Steven had set in motion (again, thanks Steven!) and was incredibly exciting from my perspective. Partnerships between hardware vendors and upstream open source projects have so much potential, both in terms of delivering better products, but also improving the efficiency of development and testing. I can’t wait to see where this goes.

    Outreachy

    The Exec Committee approved funding for a single Outreachy intern for the December 2025 cohort. This funding is coming from an Endless grant, so thanks to Endless for your support. Outreachy is a fantastic program, so we’re happy we can continue to fund GNOME’s participation.

    Audit

    The GNOME Foundation is due to be audited next calendar year. This is a standard process, but we’ve never been audited before, so it’s going to be a new experience for us. This week I signed the paperwork to secure an auditor, who came highly recommended to us. This was an important step in making sure that the audit happens according to schedule.

    All the usual

    Other Foundation programs continue to run this week. Flathub is being worked on, events are being planned, and we continue to have development activities happening that we’re supporting. There’s already a lot to mention in this post, so I won’t go into specifics right now, but all this work is important and valued. Thanks to everyone who is doing this work. Keep it up!

    Message ends

    That’s it from me. See you next week. o/


      Christian Hergert: Asynchronous IO with Libdex

      news.movim.eu / PlanetGnome • 4 September • 3 minutes

    Previously, previously, and previously.

    The Gio.IOStream APIs already provide robust support for asynchronous IO. The common API allows for different implementations depending on the type of stream.

    Libdex provides wrappers for various APIs. Coverage is not complete but we do expect additional APIs to be covered in future releases.

    File Management

    See dex_file_copy() for copying files.

    See dex_file_delete() for deleting files.

    See dex_file_move() for moving files.

    File Attributes

    See dex_file_query_info(), dex_file_query_file_type(), and dex_file_query_exists() for basic querying.

    You can set file attributes using dex_file_set_attributes().
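
    For example, on a fiber you might await a query like this. This is a minimal sketch which assumes the wrapper mirrors g_file_query_info_async() minus the callback arguments.

    g_autoptr(GFile) file = g_file_new_for_path ("example.txt");
    g_autoptr(GError) error = NULL;
    g_autoptr(GFileInfo) info = NULL;

    /* Awaiting is allowed here because this code runs on a fiber. */
    info = dex_await_object (dex_file_query_info (file,
                                                  G_FILE_ATTRIBUTE_STANDARD_SIZE,
                                                  G_FILE_QUERY_INFO_NONE,
                                                  G_PRIORITY_DEFAULT),
                             &error);
    if (info != NULL)
      g_print ("%" G_GOFFSET_FORMAT " bytes\n", g_file_info_get_size (info));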

    Directories

    You can create a directory or hierarchy of directories using dex_file_make_directory() and dex_file_make_directory_with_parents(), respectively.

    Enumerating Files

    You can create a file enumerator for a directory using dex_file_enumerate_children().

    You can also asynchronously enumerate the files of that directory using dex_file_enumerator_next_files(), which will resolve to a g_autolist(GFileInfo) of infos.
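
    A minimal fiber-based sketch of the enumeration loop follows. The wrapper arguments are assumed to mirror their GIO async counterparts, and how the resulting list is awaited is an assumption as well; here it is treated as a pointer value via dex_await_pointer().

    g_autoptr(GFile) dir = g_file_new_for_path (".");
    g_autoptr(GFileEnumerator) enumerator = NULL;
    g_autoptr(GError) error = NULL;

    enumerator = dex_await_object (dex_file_enumerate_children (dir,
                                                                G_FILE_ATTRIBUTE_STANDARD_NAME,
                                                                G_FILE_QUERY_INFO_NONE,
                                                                G_PRIORITY_DEFAULT),
                                   &error);

    while (enumerator != NULL)
      {
        /* An empty or NULL list signals the end of enumeration. */
        g_autolist(GFileInfo) infos =
          dex_await_pointer (dex_file_enumerator_next_files (enumerator, 100, G_PRIORITY_DEFAULT),
                             &error);

        if (infos == NULL)
          break;

        for (const GList *iter = infos; iter != NULL; iter = iter->next)
          g_print ("%s\n", g_file_info_get_name (iter->data));
      }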

    Reading and Writing Files

    dex_file_read() will provide a Gio.FileInputStream which can be read from.

    A simpler interface to get the bytes of a file is provided via dex_file_load_contents_bytes().

    dex_file_replace() will replace a file on disk, providing a Gio.FileOutputStream to write to. dex_file_replace_contents_bytes() provides a simplified API for this when the content is readily available.
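
    For example, a fiber could round-trip a file’s contents like this. The sketch assumes dex_file_replace_contents_bytes() mirrors g_file_replace_contents_bytes_async() minus the callback arguments.

    static DexFuture *
    roundtrip_fiber (gpointer user_data)
    {
      g_autoptr(GFile) file = g_file_new_for_path ("example.txt");
      g_autoptr(GError) error = NULL;
      g_autoptr(GBytes) bytes = NULL;

      /* Load the whole file as a GBytes. */
      bytes = dex_await_boxed (dex_file_load_contents_bytes (file), &error);
      if (bytes == NULL)
        return dex_future_new_for_error (g_steal_pointer (&error));

      /* Write it back out, replacing the file on disk. */
      if (!dex_await (dex_file_replace_contents_bytes (file, bytes, NULL, FALSE, G_FILE_CREATE_NONE), &error))
        return dex_future_new_for_error (g_steal_pointer (&error));

      return dex_future_new_true ();
    }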

    Reading Streams

    See dex_input_stream_read(), dex_input_stream_read_bytes(), dex_input_stream_skip(), and dex_input_stream_close() for working with input streams asynchronously.

    Writing Streams

    See dex_output_stream_write(), dex_output_stream_write_bytes(), dex_output_stream_splice(), and dex_output_stream_close() for writing to streams asynchronously.

    Sockets

    dex_socket_listener_accept(), dex_socket_client_connect(), and dex_resolver_lookup_by_name() may be helpful when writing socket servers and clients.
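
    As a rough sketch, a client connection awaited on a fiber might look like this, assuming the wrapper mirrors g_socket_client_connect_async() minus the callback arguments.

    g_autoptr(GSocketClient) client = g_socket_client_new ();
    g_autoptr(GSocketConnectable) address = g_network_address_new ("example.org", 80);
    g_autoptr(GSocketConnection) connection = NULL;
    g_autoptr(GError) error = NULL;

    /* Resolves to a GSocketConnection or rejects with error. */
    connection = dex_await_object (dex_socket_client_connect (client, address), &error);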

    D-Bus

    Light integration exists for D-Bus to perform asynchronous method calls.

    See dex_dbus_connection_call(), dex_dbus_connection_call_with_unix_fd_list(), dex_dbus_connection_send_message_with_reply(), and dex_dbus_connection_close().
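
    For example, a method call awaited on a fiber might look like the following. The parameter list is an assumption based on g_dbus_connection_call() minus the callback arguments.

    /* "connection" is a GDBusConnection obtained elsewhere. */
    g_autoptr(GVariant) reply = NULL;
    g_autoptr(GError) error = NULL;

    reply = dex_await_variant (dex_dbus_connection_call (connection,
                                                         "org.freedesktop.DBus",
                                                         "/org/freedesktop/DBus",
                                                         "org.freedesktop.DBus",
                                                         "ListNames",
                                                         NULL,                    /* parameters */
                                                         G_VARIANT_TYPE ("(as)"), /* reply type */
                                                         G_DBUS_CALL_FLAGS_NONE,
                                                         -1                       /* timeout */),
                               &error);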

    We expect additional support for D-Bus to come at a later time.

    Subprocesses

    You can await completion of a subprocess using dex_subprocess_wait_check().
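
    A minimal sketch on a fiber:

    g_autoptr(GError) error = NULL;
    g_autoptr(GSubprocess) subprocess = NULL;

    subprocess = g_subprocess_new (G_SUBPROCESS_FLAGS_NONE, &error, "ls", "-l", NULL);

    /* Resolves once the process exits cleanly, otherwise rejects
     * with error (including non-zero exit statuses). */
    if (subprocess != NULL)
      dex_await (dex_subprocess_wait_check (subprocess), &error);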

    Foundry uses some helpers to do UTF-8 communication but I’d like to bring that to libdex in an improved way post-1.0.

    Asynchronous IO with File Descriptors

    Gio.IOStream and related APIs provide ample opportunity for streams to be used asynchronously. There may be cases where you want similar behavior with traditional file-descriptors.

    Libdex provides a set of AIO-like functions for traditional file-descriptors which may be backed with more efficient mechanisms.

    Gio.IOStream typically dispatches blocking IO operations to a thread pool because that was the fastest method available when the APIs were created. However, on some operating systems, such as Linux, faster methods now exist.

    On Linux, io_uring can be used for asynchronous IO and is provided in the form of a Dex.Future.

    Asynchronous Reads

    DexFuture *dex_aio_read  (DexAioContext *aio_context,
                              int            fd,
                              gpointer       buffer,
                              gsize          count,
                              goffset        offset);
    

    Use the dex_aio_read() function to read from a file-descriptor. The result will be a future that resolves to a gint64 containing the number of bytes read.

    If there was a failure, the future will reject using the appropriate error code.

    Your buffer must stay alive for the duration of the asynchronous read. One easy way to make that happen is to wrap the resulting future in a dex_future_then() which stores the buffer as user_data and releases it when finished.

    If you are doing buffer pooling, more effort may be required.
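
    A sketch of that pattern follows; passing NULL for the DexAioContext is assumed to select the default context.

    static DexFuture *
    free_buffer_cb (DexFuture *future,
                    gpointer   user_data)
    {
      g_autoptr(GError) error = NULL;
      gint64 n_read = dex_await_int64 (dex_ref (future), &error);

      /* user_data is the buffer; the GDestroyNotify passed to
       * dex_future_then() frees it once this block completes. */
      if (n_read >= 0)
        g_print ("read %" G_GINT64_FORMAT " bytes\n", n_read);

      return dex_ref (future);
    }

    DexFuture *
    read_some_bytes (int fd)
    {
      guint8 *buffer = g_malloc (4096);

      /* The buffer stays alive as user_data for the duration of
       * the read and is released when the block finishes. */
      return dex_future_then (dex_aio_read (NULL, fd, buffer, 4096, 0),
                              free_buffer_cb, buffer, g_free);
    }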

    Asynchronous Writes

    DexFuture *dex_aio_write (DexAioContext *aio_context,
                              int            fd,
                              gconstpointer  buffer,
                              gsize          count,
                              goffset        offset);
    

    A similar API to dex_aio_read() exists for writing. It too will resolve to a gint64 containing the number of bytes written.

    buffer must be kept alive for the duration of the call and it is the caller’s responsibility to do so.

    You can find this article in the documentation under Asynchronous IO.


      Hans de Goede: Leaving Red Hat

      news.movim.eu / PlanetGnome • 3 September

    After 17 years I feel that it is time to change things up a bit and take on a new challenge. I'm leaving Red Hat and my last day will be October 31st.

    I would like to thank Red Hat for the opportunity to work on many interesting open-source projects and for all the things I've learned during my time there.

    I want to use this opportunity to thank everyone I've worked with, both my great Red Hat colleagues, as well as everyone from the community for all the good times during the last 17 years.

    I've a pretty good idea of what will come next, but this is not set in stone yet. I will definitely continue to work on open source and on Linux hardware enablement.


      Christian Hergert: Scheduling & Fibers

      news.movim.eu / PlanetGnome • 3 September • 2 minutes

    Previously, and previously.

    Schedulers

    The Dex.Scheduler is responsible for running work items on a thread. This is performed by integrating with the thread’s GMainContext. The main thread of your application will have a Dex.MainScheduler as the assigned scheduler.

    The scheduler manages callbacks such as work items created with Dex.Scheduler.push() . This can include blocks, fibers, and application provided work.

    You can get the default scheduler for the application’s main thread using Dex.Scheduler.get_default(). The current thread’s scheduler can be retrieved with Dex.Scheduler.ref_thread_default().
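
    For example, a plain work item can be queued like this. This is a minimal sketch assuming the DexSchedulerFunc callback simply receives the user data.

    static void
    work_item (gpointer user_data)
    {
      g_print ("running on the main scheduler\n");
    }

    /* Queue the work item on the application's main scheduler. */
    dex_scheduler_push (dex_scheduler_get_default (), work_item, NULL);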

    Thread Pool Scheduling

    Libdex manages a thread pool which may be retrieved using Dex.ThreadPoolScheduler.get_default().

    The thread pool scheduler will manage a number of threads deemed useful based on the number of CPUs available. When io_uring is used, it will also restrict the number of workers to the number of urings available.

    Work items created from outside of the thread pool are placed into a global queue. Thread pool workers will take items from the global queue when they have no more items to process.

    To avoid the “thundering herd” situations often caused by global queues and thread pools, a pollable semaphore is used. On Linux specifically, io_uring and eventfd combined with EFD_SEMAPHORE allow waking up a single worker when a work item is queued.

    All thread pool workers have a local Dex.Scheduler, so the use of timeouts and other GSource features continues to work.

    If you need to interact with long-blocking API calls, it is better to use Dex.thread_spawn() rather than a thread pool thread.

    Thread pool workers use a work-stealing wait-free queue which allows the worker to push work items onto one side of the queue quickly. Doing so also helps improve cacheline effectiveness.

    Fibers

    Fibers are a type of stackful coroutine. A new stack is created and a trampoline is performed onto that stack from the current thread.

    Use Dex.Scheduler.spawn() to create a new fiber.

    When a fiber calls one of the Dex.Future.await() functions, or when it returns, the fiber is suspended and execution returns to the scheduler.

    By default, fibers have a 128 KiB stack with a guard page at the end. Fiber stacks are pooled so that they may be reused during heavy use.

    Fibers are a Dex.Future, which means you can await the completion of a fiber just like any other future.

    Note that fibers are pinned to a scheduler. They will not be migrated between schedulers even when a thread pool is in use.
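
    Putting the pieces together, a minimal fiber might look like this; dex_file_load_contents_bytes() is borrowed from the GIO wrappers covered previously.

    static DexFuture *
    my_fiber (gpointer user_data)
    {
      g_autoptr(GFile) file = g_file_new_for_path ("example.txt");
      g_autoptr(GError) error = NULL;
      g_autoptr(GBytes) bytes = NULL;

      /* Awaiting suspends this fiber; the scheduler resumes it
       * once the future completes. */
      bytes = dex_await_boxed (dex_file_load_contents_bytes (file), &error);
      if (bytes == NULL)
        return dex_future_new_for_error (g_steal_pointer (&error));

      return dex_future_new_true ();
    }

    /* NULL scheduler and 0 stack size select the defaults. */
    DexFuture *future = dex_scheduler_spawn (NULL, 0, my_fiber, NULL, NULL);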

    Fiber Cancellation

    Fibers may be cancelled if the fiber has been discarded by all futures awaiting completion. Fibers will always exit through a natural exit point such as a pending “await”. All attempts to await will reject with error once a fiber has been cancelled.

    If you want to ignore cancellation of fibers, use Dex.Future.disown() on the fiber after creation.

    This article can be found at scheduling in the libdex documentation.


      Christian Hergert: Integrating Libdex and GAsyncResult

      news.movim.eu / PlanetGnome • 2 September • 2 minutes

    Previously.

    Historically, if you wanted to do asynchronous work in GObject-based applications, you would use GAsyncReadyCallback and GAsyncResult.

    There are two ways to integrate with this form of asynchronous API.

    In one direction, you can consume this historical API and provide the result as a DexFuture. In the other direction, you can provide this API in your application or library but implement it behind the scenes with DexFuture.

    Converting GAsyncResult to Futures

    A typical case to integrate, at least initially, is to extract the result of a GAsyncResult and propagate it to a future.

    One way to do that is with a Dex.Promise which will resolve or reject from your async callback.

    static void
    my_callback (GObject      *object,
                 GAsyncResult *result,
                 gpointer      user_data)
    {
      g_autoptr(DexPromise) promise = user_data;
      g_autoptr(GError) error = NULL;
    
      if (thing_finish (THING (object), result, &error))
        dex_promise_resolve_boolean (promise, TRUE);
      else
        dex_promise_reject (promise, g_steal_pointer (&error));
    }
    
    DexFuture *
    my_wrapper (Thing *thing)
    {
      DexPromise *promise = dex_promise_new_cancellable ();
    
      thing_async (thing,
                   dex_promise_get_cancellable (promise),
                   my_callback,
                   dex_ref (promise));
    
      return DEX_FUTURE (promise);
    }
    

    Implementing AsyncResult with Futures

    In some cases you may not want to “leak” into your API that you are using DexFuture. For example, you may want to only expose a traditional GIO API or maybe even clean up legacy code.

    For these cases use Dex.AsyncResult. It is designed to feel familiar to those who have used GTask.

    Dex.AsyncResult.new() allows taking the typical cancellable, callback, and user_data parameters similar to GTask.

    Then call Dex.AsyncResult.await(), providing the future that will resolve or reject with error. Once completed, the user’s callback will be executed on the scheduler that was active at the time of creation.

    From your finish function, call the appropriate propagate API such as Dex.AsyncResult.propagate_int().

    void
    thing_async (Thing               *thing,
                 GCancellable        *cancellable,
                 GAsyncReadyCallback  callback,
                 gpointer             user_data)
    {
      g_autoptr(DexAsyncResult) result = NULL;
    
      result = dex_async_result_new (thing, cancellable, callback, user_data);
      dex_async_result_await (result, dex_future_new_true ());
    }
    
    gboolean
    thing_finish (Thing         *thing,
                  GAsyncResult  *result,
                  GError       **error)
    {
      return dex_async_result_propagate_boolean (DEX_ASYNC_RESULT (result), error);
    }
    

    Safety Notes

    One thing that Libdex handles better than GTask is ensuring that your user_data is destroyed on the proper thread. The design of Dex.Block was done in such a way that both the result and user_data are passed back to the owning thread at the same time. This ensures that your user_data will never be finalized on the wrong thread.

    This article can be found in the documentation at Integrating GAsyncResult.


      Christian Hergert: Libdex 1.0

      news.movim.eu / PlanetGnome • 1 September • 6 minutes

    A couple years ago I spent a great deal of time in the waiting room of an allergy clinic. So much that I finally found the time to write a library that was meant to be a follow-up to the libgtask/libiris libraries I wrote nearly two decades ago. A lot has changed in Linux since then and I felt that maybe this time, I could get it “right”.

    This will be a multi-part series, but today let’s focus on terminology so we have a common language to communicate.

    Futures

    A future is a container that will eventually contain a result or an error.

    Programmers often use the words “future” and “promise” interchangeably. Libdex tries, when possible, to follow the academic nomenclature for futures. That is to say that a future is the interface and promise is a type of future.

    Futures exist in one of three states. The first state is pending. A future exists in this state until it has either rejected or resolved.

    The second state is resolved. A future reaches this state when it has successfully obtained a value.

    The third and final state is rejected. If there was a failure to obtain a value, the future will be in this state and contain a GError representing that failure.

    Promises and More

    A promise is a type of future that allows the creator to set the resolved value or error. This is a common type of future to use when you are integrating with things that are not yet integrated with Libdex.

    Other types of futures also exist.

    /* resolve to "true" */
    DexPromise *good = dex_promise_new ();
    dex_promise_resolve_boolean (good, TRUE);
    
    /* reject with error */
    DexPromise *bad = dex_promise_new ();
    dex_promise_reject (bad,
                        g_error_new (G_IO_ERROR,
                                     G_IO_ERROR_FAILED,
                                     "Failed"));
    

    Static Futures

    Sometimes you already know the result of a future upfront.
    The DexStaticFuture is used in this case.
    Various constructors for DexFuture will help you create one.

    For example, Dex.Future.new_take_object() will create a static future for a GObject-derived instance.

    DexFuture *future = dex_future_new_for_int (123);
    

    Blocks

    One of the most commonly used types of futures in Libdex is a DexBlock.

    A DexBlock is a callback that is run to process the result of a future. The block itself is also a future, meaning that you can chain these blocks together into robust processing groups.

    “Then” Blocks

    The first type of block is a “then” block which is created using Dex.Future.then(). These blocks will only be run if the dependent future resolves with a value. Otherwise, the rejection of the dependent future is propagated to the block.

    static DexFuture *
    further_processing (DexFuture *future,
                        gpointer   user_data)
    {
      const GValue *result = dex_future_get_value (future, NULL);
    
      /* since future is completed at this point, you can also use
       * the simplified "await" API. Otherwise you'd get a rejection
       * for not being on a fiber. (More on that later).
       */
      g_autoptr(GObject) object = dex_await_object (dex_ref (future), NULL);
    
      return dex_ref (future);
    }
    

    “Catch” Blocks

    Since some futures may fail, there is value in being able to “catch” the failure and resolve it.

    Use Dex.Future.catch() to handle the result of a rejected future and resolve or reject it further.

    static DexFuture *
    catch_rejection (DexFuture *future,
                     gpointer   user_data)
    {
      g_autoptr(GError) error = NULL;
    
      dex_future_get_value (future, &error);
    
      if (g_error_matches (error, G_IO_ERROR, G_IO_ERROR_NOT_FOUND))
        return dex_future_new_true ();
    
      return dex_ref (future);
    }
    

    “Finally” Blocks

    There may be times when you want to handle completion of a future whether it resolved or rejected. For this case, use a “finally” block by calling Dex.Future.finally().
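
    A minimal sketch in the style of the blocks above:

    static DexFuture *
    finally_cb (DexFuture *future,
                gpointer   user_data)
    {
      g_autoptr(GError) error = NULL;

      /* Runs whether the dependent future resolved or rejected. */
      if (dex_future_get_value (future, &error) == NULL)
        g_debug ("operation failed: %s", error->message);

      return dex_ref (future);
    }

    future = dex_future_finally (future, finally_cb, NULL, NULL);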

    Infinite Loops

    If you find you have a case where you want a DexBlock to loop indefinitely, you can use the _loop variants of the block APIs.

    See Dex.Future.then_loop(), Dex.Future.catch_loop(), or Dex.Future.finally_loop(). This is generally useful when your block’s callback will begin the next stage of work as the result of the callback.

    Future Sets

    A FutureSet is a type of future that is the composition of multiple futures. This is an extremely useful construct because it allows you to do work concurrently and then process the results in a sort of “reduce” phase.

    For example, you could make requests to a database server and a cache server, race them against a timeout, and process whichever completes first.

    There are multiple types of future sets based on the type of problem you want to solve.

    Dex.Future.all() can be used to resolve when all dependent futures have resolved; otherwise it will reject with error once they are complete.
    If you want to reject as soon as the first item rejects, Dex.Future.all_race() will get you that behavior.

    Other useful Dex.FutureSet constructors include Dex.Future.any() and Dex.Future.first().

    /* Either timeout or propagate result of cache/db query */
    return dex_future_first (dex_timeout_new_seconds (60),
                             dex_future_any (query_db_server (),
                                             query_cache_server (),
                                             NULL),
                             NULL);
    

    Cancellable

    Many programmers who use GTK and GIO are familiar with GCancellable. Libdex has something similar in the form of DexCancellable. However, in the Libdex case, DexCancellable is a future.

    It allows for convenient grouping with other futures to perform cancellation when the Dex.Cancellable.cancel() method is called.

    It can also integrate with GCancellable when created using Dex.Cancellable.new_from_cancellable().

    A DexCancellable will only ever reject.

    DexFuture *future = dex_cancellable_new ();
    dex_cancellable_cancel (DEX_CANCELLABLE (future));
    

    Timeouts

    A timeout may be represented as a future.
    In this case, the timeout will reject after a time period has passed.

    A DexTimeout will only ever reject.

    This future is implemented on top of GMainContext via APIs like g_timeout_add().

    DexFuture *future = dex_timeout_new_seconds (60);
    

    Unix Signals

    Libdex can represent Unix signals as a future. That is to say that the future will resolve to an integer of the signal number when that signal has been raised.

    This is implemented using g_unix_signal_source_new() and comes with all the same restrictions.
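
    For example, assuming dex_unix_signal_new() as the constructor:

    /* Resolves to SIGINT once the signal is raised. */
    DexFuture *future = dex_unix_signal_new (SIGINT);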

    Delayed

    Sometimes you may run into a case where you want to gate the result of a future until a specific moment.

    For this case, DexDelayed allows you to wrap another future and decide when to “uncork” the result.

    DexFuture *delayed = dex_delayed_new (dex_future_new_true ());
    dex_delayed_release (DEX_DELAYED (delayed));
    

    Fibers

    Another type of future is a “fiber”.

    More care will be spent on fibers later on, but suffice it to say that the result of a fiber is easily consumable as a future via DexFiber.

    DexFuture *future = dex_scheduler_spawn (NULL, 0, my_fiber, state, state_free);
    

    Cancellation Propagation

    Futures within your application will inevitably depend on other futures.

    If all of the futures depending on a future have been released, that future will have the opportunity to cancel itself. This allows for cascading cancellation so that unnecessary work may be elided.

    You can use Dex.Future.disown() to ensure that a future will continue to run even if the futures depending on it are released.

    Schedulers

    Much of the processing Libdex performs needs to happen on the main loop of a thread. This is generally handled by a DexScheduler.

    The main thread of an application has the default scheduler, which is a DexMainScheduler.

    Libdex also has a managed thread pool of schedulers via the DexThreadPoolScheduler.

    Schedulers manage short tasks: executing a DexBlock when it is ready, finalizing objects on their owning thread, and running fibers.

    Schedulers integrate with the current thread’s GMainContext via GSource, making it easy to use Libdex with GTK and Clutter-based applications.

    Channels

    DexChannel is a higher-level construct built on futures that allows passing work between producers and consumers. They are akin to Go channels in that they have a read and a write side. However, they are much more focused on integrating well with DexFuture.
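
    A small sketch of the idea follows; the exact signatures (the capacity argument, and send/receive returning futures) are assumptions based on the channel design described here.

    /* 0 is assumed to mean an unbounded capacity. */
    DexChannel *channel = dex_channel_new (0);
    g_autoptr(GError) error = NULL;

    /* The send side is itself a future; disown it if you do not
     * care when the send completes. */
    dex_future_disown (dex_channel_send (channel, dex_future_new_for_int (123)));

    /* Typically awaited on a fiber. */
    int value = dex_await_int (dex_channel_receive (channel), &error);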

    You can find this article in the Libdex documentation under terminology.


      Jussi Pakkanen: We need to seriously think about what to do with C++ modules

      news.movim.eu / PlanetGnome • 31 August • 11 minutes

    Note: Everything that follows is purely my personal opinion as an individual. It should not be seen as any sort of policy of the Meson build system or any other person or organization. It is also not my intention to throw anyone involved in this work under a bus. Many people have worked to the best of their abilities on C++ modules, but that does not mean we can't analyze the current situation with a critical eye.

    The lead on this post is a bit pessimistic, so let's just get it out of the way.

    If C++ modules can not show a 5× compilation time speedup (preferably 10×) on multiple existing open source code bases, modules should be killed and taken out of the standard. Without this speedup, pouring any more resources into modules is just feeding the sunk cost fallacy.

    That seems like a harsh thing to say for such a massive undertaking that promises to make things so much better. It is not something that you can just belt out and then mic drop yourself out. So let's examine the whole thing in unnecessarily deep detail. You might want to grab a cup of $beverage before continuing, this is going to take a while.

    What do we want?

    For the average developer the main visible advantages would be the following, ordered from the most important to the least.

    1. Much faster compilation times.
    If you look at old presentations and posts from back in the day when modules were voted in (approximately 2018-2019), this is the big talking point. This makes perfect sense, as the "header inclusion" way is an O(N²) algorithm and parsing C++ source code is slow. Splitting the code between source and header files is busywork one could do without. The core idea behind modules is that if you can store the "headery" bit in a preprocessed binary format that can be loaded from disk, things become massively faster.

    Then, little by little, build speed seems to fall by the wayside and the focus starts shifting towards "build isolation". This means avoiding bugs caused by things like macro leakage, weird namespace lookup issues and so on. Performance is still kind of there, but the numbers are a lot smaller, spoken aloud much more rarely and often omitted entirely. Now, getting rid of these sorts of bugs is fundamentally a good thing. However it might not be the most efficient use of resources. Compiler developer time is, sadly, a zero sum game so we should focus their skills and effort on things that provide the best results.

    Macro leakage and other related issues are icky but they are on average fairly rare. I have encountered a bug caused by them maybe once or twice a year. They are just not that common for the average developer. Things are probably different for people doing deep low level metaprogramming hackery, but they are a minuscule fraction of the total developer base. On the other hand slow build times are the bane of existence of every single C++ developer every single day. It is, without question, the narrowest bottleneck for developer productivity today and is the main issue modules were designed to solve. They don't seem to be doing that nowadays.

    How did we end up here in the first place?

    C++ modules were a C++ 20 feature. If a feature takes over five years of implementation work to get even somewhat working, you might ponder how it was accepted in the standard in the first place. As I was not there when it happened, I do not really know. However I have spoken to people who were present at the actual meetings where things were discussed and voted on. Their comments have been enlightening to say the least.

    Apparently there were people who knew about the implementation difficulty and other fundamental problems and were quite vocal that modules as specified are borderline unimplementable. They were shot down by a group of "higher up" people saying that "modules are such an important feature that we absolutely must have them in C++ 20".

    One person who was present told me: "that happened seven years ago [there is a fair bit of lead time in ISO standards] and [in practice] we still have nothing. In another seven years, if we are very lucky, we might have something that sort of works".

    The integration task from hell

    What sets modules apart from almost all other features is that they require very tight integration between compilers and build systems. This means coming up with schemes for things like what do module files actually contain, how are they named, how are they organized in big projects, how to best divide work between the different tools. Given that the ISO standard does not even acknowledge the fact that source code might reside in a file, none of this is in its purview. It is not in anybody's purview.

    The end result of all that is that everybody has gone in their own corner, done the bits that are the easiest for them, and hoped for the best. To illustrate how bad things are, I have been in discussions with compiler developers about this. In said discussions various avenues were considered on how to get things actually working, but one compiler developer replied "we do not want to turn the compiler into a build system" to every single proposal, no matter what it was. The experience was not unlike talking to a brick wall. My guess is that the compiler team in question did not have the resources to change their implementation so vetoing everything became the sensible approach for them (though not for the module world in general).

    The last time I looked into adding module support to Meson, things were so mind-bogglingly terrible, that you needed to create, during compilation time, additional compiler flags, store them in temp files and pass them along to compilation commands. I wish I was kidding but I am not. It's quite astounding that the module work started basically from Fortran modules, which are simple and work (in production even), and ended up in their current state, a kafkaesque nightmare of complexity which does not work.

    If we look at the whole thing from a project management viewpoint, the reason for this failure is fairly obvious. This is a big change across multiple isolated organizations. The only real way to get those done is to have a product owner who a) is extremely good at their job b) is tasked with and paid to get the thing done properly c) has sufficient stripes to give orders to the individual teams and d) has no problems slapping people on metaphorical wrists if they try to weasel out of doing their part.

    Such a person does not exist in the modules space. It is arguable whether such a person could exist even in theory. Because of this modules can never become good, which is a reasonable bar to expect a foundational piece of technology to reach.

    The design that went backwards

    If there is one golden rule of software design, it is "Do not do a grand design up front". This is mirrored in the C++ committee's guideline of "standardize existing practice".

    C++ modules may be the grandest up-frontest design the computing world has ever seen. There were no implementations (one might argue there still aren't, but I digress), no test code, no prototypes, nothing. Merely a strong opinion of "we need this and we need it yesterday".

    For the benefit of future generations, one better way to approach the task would have gone something like this. First you implement enough in the compiler to be able to produce one module file and then consume it in a different compilation unit. Keep it as simple as possible. It's fine to only serialize a subset of functionality and error out if someone tries to go outside the lines. Then take a build system that runs that. Then expand that to support a simple project, say, one that has ten source files and produces one executable. Implement features in the module file until you can compile the whole thing. Then measure the output. If you do not see performance increases, stop further development until you either find out why that is or you can fix your code to work better. Now you update the API so that no part of the integration makes people's eyes bleed in horror. Then scale the prototype to handle a project with 100 sources. Measure again. Improve again. Then do a pair of 100-source projects, one that produces a library and one that creates an executable that uses the library. Measure again. Improve again. Then do 1000 sources in 10 subprojects. Repeat.

    If the gains are there, great, now you have base implementation that has been proven to work with real world code and which can be expanded to a full implementation. If the implementation can't be made fast and clean, that is a sign that there is a fundamental design flaw somewhere. Throw your code away and either start from scratch or declare the problem too difficult and work on something else instead.

    Hacking on an existing C++ compiler is really difficult and it takes months of work to even get started. If someone wants to try to work on modules but does not want to dive into compiler development, I have implemented a "module playground", which consists of a fake C++ compiler, a fake build system, and a fake module scanner, all in ~300 lines of Python.

    The promise of import std

    There is a second way of doing modules in an iterative fashion and it is actually being pursued by C++ implementers, namely import std. This is a very good approach in several different ways. First of all, the most difficult part of modules is the way compilations must be ordered. For the standard library this is not an issue, because it has no dependencies and you can generate all of it in one go. The second thing is the fact that most of the slowness of C++ development comes from the standard library. For reference, merely doing an #include<vector> brings in 27 000 lines of code, and that is a fairly small amount compared to many other common headers.

    What sort of an improvement can we expect from this on real world code bases? Implementations are still in flux, so let's estimate using information we have. The way import std is used depends on the compiler but roughly:

    1. Replace all #include statements for standard library headers with import std.
    2. Run the compiler in a special mode.
    3. The compiler parses headers of the standard library and produces some sort of a binary representation of them.
    4. The representation is written to disk.
    5. When compiling normally, add compiler flags that tell the compiler to load the file in question before processing actual source code.

    If you are thinking "wait a minute, if we remove step #1, this is exactly how precompiled headers work", you are correct. Conceptually it is pretty much the same and I have been told (but have not verified myself) that in GCC at least module files are just repurposed precompiled headers with all the same limitations (e.g. you must use all the same compiler flags to use a module file as you did when you created it).

    Barring a major breakthrough in compiler data structure serialization, the expected speedup should be roughly equivalent to the speedup you get from precompiled headers. Which is to say, maybe 10-20% with Visual Studio and a few percentage points on Clang and GCC. OTOH if such a serialization improvement has occurred, it could probably be adapted to be usable in precompiled headers, too. Until someone provides verifiable measurements proving otherwise, we must assume that is the level of achievable improvement.

    For reference, here is a Reddit thread where people report improvements in the 10-20% range.

    But why 5×?

    A reasonable requirement for the speedup would be "better than can be achieved using currently available tools and technologies". As an experiment I wrote a custom standard library (not API compatible with the ISO one on purpose) whose main design goal was to be fast to compile. I then took an existing library, converted that to use the new library and measured. The code compiled four times faster. In addition the binary it produced was smaller and, unexpectedly, ran faster. Details can be found in this blog post.

    Given that 4× is already achievable (though, granted, only tested on one project, not proven in general), 5× seems like a reasonable target.

    But what's in it for you?

    The C++ standard committee has done a lot of great (and highly underappreciated) work to improve the language. On several occasions Herb Sutter has presented new functionality with "all you have to do is to recompile your code with a new compiler and the end result runs faster and is safer". It takes a ton of work to get these kinds of results, and it is exactly where you want to be.

    Modules are not there. In fact they are in the exact opposite corner.

    Using modules brings with it the following disadvantages:

    1. Need to rewrite (possibly refactor) your code.
    2. Loss of portability.
    3. Module binary files (with the exception of MSVC) are not portable, so you need to provide header files for libraries in any case.
    4. The project build setup becomes more complicated.
    5. Any toolchain version except the newest one does not work (at the time of writing, Apple's module support is listed as "partial").
    In exchange for all this you, the regular developer-about-town, get the following advantages:

    1. Nothing.


      Debarshi Ray: Toolbx containers hit by CA certificates breakage

      news.movim.eu / PlanetGnome • 30 August • 2 minutes

    If you are using Toolbx 0.1.2 or newer, then some existing containers have been hit by a bug that breaks certificates from certificate authorities (or CAs) that are used for secure network communication. The bug prevents OpenSSL and other cryptography components from finding any certificates, which means that programs like pacman and DNF that use OpenSSL are unable to download any metadata or packages. For what it’s worth, GnuTLS and, possibly, Network Security Services (or NSS) are unaffected.

    This is a serious breakage because OpenSSL is widely used, and when it breaks the standard mechanisms for shipping updates, it can be difficult for users to find a way forward. So, without going into too many details, here’s how to diagnose if you are in this situation and how to repair your containers.

    Diagnosis

    Among the different operating system distributions that we regularly test Toolbx on, so far it seems that Arch Linux and Fedora containers are affected. However, if you suddenly start experiencing network errors related to CA certificates inside your Toolbx containers, then you might be affected too.

    Within Arch Linux containers, you will see that files like /etc/ca-certificates/extracted/ca-bundle.trust.crt, /etc/ca-certificates/extracted/edk2-cacerts.bin, /etc/ca-certificates/extracted/tls-ca-bundle.pem, etc. have been emptied out and /etc/ca-certificates/extracted/java-cacerts.jks looks suspiciously truncated at 32 bytes.

    Within Fedora containers, you will see the same, but the file paths are slightly different. They are /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt, /etc/pki/ca-trust/extracted/edk2/cacerts.bin, /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem, /etc/pki/ca-trust/extracted/java/cacerts, etc.

    Workaround

    First, you need to disable something called remote p11-kit inside the containers.

    This requires getting rid of /etc/pkcs11/modules/p11-kit-trust.module from the containers:

    ⬢ $ sudo rm /etc/pkcs11/modules/p11-kit-trust.module

    … and the p11-kit-client.so PKCS #11 module.

    On Arch Linux:

    ⬢ $ sudo rm /usr/lib/pkcs11/p11-kit-client.so

    On Fedora 43 onwards, it’s provided by the p11-kit-client RPM, and on older releases it’s the p11-kit-server RPM:

    ⬢ $ sudo dnf remove p11-kit-client
    Package                 Arch   Version                 Repository           Size
    Removing:
     p11-kit-client         x86_64 0.25.5-9.fc43           ab42c14511ba47b   1.2 MiB
    
    Transaction Summary:
     Removing:           1 package
    
    After this operation, 1 MiB will be freed (install 0 B, remove 1 MiB).
    Is this ok [y/N]: y
    Running transaction
    [1/2] Prepare transaction                                                                                                                                                   100% |  66.0   B/s |   1.0   B |  00m00s
    [2/2] Removing p11-kit-client-0:0.25.5-9.fc43.x86_64                                                                                                                        100% |  16.0   B/s |   5.0   B |  00m00s
    >>> Running %triggerpostun scriptlet: systemd-0:257.7-1.fc43.x86_64                                                                                                                                                 
    >>> Finished %triggerpostun scriptlet: systemd-0:257.7-1.fc43.x86_64                                                                                                                                                
    >>> Scriptlet output:                                                                                                                                                                                               
    >>> Failed to connect to system scope bus via local transport: No such file or directory                                                                                                                            
    >>>                                                                                                                                                                                                                 
    >>> Running %triggerpostun scriptlet: systemd-0:257.7-1.fc43.x86_64                                                                                                                                                 
    >>> Finished %triggerpostun scriptlet: systemd-0:257.7-1.fc43.x86_64                                                                                                                                                
    >>> Scriptlet output:                                                                                                                                                                                               
    >>> Failed to connect to system scope bus via local transport: No such file or directory                                                                                                                            
    >>>                                                                                                                                                                                                                 
    Complete!

    Then, you need to restart the container, run update-ca-trust(8), and that’s all:

    ⬢ $
    logout
    $ podman stop <CONTAINER>
    <CONTAINER>
    $ toolbox enter <CONTAINER>
    ⬢ $ sudo update-ca-trust