
      Hubert Figuière: Dev Log August 2025

      news.movim.eu / PlanetGnome • 7 September • 1 minute

    Some of the stuff I did in August.

    AbiWord

    More memory leak fixes.

    gudev-rs

    Updated gudev-rs to the latest glib-rs, as a requirement to port any code using it to the latest glib-rs.

    libopenraw

    A minor fix so that it can be used to thumbnail JPEG files by extracting the preview.

    Released alpha.12.

    Converted the x-trans interpolation to use floats. Also removed a few unnecessary unsafe blocks.

    Niepce

    A lot of work on the importer. I finally finished that UI bit I had in progress for a while, and all the fallout that came with it. It is in the develop branch, which means it will be merged to main. This includes some UI layout changes to the dialog.

    Then I fixed the camera importer, which assumed everyone followed the DCIM specification (narrator: they didn't). This meant it was broken on the iPhone 14 and on the Fujifilm X-T3, which has two card slots (really, use a card reader if the camera uses memory cards). I also sped it up, though it's still really slow.

    I also improved handling of the asynchronous tasks that run on a thread, like thumbnailing or listing the camera import content. I'm almost ready to move on.

    Tore out the code using gdk-pixbuf, for several reasons: it's incompatible with multiple threads, and GDK textures were already being created from raw buffers. This simplifies a lot of things.


      Jussi Pakkanen: Trying out import std

      news.movim.eu / PlanetGnome • 6 September • 4 minutes

    Since C++ compilers are starting to support import std, I ran a few experiments to see what the status of that is. GCC 15 on the latest Ubuntu was used for all of the following.

    The goal

    One of the main goals of a working module implementation is to be able to support the following workflow:

    • Suppose we have an executable E
    • It uses a library L
    • L and E are made by different people and have different Git repositories and all that
    • We want to take the unaltered source code of L, put it inside E and build the whole thing (in Meson parlance this is known as a subproject)
    • Build files do not need to be edited, i.e. the source of L is immutable
    • Make the build as fast as reasonably possible

    The simple start

    We'll start with a helloworld example.

    This requires two compiler invocations.

    g++-15 -std=c++26 -c -fmodules -fmodule-only -fsearch-include-path bits/std.cc

    g++-15 -std=c++26 -fmodules standalone.cpp -o standalone

    The first invocation compiles the std module and the second one uses it. There is already some wonkiness here. For example, the documentation for -fmodule-only says that it only produces the module output, not an object file. However, it also tries to link the result into an executable, so you have to give it the -c argument to tell it to only create the object file, which the other flag then tells it not to create.

    Building the std module takes 3 seconds and the program itself takes 0.65 seconds. Compiling without modules takes about a second, but only 0.2 seconds if you use iostream instead of println .

    The module file itself goes to a directory called gcm.cache in the current working dir:

    [screenshot: contents of the gcm.cache directory]

    All in all this is fairly painless so far.

    So ... ship it?

    Not so fast. Let's see what happens if you build the module with a different standards version than the consuming executable.
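
    For example, compiling the module and the program with different -std values (the exact invocations below are illustrative, simply varying the flags used earlier):

    g++-15 -std=c++23 -c -fmodules -fmodule-only -fsearch-include-path bits/std.cc

    g++-15 -std=c++26 -fmodules standalone.cpp -o standalone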

    It detects the mismatch and errors out. Which is good, but also raises questions. For example, what happens if you build the module without definitions but the consuming app with -DNDEBUG? In my testing it worked, but is it just a case of getting lucky with the UB slot machine? I don't know. What should happen? I don't know that either. Unfortunately there is an even bigger issue lurking about.

    Clash of File Name Clans

    If you are compiling with Ninja (and you should), all compiler invocations are made from the same directory (the build tree root). GCC also does not seem to provide a compiler flag to change the location of the gcm.cache directory. Thus if you have two targets that both use import std, their compiled modules get the same output file name. They would clobber each other, so Ninja will refuse to build them (Make probably ignores this, so the files end up clobbering each other, which, if you are very lucky, only causes a build error).

    Assuming that you can detect this and deduplicate building the std module, the end result still has a major limitation. You can only ever have one standard library module across all of your build targets. Personally I would be all for forcing this over the entire build tree, but sadly it is a limitation that can't really be imposed on existing projects. Sadly I know this from experience. People are doing weird things out there and they want to keep on weirding on. Sometimes even for valid technical reasons.

    Even if this issue were fixed, it does not really help. As you could probably tell, this clashing will happen for all modules. So if your project ever has two modules called utils, no matter where they are or who wrote them, they will both try to write gcm.cache/utils.gcm and either fail to build, fail on import, or invoke UB.

    Having the build system work around this by changing the working directory to implicitly make the cache directory go elsewhere (and repoint all paths at the same time) is not an option . All process invocations must be doable from the top level directory. This is the hill I will die on if I must!

    Instead what is needed is something like the target private directory proposal I made ages ago. With that you'd end up with command line arguments roughly like this:

    g++-15 <other args> --target-private-dir=path/to/foo.priv --project-private-dir=toplevel.priv

    The build system would guarantee that all compilations for a single target (library, exe, etc) have the same target private directory and all compilations in the build tree get the same top level private directory. This allows the compiler to do some build optimizations behind the scenes. For example if it needs to build a std module, it could copy it in the top level private directory and other targets could copy it from there instead of building it from scratch (assuming it is compatible and all that).


      Christian Hergert: Dedicated Threads with Futures

      news.movim.eu / PlanetGnome • 5 September

    There are often needs to integrate with blocking APIs that do not fit well into the async or future-based models of the GNOME ecosystem. In those cases, you may want to use a dedicated thread for blocking calls so that you do not disrupt main loops, timeouts, or fiber scheduling.

    This is ideal when doing things like interacting with libflatpak or even libgit2 .

    Creating a Dedicated Thread

    Use the dex_thread_spawn() function to spawn a new thread. When the thread completes the resulting future will either resolve or reject.

    typedef DexFuture *(*DexThreadFunc) (gpointer user_data);
    
    DexFuture *future = dex_thread_spawn ("[my-thread]", thread_func, thread_data,
                                          (GDestroyNotify)thread_data_free);
    

    Waiting for Future Completion

    Since dedicated threads do not have a Dex.Scheduler on them and are not fibers, you may not await futures. Awaiting would suspend a fiber stack, but there is no fiber to suspend.

    To make integration easier, you may use dex_thread_wait_for() to wait for a future to complete. The mechanism used in this case is a mutex and condition variable which will be signaled when the dependent future completes.
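
    As a minimal sketch, a thread function can perform the blocking call and return a resolved or rejected future. Here do_blocking_sync_call, MyData, my_data, and my_data_free are hypothetical stand-ins for your own blocking API and state:

    static DexFuture *
    my_thread_func (gpointer user_data)
    {
      MyData *data = user_data;
      g_autoptr(GError) error = NULL;
    
      /* Blocking is fine here: we are on a dedicated thread, not a fiber
       * or a main loop. */
      if (!do_blocking_sync_call (data, &error))
        return dex_future_new_for_error (g_steal_pointer (&error));
    
      return dex_future_new_true ();
    }
    
    DexFuture *future = dex_thread_spawn ("[my-thread]", my_thread_func, my_data,
                                          (GDestroyNotify) my_data_free);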


      Allan Day: Foundation Update, 2025-09-05

      news.movim.eu / PlanetGnome • 5 September • 4 minutes

    Steven’s weekly Foundation updates were fantastic and, since I’m filling in the ED role as President, I want to keep them going. I’m still mulling over the frequency and scheduling of the posts, but for now I’ll stick to weekly on Fridays.

    I’m trying to make this an update for the entire organisation, rather than for me personally, so a lot of this update is about others’ activities, rather than my own. Inevitably I can’t report on everything and will miss some items, so apologies to anyone if I don’t manage to cover your contributions. We can always squeeze omissions into a future update, so let me know if I missed anything.

    As many of you know, Steven departed his role as ED last week . Everyone on the board knows that this was a shock to many people, and that it has had an impact on the GNOME community. I think that, for now, there’s not a huge amount that I can say about that topic. However, what I will say is that its significance has not been lost on those of us who are working on the Foundation, and that we are committed to making good on our ambitions for the Foundation and for the GNOME Project more generally. With that said, here’s what we’ve been up to this week.

    Transition

    Transitioning the Foundation leadership has been a major focus for this week, due to Steven’s departure and new officers being appointed. Handing over all the accounts is quite a big task which has involved paperwork and calls, and this work will carry on for a while longer yet.

    I’d like to thank Steven for being exceptionally helpful during this process. Everyone on the Exec Committee is very appreciative.

    I’ve been handing the chair role over to Maria, which involved me dumping a lot of procedural information on her in a fairly uncontrolled manner. Sorry Maria!

    I’ve also started to participate in the management of our paid team members. Thanks to everyone for being accommodating during this process.

    There have also been a lot of messages and conversations with partners. I am trying to get around to everyone, but am still midway through the process. If there’s anyone out there who would like to say hi, please don’t be shy! My door is always open and I’d love to hear from anyone who has a relationship with the GNOME Foundation and would like to connect.

    Budget season

    The GNOME Foundation annual budget runs from October to September, so we are currently busy working on the budget that will come into effect in October this year. Deepa, our new treasurer, did an amazing job and provided the board with a draft budget ahead of time, and we had our first meeting to discuss it this week. There will be more meetings over September and we’ll provide a report on the new budget once it has been finalised.

    Sponsorships

    We got our first corporate sponsor through the new donations platform this week, which feels like a great milestone. Many thanks to Symless for your support! Our corporate supporter packages are a great way for businesses of all sizes to help sustain GNOME and the wider ecosystem, and these sponsorships make a huge difference. If anyone reading this post thinks that their employer might be interested in a corporate sponsorship package, please do encourage them to sign up. And we’re always happy to talk to potential donors.

    Framework

    One particular highlight from this week was a call between GNOME developers and members of the Framework team. This was something that Steven had set in motion (again, thanks Steven!) and was incredibly exciting from my perspective. Partnerships between hardware vendors and upstream open source projects have so much potential, both in terms of delivering better products, but also improving the efficiency of development and testing. I can’t wait to see where this goes.

    Outreachy

    The Exec Committee approved funding for a single Outreachy intern for the December 2025 cohort. This funding is coming from an Endless grant , so thanks to Endless for your support. Outreachy is a fantastic program so we’re happy we can continue to fund GNOME’s participation.

    Audit

    The GNOME Foundation is due to be audited next calendar year. This is a standard process, but we’ve never been audited before, so it’s going to be a new experience for us. This week I signed the paperwork to secure an auditor, who came highly recommended to us. This was an important step in making sure that the audit happens according to schedule.

    All the usual

    Other Foundation programs continue to run this week. Flathub is being worked on, events are being planned, and we continue to have development activities happening that we’re supporting. There’s already a lot to mention in this post, so I won’t go into specifics right now, but all this work is important and valued. Thanks to everyone who is doing this work. Keep it up!

    Message ends

    That’s it from me. See you next week. o/


      Christian Hergert: Asynchronous IO with Libdex

      news.movim.eu / PlanetGnome • 4 September • 3 minutes

    Previously, previously, and previously.

    The Gio.IOStream APIs already provide robust support for asynchronous IO. The common API allows for different implementations depending on the stream type.

    Libdex provides wrappers for various APIs. Coverage is not complete but we do expect additional APIs to be covered in future releases.

    File Management

    See dex_file_copy() for copying files.

    See dex_file_delete() for deleting files.

    See dex_file_move() for moving files.

    File Attributes

    See dex_file_query_info(), dex_file_query_file_type(), and dex_file_query_exists() for basic querying.

    You can set file attributes using dex_file_set_attributes() .

    Directories

    You can create a directory or hierarchy of directories using dex_file_make_directory() and dex_file_make_directory_with_parents() respectively.

    Enumerating Files

    You can create a file enumerator for a directory using dex_file_enumerate_children() .

    You can also asynchronously enumerate the files of that directory using dex_file_enumerator_next_files() which will resolve to a g_autolist(GFileInfo) of infos.

    Reading and Writing Files

    The dex_file_read() will provide a Gio.FileInputStream which can be read from.

    A simpler interface to get the bytes of a file is provided via dex_file_load_contents_bytes() .

    The dex_file_replace() will replace a file on disk providing a Gio.FileOutputStream to write to. The dex_file_replace_contents_bytes() provides a simplified API for this when the content is readily available.
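
    As a minimal sketch on a fiber, assuming dex_file_load_contents_bytes() takes only the GFile and that dex_await_boxed() is available for boxed results (file here is an already-created GFile):

    g_autoptr(GError) error = NULL;
    g_autoptr(GBytes) bytes = NULL;
    
    /* Suspends the fiber until the contents are available. */
    bytes = dex_await_boxed (dex_file_load_contents_bytes (file), &error);
    
    if (bytes == NULL)
      g_warning ("Failed to load contents: %s", error->message);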

    Reading Streams

    See dex_input_stream_read() , dex_input_stream_read_bytes() , dex_input_stream_skip() , and dex_input_stream_close() for working with input streams asynchronously.

    Writing Streams

    See dex_output_stream_write() , dex_output_stream_write_bytes() , dex_output_stream_splice() , and dex_output_stream_close() for writing to streams asynchronously.

    Sockets

    The dex_socket_listener_accept() , dex_socket_client_connect() , and dex_resolver_lookup_by_name() may be helpful when writing socket servers and clients.

    D-Bus

    Light integration exists for D-Bus to perform asynchronous method calls.

    See dex_dbus_connection_call() , dex_dbus_connection_call_with_unix_fd_list() , dex_dbus_connection_send_message_with_reply() and dex_dbus_connection_close() .

    We expect additional support for D-Bus to come at a later time.

    Subprocesses

    You can await completion of a subprocess using dex_subprocess_wait_check() .
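
    A short sketch on a fiber; only dex_subprocess_wait_check() comes from the paragraph above, the GSubprocess creation is ordinary GIO usage and "true" is an arbitrary command:

    g_autoptr(GError) error = NULL;
    g_autoptr(GSubprocess) subprocess =
      g_subprocess_new (G_SUBPROCESS_FLAGS_NONE, &error, "true", NULL);
    
    if (subprocess == NULL ||
        !dex_await (dex_subprocess_wait_check (subprocess), &error))
      g_warning ("Subprocess failed: %s", error->message);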

    Foundry uses some helpers to do UTF-8 communication but I’d like to bring that to libdex in an improved way post-1.0.

    Asynchronous IO with File Descriptors

    Gio.IOStream and related APIs provide ample opportunity for streams to be used asynchronously. There may be cases where you want similar behavior with traditional file descriptors.

    Libdex provides a set of AIO-like functions for traditional file-descriptors which may be backed with more efficient mechanisms.

    Gio.IOStream typically uses a thread pool for blocking IO operations on Linux and other operating systems because that was the fastest method when the APIs were created. However, on some operating systems such as Linux, faster methods finally exist.

    On Linux, io_uring can be used for asynchronous IO and is provided in the form of a Dex.Future .

    Asynchronous Reads

    DexFuture *dex_aio_read  (DexAioContext *aio_context,
                              int            fd,
                              gpointer       buffer,
                              gsize          count,
                              goffset        offset);
    

    Use the dex_aio_read() function to read from a file-descriptor. The result will be a future that resolves to a gint64 containing the number of bytes read.

    If there was a failure, the future will reject using the appropriate error code.

    Your buffer must stay alive for the duration of the asynchronous read. One easy way to make that happen is to wrap the resulting future in a dex_future_then() which stores the buffer as user_data and releases it when finished.

    If you are doing buffer pooling, more effort may be required.
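
    A sketch of that pattern: the buffer travels as the block's user_data and is released by the GDestroyNotify once the block has run. The NULL DexAioContext (meaning "use the default") and the helper names are assumptions, not taken from the libdex documentation:

    static DexFuture *
    read_done (DexFuture *future,
               gpointer   user_data)
    {
      guint8 *buffer = user_data;
      const GValue *value = dex_future_get_value (future, NULL);
    
      if (value != NULL)
        g_debug ("Read %" G_GINT64_FORMAT " bytes into %p",
                 g_value_get_int64 (value), buffer);
    
      /* buffer is freed by the GDestroyNotify passed to dex_future_then(). */
      return dex_ref (future);
    }
    
    static DexFuture *
    read_head_async (int fd)
    {
      guint8 *buffer = g_malloc (4096);
    
      return dex_future_then (dex_aio_read (NULL, fd, buffer, 4096, 0),
                              read_done, buffer, g_free);
    }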

    Asynchronous Writes

    DexFuture *dex_aio_write (DexAioContext *aio_context,
                              int            fd,
                              gconstpointer  buffer,
                              gsize          count,
                              goffset        offset);
    

    An API similar to dex_aio_read() exists for writing. It too will resolve to a gint64 containing the number of bytes written.

    buffer must be kept alive for the duration of the call and it is the caller's responsibility to do so.

    You can find this article in the documentation under Asynchronous IO .


      Hans de Goede: Leaving Red Hat

      news.movim.eu / PlanetGnome • 3 September

    After 17 years I feel that it is time to change things up a bit and for a new challenge. I'm leaving Red Hat and my last day at Red Hat will be October 31st.

    I would like to thank Red Hat for the opportunity to work on many interesting open-source projects during my time at Red Hat and for all the things I've learned while at Red Hat.

    I want to use this opportunity to thank everyone I've worked with, both my great Red Hat colleagues, as well as everyone from the community for all the good times during the last 17 years.

    I've a pretty good idea of what will come next, but this is not set in stone yet. I definitely will continue to work on open-source and on Linux hw-enablement.


      Christian Hergert: Scheduling & Fibers

      news.movim.eu / PlanetGnome • 3 September • 2 minutes

    Previously, and previously.

    Schedulers

    The Dex.Scheduler is responsible for running work items on a thread. This is performed by integrating with the thread's GMainContext. The main thread of your application will have a Dex.MainScheduler as the assigned scheduler.

    The scheduler manages callbacks such as work items created with Dex.Scheduler.push() . This can include blocks, fibers, and application provided work.

    You can get the default scheduler for the application’s main thread using Dex.Scheduler.get_default() . The current thread’s scheduler can be retrieved with Dex.Scheduler.ref_thread_default() .
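
    For example, in C (dex_unref() is assumed here to be the matching release for the reference returned by ref_thread_default()):

    /* The default scheduler for the application's main thread. */
    DexScheduler *main_scheduler = dex_scheduler_get_default ();
    
    /* The scheduler assigned to the current thread. */
    DexScheduler *scheduler = dex_scheduler_ref_thread_default ();
    
    /* ... push work items, spawn fibers, etc. ... */
    
    dex_unref (scheduler);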

    Thread Pool Scheduling

    Libdex manages a thread pool which may be retrieved using Dex.ThreadPoolScheduler.get_default() .

    The thread pool scheduler will manage a number of threads deemed useful based on the number of CPUs available. When io_uring is used, it will also restrict the number of workers to the number of urings available.

    Work items created from outside of the thread pool are placed into a global queue. Thread pool workers will take items from the global queue when they have no more items to process.

    To avoid “thundering herd” situations, often caused by global queues and thread pools, a pollable semaphore is used. On Linux specifically, io_uring and eventfd combined with EFD_SEMAPHORE allow waking up a single worker when a work item is queued.

    All thread pool workers have a local Dex.Scheduler, so timeouts and other GSource features continue to work.

    If you need to interact with long-blocking API calls it is better to use Dex.thread_spawn() rather than a thread pool thread.

    Thread pool workers use a work-stealing wait-free queue which allows the worker to push work items onto one side of the queue quickly. Doing so also helps improve cacheline effectiveness.

    Fibers

    Fibers are a type of stackful co-routine. A new stack is created and a trampoline is performed onto that stack from the current thread.

    Use Dex.Scheduler.spawn() to create a new fiber.

    When a fiber calls one of the Dex.Future.await() functions, or when it returns, the fiber is suspended and execution returns to the scheduler.

    By default, fibers have a 128 kB stack with a guard page at the end. Fiber stacks are pooled so that they may be reused during heavy use.

    Fibers are a Dex.Future which means you can await the completion of a fiber just like any other future.

    Note that fibers are pinned to a scheduler. They will not be migrated between schedulers even when a thread pool is in use.
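
    A minimal sketch of spawning a fiber; the timeout is just a stand-in for real awaited work, and the NULL/0 arguments are assumed to select the current scheduler and the default stack size:

    static DexFuture *
    my_fiber (gpointer user_data)
    {
      g_autoptr(GError) error = NULL;
    
      /* Awaiting suspends the fiber and returns control to the scheduler
       * until the awaited future completes. Timeouts only ever reject. */
      if (!dex_await (dex_timeout_new_seconds (1), &error))
        g_debug ("Await rejected: %s", error->message);
    
      return dex_future_new_true ();
    }
    
    DexFuture *future = dex_scheduler_spawn (NULL, 0, my_fiber, NULL, NULL);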

    Fiber Cancellation

    Fibers may be cancelled if the fiber has been discarded by all futures awaiting its completion. Fibers will always exit through a natural exit point such as a pending “await”. Once a fiber has been cancelled, all attempts to await will reject with an error.

    If you want to ignore cancellation of fibers, use Dex.Future.disown() on the fiber after creation.

    This article can be found at scheduling in the libdex documentation.


      Christian Hergert: Integrating Libdex and GAsyncResult

      news.movim.eu / PlanetGnome • 2 September • 2 minutes

    Previously.

    Historically if you wanted to do asynchronous work in GObject-based applications you would use GAsyncReadyCallback and GAsyncResult .

    There are two ways to integrate with this form of asynchronous API.

    In one direction, you can consume this historical API and provide the result as a DexFuture . In the other direction, you can provide this API in your application or library but implement it behind the scenes with DexFuture .

    Converting GAsyncResult to Futures

    A typical case to integrate, at least initially, is to extract the result of a GAsyncResult and propagate it to a future.

    One way to do that is with a Dex.Promise which will resolve or reject from your async callback.

    static void
    my_callback (GObject      *object,
                 GAsyncResult *result,
                 gpointer      user_data)
    {
      g_autoptr(DexPromise) promise = user_data;
      g_autoptr(GError) error = NULL;
    
      if (thing_finish (THING (object), result, &error))
        dex_promise_resolve_boolean (promise, TRUE);
      else
        dex_promise_reject (promise, g_steal_pointer (&error));
    }
    
    DexFuture *
    my_wrapper (Thing *thing)
    {
      DexPromise *promise = dex_promise_new_cancellable ();
    
      thing_async (thing,
                   dex_promise_get_cancellable (promise),
                   my_callback,
                   dex_ref (promise));
    
      return DEX_FUTURE (promise);
    }
    

    Implementing AsyncResult with Futures

    In some cases you may not want to “leak” into your API that you are using DexFuture . For example, you may want to only expose a traditional GIO API or maybe even clean up legacy code.

    For these cases use Dex.AsyncResult . It is designed to feel familiar to those that have used GTask .

    Dex.AsyncResult.new() allows taking the typical cancellable , callback , and user_data parameters similar to GTask .

    Then call Dex.AsyncResult.await(), providing the future that will resolve or reject with error. Once completed, the user's provided callback will be executed within the scheduler that was active at the time of creation.

    From your finish function, call the appropriate propagate API such as Dex.AsyncResult.propagate_int().

    void
    thing_async (Thing               *thing,
                 GCancellable        *cancellable,
                 GAsyncReadyCallback  callback,
                 gpointer             user_data)
    {
      g_autoptr(DexAsyncResult) result = NULL;
    
      result = dex_async_result_new (thing, cancellable, callback, user_data);
      dex_async_result_await (result, dex_future_new_true ());
    }
    
    gboolean
    thing_finish (Thing         *thing,
                  GAsyncResult  *result,
                  GError       **error)
    {
      return dex_async_result_propagate_boolean (DEX_ASYNC_RESULT (result), error);
    }
    

    Safety Notes

    One thing that Libdex handles better than GTask is ensuring that your user_data is destroyed on the proper thread. The design of Dex.Block was done in such a way that both the result and user_data are passed back to the owning thread at the same time. This ensures that your user_data will never be finalized on the wrong thread.

    This article can be found in the documentation at Integrating GAsyncResult .


      Christian Hergert: Libdex 1.0

      news.movim.eu / PlanetGnome • 1 September • 6 minutes

    A couple of years ago I spent a great deal of time in the waiting room of an allergy clinic. So much so that I finally found the time to write a library that was meant to be a follow-up to the libgtask and libiris libraries I wrote nearly two decades ago. A lot has changed in Linux since then and I felt that maybe this time, I could get it “right”.

    This will be a multi-part series, but today let's focus on terminology so we have a common language to communicate.

    Futures

    A future is a container that will eventually contain a result or an error.

    Programmers often use the words “future” and “promise” interchangeably. Libdex tries, when possible, to follow the academic nomenclature for futures. That is to say that a future is the interface and promise is a type of future.

    Futures exist in one of three states. The first state is pending . A future exists in this state until it has either rejected or resolved.

    The second state is resolved . A future reaches this state when it has successfully obtained a value.

    The third and last state is rejected. If there was a failure to obtain a value, the future will be in this state and contain a GError representing that failure.

    Promises and More

    A promise is a type of future that allows the creator to set the resolved value or error. This is a common type of future to use when you are integrating with things that are not yet integrated with Libdex.

    Other types of futures also exist.

    /* resolve to "true" */
    DexPromise *good = dex_promise_new ();
    dex_promise_resolve_boolean (good, TRUE);
    
    /* reject with error */
    DexPromise *bad = dex_promise_new ();
    dex_promise_reject (bad,
                        g_error_new (G_IO_ERROR,
                                     G_IO_ERROR_FAILED,
                                     "Failed"));
    

    Static Futures

    Sometimes you already know the result of a future upfront.
    The DexStaticFuture is used in this case.
    Various constructors for DexFuture will help you create one.

    For example, Dex.Future.new_take_object() will create a static future for a GObject -derived instance.

    DexFuture *future = dex_future_new_for_int (123);
    

    Blocks

    One of the most commonly used types of futures in Libdex is a DexBlock .

    A DexBlock is a callback that is run to process the result of a future. The block itself is also a future meaning that you can chain these blocks together into robust processing groups.

    “Then” Blocks

    The first type of block is a “then” block which is created using Dex.Future.then() . These blocks will only be run if the dependent future resolves with a value. Otherwise, the rejection of the dependent future is propagated to the block.

    static DexFuture *
    further_processing (DexFuture *future,
                        gpointer   user_data)
    {
      const GValue *result = dex_future_get_value (future, NULL);
    
      /* since future is completed at this point, you can also use
       * the simplified "await" API. Otherwise you'd get a rejection
       * for not being on a fiber. (More on that later).
       */
      g_autoptr(GObject) object = dex_await_object (dex_ref (future), NULL);
    
      return dex_ref (future);
    }
    

    “Catch” Blocks

    Since some futures may fail, there is value in being able to “catch” the failure and resolve it.

    Use Dex.Future.catch() to handle the result of a rejected future and resolve or reject it further.

    static DexFuture *
    catch_rejection (DexFuture *future,
                     gpointer   user_data)
    {
      g_autoptr(GError) error = NULL;
    
      dex_future_get_value (future, &error);
    
      if (g_error_matches (error, G_IO_ERROR, G_IO_ERROR_NOT_FOUND))
        return dex_future_new_true ();
    
      return dex_ref (future);
    }
    

    “Finally” Blocks

    There may be times when you want to handle completion of a future whether it resolved or rejected. For this case, use a “finally” block by calling Dex.Future.finally() .
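
    A minimal sketch, assuming Dex.Future.finally() takes the same callback, user_data, and destroy arguments as Dex.Future.then(); the block logs the outcome and propagates the original result:

    static DexFuture *
    log_completion (DexFuture *future,
                    gpointer   user_data)
    {
      g_autoptr(GError) error = NULL;
    
      if (dex_future_get_value (future, &error) == NULL)
        g_debug ("Operation rejected: %s", error->message);
    
      /* Propagate the original result, resolved or rejected. */
      return dex_ref (future);
    }
    
    future = dex_future_finally (future, log_completion, NULL, NULL);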

    Infinite Loops

    If you find you have a case where you want a DexBlock to loop indefinitely, you can use the _loop variants of the block APIs.

    See Dex.Future.then_loop() , Dex.Future.catch_loop() , or Dex.Future.finally_loop() . This is generally useful when your block’s callback will begin the next stage of work as the result of the callback.

    Future Sets

    A FutureSet is a type of future that is the composition of multiple futures. This is an extremely useful construct because it allows you to do work concurrently and then process the results in a sort of “reduce” phase.

    For example, you could issue a database query, a cache-server query, and a timeout, then process whichever completes first.

    There are multiple types of future sets based on the type of problem you want to solve.

    Dex.Future.all() can be used to resolve when all dependent futures have resolved; otherwise it will reject with an error once they have all completed.
    If you want to reject as soon as the first item rejects, Dex.Future.all_race() will get you that behavior.

    Other useful Dex.FutureSet constructors include Dex.Future.any() and Dex.Future.first().

    /* Either timeout or propagate result of cache/db query */
    return dex_future_first (dex_timeout_new_seconds (60),
                             dex_future_any (query_db_server (),
                                             query_cache_server (),
                                             NULL),
                             NULL);
    

    Cancellable

    Many programmers who use GTK and GIO are familiar with GCancellable . Libdex has something similar in the form of DexCancellable . However, in the Libdex case, DexCancellable is a future.

    It allows for convenient grouping with other futures to perform cancellation when the Dex.Cancellable.cancel() method is called.

    It can also integrate with GCancellable when created using Dex.Cancellable.new_from_cancellable() .

    A DexCancellable will only ever reject.

    DexFuture *future = dex_cancellable_new ();
    dex_cancellable_cancel (DEX_CANCELLABLE (future));
    

    Timeouts

    A timeout may be represented as a future.
    In this case, the timeout will reject after a time period has passed.

    A DexTimeout will only ever reject.

    This future is implemented on top of GMainContext via API like g_timeout_add().

    DexFuture *future = dex_timeout_new_seconds (60);
    

    Unix Signals

    Libdex can represent unix signals as a future. That is to say that the future will resolve to an integer of the signal number when that signal has been raised.

    This is implemented using g_unix_signal_source_new() and comes with all the same restrictions.
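
    A one-line sketch; the constructor name dex_unix_signal_new() is an assumption rather than something stated in this post:

    /* Resolves to the signal number once SIGINT is raised. */
    DexFuture *future = dex_unix_signal_new (SIGINT);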

    Delayed

    Sometimes you may run into a case where you want to gate the result of a future until a specific moment.

    For this case, DexDelayed allows you to wrap another future and decide when to “uncork” the result.

    DexFuture *delayed = dex_delayed_new (dex_future_new_true ());
    dex_delayed_release (DEX_DELAYED (delayed));
    

    Fibers

    Another type of future is a “fiber”.

    More care will be spent on fibers later on but suffice to say that the result of a fiber is easily consumable as a future via DexFiber .

    DexFuture *future = dex_scheduler_spawn (NULL, 0, my_fiber, state, state_free);
    

    Cancellation Propagation

    Futures within your application will inevitably depend on other futures.

    If all of the futures depending on a future have been released, the dependent future will have the opportunity to cancel itself. This allows for cascading cancellation so that unnecessary work may be elided.

    You can use Dex.Future.disown() to ensure that a future will continue to be run even if the dependent futures are released.

    Schedulers

    Much of the processing Libdex does needs to happen on the main loop of a thread. This is generally handled by a DexScheduler.

    The main thread of an application has the default scheduler, which is a DexMainScheduler.

    Libdex also has a managed thread pool of schedulers via the DexThreadPoolScheduler .

    Schedulers manage short tasks, executing DexBlock when they are ready, finalizing objects on their owning thread, and running fibers.

    Schedulers integrate with the current thread's GMainContext via a GSource, making it easy to use Libdex with GTK and Clutter-based applications.

    Channels

    DexChannel is a higher-level construct built on futures that allows passing work between producers and consumers. Channels are akin to Go channels in that they have a read side and a write side. However, they are much more focused on integrating well with DexFuture.
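
    A rough sketch only; the constructor and send/receive names below are assumptions about the DexChannel API rather than details from this post:

    DexChannel *channel = dex_channel_new (0);
    
    /* Producer side: send a future carrying a value. */
    dex_channel_send (channel, dex_future_new_for_int (123));
    
    /* Consumer side: the returned future resolves once a value arrives. */
    DexFuture *received = dex_channel_receive (channel);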

    You can find this article in the Libdex documentation under terminology .