

      Georges Basile Stavracas Neto: Marks and counters in Sysprof

      news.movim.eu / PlanetGnome • 8 September • 5 minutes

    Last year the WebKit project started to integrate its tracing routines with Sysprof . Since then, the feedback I’ve received is that it has been a pretty big improvement in the development of the engine! Yay.

    People started using Sysprof to get insights into the internal state of WebKit, gather data on how long different operations took, and more. Eventually we started hitting some limitations in Sysprof, mostly in the UI itself, such as a lack of correlation and visualization features.

    Earlier this year a rather interesting enhancement was added to Sysprof: it is now possible to filter the callgraph based on marks . What this means in practice is that it’s now possible to get statistically relevant data about what’s being executed during specific operations of the app.

    In parallel to WebKit, recently Mesa merged a patch that integrates Mesa’s tracing routines with Sysprof . This brought data from yet another layer of the stack, and it truly enriches the profiling we can do on apps. We now have marks from the DRM vblank event, the compositor, GTK rendering, WebKit, Mesa, back to GTK, back to the compositor, and finally the composited frame submitted to the kernel. A truly full stack view of everything.

    Screenshot of Sysprof showing Mesa and Webkit marks

    So, what’s the catch here? Well, if you’re an attentive reader, you may have noticed that the marks counter went from this last year:

    Screenshot of the marks tab with 9122 marks

    To this, in March 2025:

    Screenshot of the marks tab with 35068 marks

    And now, we’re at this number:

    Screenshot of the marks tab with 3243352 marks

    I do not jest when I say that this is a significant number! I mean, just look at this screenshot of a full view of marks:

    Screenshot of the Sysprof window resized to show all marks. It's very tall.

    Naturally, this is pushing Sysprof to its limits! The app is starting to struggle to handle such massive amounts of data. Having so much data also starts introducing noise in the marks – sometimes, for example, you don’t care about the Mesa marks, or the WebKit marks, or the GLib marks.

    Hiding Marks

    The most straightforward and impactful improvement that could be done, in light of what was explained above, was adding a way to hide certain marks and groups.

    Sysprof heavily uses GListModels, as is trendy in GTK4 apps, so marks, catalogs, and groups are all represented as lists containing lists containing items. It felt natural, then, to wrap these items in a new object with a visible property, and filter by this property. Pretty straightforward.

    Except it was not 🙂

    Turns out, the filtering infrastructure in GTK4 did not support monitoring items for property changes. After talking to GTK developers, I learned that this was just a missing feature that nobody had gotten around to implementing. Sounded like a great opportunity to enhance the toolkit!

    It took some wrestling, but it worked : the reviews were fantastic and now GtkFilterListModel has a new watch-items property . It only works when the filter supports monitoring, so unfortunately GtkCustomFilter doesn’t work here. The implementation is not exactly perfect, so further enhancements are always appreciated.
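
    For illustration, here is a rough sketch of the approach (not Sysprof’s actual code): a GtkBoolFilter over a wrapper object’s visible property, with the filter model asked to watch items for changes. The marks model and the MY_TYPE_MARK_WRAPPER type are assumptions made for the example:

    /* Rough sketch, not Sysprof's actual code: filter wrapper items by a
     * boolean "visible" property. MY_TYPE_MARK_WRAPPER is a hypothetical
     * GObject type exposing that property. */
    GtkExpression *expression =
      gtk_property_expression_new (MY_TYPE_MARK_WRAPPER, NULL, "visible");
    GtkFilter *filter = GTK_FILTER (gtk_bool_filter_new (expression));
    GtkFilterListModel *model =
      gtk_filter_list_model_new (G_LIST_MODEL (marks), filter);

    /* Ask the model to monitor items for property changes, so toggling
     * "visible" on a wrapper re-evaluates the filter (this requires a GTK
     * with the new watch-items property). */
    g_object_set (model, "watch-items", TRUE, NULL);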

    So behold! Sysprof can now filter marks out of the waterfall view:

    Counters

    Another area where we have lots of potential is counters. Sysprof supports tracking variables over time. This is super useful when you want to monitor, for example, CPU usage, I/O, network, and more.

    Naturally, WebKit has quite a number of internal counters that would be lovely to have in Sysprof to do proper integrated analysis. So between last year and this year, that’s what I’ve worked on as well! Have a look:

    Image of Sysprof counters showing WebKit information

    Unfortunately it took a long time to land some of these contributions, because Sysprof seemed to be behaving erratically with counters. After months of fighting with it, I eventually figured out what was going on with the counters, and wrote the patch with probably my biggest commit message this year (beaten only by a few others, including a literal poem .)

    Wkrictl

    WebKit also has a remote inspector, which has stats on JavaScript objects and whatnot. It needs to be enabled at build time, but it’s super useful when testing on embedded devices.

    I’ve started working on a way to extract this data from the remote inspector and stuff it into Sysprof as marks and counters. It’s called wkrictl . Have a look:

    This is far from finished, but I hope to be able to integrate this when it’s more concrete and well developed.

    Future Improvements

    Over the course of a year, WebKit went from nothing to deep integration with Sysprof, and more recently this evolved into actual tooling built around this integration. This is awesome, and has helped my colleagues and other contributors to help the project in ways that simply weren’t possible before.

    There’s still *a lot* of work to do though, and it’s often the kind of work that will benefit everyone using Sysprof, not only WebKit. Here are a few examples:

    • Integrate JITDump symbol resolution , which allows profiling the JavaScript running on webpages. There’s ongoing work on this, but it needs to be finished.
    • Per-PID marks and counters. Turns out, WebKit uses a multi-process architecture, so it would be better to redesign the marks and counters views to organize things by PID first, then groups, then catalogs.
    • A new timeline view. This is strictly speaking a condensed waterfall view, but it makes the relationship between “inner” and “outer” marks more obvious.
    • Performance tuning in Sysprof and GTK. We’re dealing with orders of magnitude more data than we used to, and the app is starting to struggle to keep up with it.

    Some of these tasks involve new user interfaces, so it would be absolutely lovely if Sysprof could get some design love from the design team. If anyone from the design team is reading this, we’d love to have your help 🙂

    Finally, after all this Sysprof work, Christian kindly invited me to help co-maintain the project, which I accepted. I don’t know how much time and energy I’ll be able to dedicate, but I’ll try and help however I can!

    I’d like to thank Christian Hergert, Benjamin Otte, and Matthias Clasen for all the code reviews, for all the discussions and patience during the influx of patches.


      Hubert Figuière: Dev Log August 2025

      news.movim.eu / PlanetGnome • 7 September • 1 minute

    Some of the stuff I did in August.

    AbiWord

    More memory leaks fixing.

    gudev-rs

    Updated gudev-rs to the latest glib-rs, as a requirement to port any code using it to the latest glib-rs.

    libopenraw

    A minor fix so that it can be used to thumbnail JPEG files by extracting the preview.

    Released alpha.12.

    Converted the x-trans interpolation to use floats. Also removed a few unnecessary unsafe blocks.

    Niepce

    A lot of work on the importer. Finally finished that UI bit I had in progress for a while and all the fallout from it. It is in the develop branch, which means it will be merged to main . This includes some UI layout changes to the dialog.

    Then I fixed the camera importer that was assuming everyone followed the DCIM specification (narrator: no they didn't). This meant it was broken on the iPhone 14 and the Fujifilm X-T3, which has two card slots (really, use a card reader if the camera uses memory cards). Also sped it up, though it's still really slow.

    Also improved handling of the asynchronous tasks running on a thread, like thumbnailing or listing camera import content. I'm almost ready to move on.

    Tore out code using gdkpixbuf, for many reasons: it's incompatible with multiple threads, and GDK textures were already created from raw buffers. This simplifies a lot of things.


      Jussi Pakkanen: Trying out import std

      news.movim.eu / PlanetGnome • 6 September • 4 minutes

    Since C++ compilers are starting to support import std , I ran a few experiments to see what the status of that is. GCC 15 on latest Ubuntu was used for all of the following.

    The goal

    One of the main goals of a working module implementation is to be able to support the following workflow:

    • Suppose we have an executable E
    • It uses a library L
    • L and E are made by different people and have different Git repositories and all that
    • We want to take the unaltered source code of L, put it inside E and build the whole thing (in Meson parlance this is known as subproject )
    • Build files do not need to be edited, i.e. the source of L is immutable
    • Make the build as fast as reasonably possible

    The simple start

    We'll start with a helloworld example.
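
    The source for standalone.cpp is not reproduced here, but a minimal sketch of such a program (assuming it uses std::println , which the timings below refer to) looks like this:

    // standalone.cpp: minimal helloworld sketch using import std.
    import std;

    int main()
    {
        std::println("Hello, world!");
        return 0;
    }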

    This requires two compiler invocations.

    g++-15 -std=c++26 -c -fmodules -fmodule-only -fsearch-include-path bits/std.cc

    g++-15 -std=c++26 -fmodules standalone.cpp -o standalone

    The first invocation compiles the std module and the second one uses it. There is already some wonkiness here. For example the documentation for -fmodule-only says that it only produces the module output, not an object file. However it also tries to link the result into an executable so you have to give it the -c argument to tell it to only create the object file, which the other flag then tells it not to create.

    Building the std module takes 3 seconds and the program itself takes 0.65 seconds. Compiling without modules takes about a second, but only 0.2 seconds if you use iostream instead of println .

    The module file itself goes to a directory called gcm.cache in the current working dir:

    Screenshot of the gcm.cache directory contents

    All in all this is fairly painless so far.

    So ... ship it?

    Not so fast. Let's see what happens if you build the module with a different standards version than the consuming executable.

    It detects the mismatch and errors out. Which is good, but also raises questions. For example, what happens if you build the module without definitions but the consuming app with -DNDEBUG ? In my testing it worked, but is it just a case of getting lucky with the UB slot machine? I don't know. What should happen? I don't know that either. Unfortunately there is an even bigger issue lurking about.

    Clash of File Name Clans

    If you are compiling with Ninja (and you should), all compiler invocations are made from the same directory (the build tree root). GCC also does not seem to provide a compiler flag to change the location of the gcm.cache directory. Thus if you have two targets that both use import std , their compiled modules get the same output file name. They would clobber each other, so Ninja will refuse to build them (Make probably ignores this, so the files end up clobbering each other and, if you are very lucky, this only causes a build error).

    Assuming that you can detect this and deduplicate building the std module, the end result still has a major limitation. You can only ever have one standard library module across all of your build targets. Personally I would be all for forcing this over the entire build tree, but sadly it is a limitation that can't really be imposed on existing projects. Sadly I know this from experience. People are doing weird things out there and they want to keep on weirding on. Sometimes even for valid technical reasons.

    Even if this issue were fixed, it does not really help. As you can probably tell, this clashing will happen for all modules. So if your project ever has two modules called utils , no matter where they are or who wrote them, they will both try to write gcm.cache/utils.gcm and either fail to build, fail on import, or invoke UB.

    Having the build system work around this by changing the working directory to implicitly make the cache directory go elsewhere (and repoint all paths at the same time) is not an option . All process invocations must be doable from the top level directory. This is the hill I will die on if I must!

    Instead what is needed is something like the target private directory proposal I made ages ago. With that you'd end up with command line arguments roughly like this:

    g++-15 <other args> --target-private-dir=path/to/foo.priv --project-private-dir=toplevel.priv

    The build system would guarantee that all compilations for a single target (library, exe, etc) have the same target private directory and all compilations in the build tree get the same top level private directory. This allows the compiler to do some build optimizations behind the scenes. For example if it needs to build a std module, it could copy it in the top level private directory and other targets could copy it from there instead of building it from scratch (assuming it is compatible and all that).


      Christian Hergert: Dedicated Threads with Futures

      news.movim.eu / PlanetGnome • 5 September

    There is often a need to integrate with blocking APIs that do not fit well into the async or future-based models of the GNOME ecosystem. In those cases, you may want to use a dedicated thread for blocking calls so that you do not disrupt main loops, timeouts, or fiber scheduling.

    This is ideal when doing things like interacting with libflatpak or even libgit2 .

    Creating a Dedicated Thread

    Use the dex_thread_spawn() function to spawn a new thread. When the thread completes the resulting future will either resolve or reject.

    typedef DexFuture *(*DexThreadFunc) (gpointer user_data);
    
    DexFuture *future = dex_thread_spawn ("[my-thread]", thread_func, thread_data,
                                          (GDestroyNotify)thread_data_free);
    

    Waiting for Future Completion

    Since dedicated threads do not have a Dex.Scheduler on them and are not a fiber, you may not await futures. Awaiting would suspend a fiber stack but there is no such fiber to suspend.

    To make integration easier, you may use dex_thread_wait_for() to wait for a future to complete. The mechanism used in this case is a mutex and condition variable which will be signaled when the dependent future completes.
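
    Putting the two together, a thread function might look roughly like this. This is a minimal sketch, not verified against the libdex documentation: fetch_remote_data() and MyData are hypothetical, and dex_thread_wait_for() is assumed here to return a success boolean:

    /* Sketch only: block the dedicated thread on another future before
     * resolving. fetch_remote_data() is a hypothetical function that
     * returns a DexFuture. */
    static DexFuture *
    my_thread_func (gpointer user_data)
    {
      MyData *data = user_data;
      g_autoptr(GError) error = NULL;

      /* Blocks this dedicated thread (not a fiber) until the dependent
       * future completes. */
      if (!dex_thread_wait_for (fetch_remote_data (data), &error))
        return dex_future_new_for_error (g_steal_pointer (&error));

      return dex_future_new_true ();
    }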


      Allan Day: Foundation Update, 2025-09-05

      news.movim.eu / PlanetGnome • 5 September • 4 minutes

    Steven’s weekly Foundation updates were fantastic and, since I’m filling in the ED role as President, I want to keep them going. I’m still mulling over the frequency and scheduling of the posts, but for now I’ll stick to weekly on Fridays.

    I’m trying to make this an update for the entire organisation, rather than for me personally, so a lot of this update is about others’ activities, rather than my own. Inevitably I can’t report on everything and will miss some items, so apologies to anyone if I don’t manage to cover your contributions. We can always squeeze omissions into a future update, so let me know if I missed anything.

    As many of you know, Steven departed his role as ED last week . Everyone on the board knows that this was a shock to many people, and that it has had an impact on the GNOME community. I think that, for now, there’s not a huge amount that I can say about that topic. However, what I will say is that its significance has not been lost on those of us who are working on the Foundation, and that we are committed to making good on our ambitions for the Foundation and for the GNOME Project more generally. With that said, here’s what we’ve been up to this week.

    Transition

    Transitioning the Foundation leadership has been a major focus for this week, due to Steven’s departure and new officers being appointed. Handing over all the accounts is quite a big task which has involved paperwork and calls, and this work will carry on for a while longer yet.

    I’d like to thank Steven for being exceptionally helpful during this process. Everyone on the Exec Committee is very appreciative.

    I’ve been handing the chair role over to Maria, which involved me dumping a lot of procedural information on her in a fairly uncontrolled manner. Sorry Maria!

    I’ve also started to participate in the management of our paid team members. Thanks to everyone for being accommodating during this process.

    There have also been a lot of messages and conversations with partners. I am trying to get around to everyone, but am still midway through the process. If there’s anyone out there who would like to say hi, please don’t be shy! My door is always open and I’d love to hear from anyone who has a relationship with the GNOME Foundation and would like to connect.

    Budget season

    The GNOME Foundation annual budget runs from October to September, so we are currently busy working on the budget that will come into effect in October this year. Deepa, our new treasurer, did an amazing job and provided the board with a draft budget ahead of time, and we had our first meeting to discuss it this week. There will be more meetings over September and we’ll provide a report on the new budget once it has been finalised.

    Sponsorships

    We got our first corporate sponsor through the new donations platform this week, which feels like a great milestone. Many thanks to Symless for your support! Our corporate supporter packages are a great way for businesses of all sizes to help sustain GNOME and the wider ecosystem, and these sponsorships make a huge difference. If anyone reading this post thinks that their employer might be interested in a corporate sponsorship package, please do encourage them to sign up. And we’re always happy to talk to potential donors.

    Framework

    One particular highlight from this week was a call between GNOME developers and members of the Framework team. This was something that Steven had set in motion (again, thanks Steven!) and was incredibly exciting from my perspective. Partnerships between hardware vendors and upstream open source projects have so much potential, both in terms of delivering better products, but also improving the efficiency of development and testing. I can’t wait to see where this goes.

    Outreachy

    The Exec Committee approved funding for a single Outreachy intern for the December 2025 cohort. This funding is coming from an Endless grant , so thanks to Endless for your support. Outreachy is a fantastic program so we’re happy we can continue to fund GNOME’s participation.

    Audit

    The GNOME Foundation is due to be audited next calendar year. This is a standard process, but we’ve never been audited before, so it’s going to be a new experience for us. This week I signed the paperwork to secure an auditor, who came highly recommended to us. This was an important step in making sure that the audit happens according to schedule.

    All the usual

    Other Foundation programs continue to run this week. Flathub is being worked on, events are being planned, and we continue to have development activities happening that we’re supporting. There’s already a lot to mention in this post, so I won’t go into specifics right now, but all this work is important and valued. Thanks to everyone who is doing this work. Keep it up!

    Message ends

    That’s it from me. See you next week. o/


      Christian Hergert: Asynchronous IO with Libdex

      news.movim.eu / PlanetGnome • 4 September • 3 minutes

    Previously , previously , and previously .

    The Gio.IOStream APIs already provide robust support for asynchronous IO. The common API allows for different implementations depending on the underlying stream type.

    Libdex provides wrappers for various APIs. Coverage is not complete but we do expect additional APIs to be covered in future releases.

    File Management

    See dex_file_copy() for copying files.

    See dex_file_delete() for deleting files.

    See dex_file_move() for moving files.

    File Attributes

    See dex_file_query_info() , dex_file_query_file_type() , and dex_file_query_exists() for basic querying.

    You can set file attributes using dex_file_set_attributes() .

    Directories

    You can create a directory or hierarchy of directories using dex_file_make_directory() and dex_file_make_directory_with_parents() respectively.

    Enumerating Files

    You can create a file enumerator for a directory using dex_file_enumerate_children() .

    You can also asynchronously enumerate the files of that directory using dex_file_enumerator_next_files() which will resolve to a g_autolist(GFileInfo) of infos.

    Reading and Writing Files

    The dex_file_read() will provide a Gio.FileInputStream which can be read from.

    A simpler interface to get the bytes of a file is provided via dex_file_load_contents_bytes() .
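
    For example, loading a file’s contents from within a fiber could look roughly like this (a sketch, assuming dex_file_load_contents_bytes() takes just the GFile and that dex_await_boxed() yields the GBytes result):

    /* Sketch: await the file contents as GBytes from within a fiber. */
    static DexFuture *
    load_file_fiber (gpointer user_data)
    {
      GFile *file = user_data;
      g_autoptr(GError) error = NULL;
      g_autoptr(GBytes) bytes = NULL;

      bytes = dex_await_boxed (dex_file_load_contents_bytes (file), &error);
      if (bytes == NULL)
        return dex_future_new_for_error (g_steal_pointer (&error));

      g_debug ("Loaded %" G_GSIZE_FORMAT " bytes", g_bytes_get_size (bytes));

      return dex_future_new_true ();
    }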

    The dex_file_replace() will replace a file on disk providing a Gio.FileOutputStream to write to. The dex_file_replace_contents_bytes() provides a simplified API for this when the content is readily available.

    Reading Streams

    See dex_input_stream_read() , dex_input_stream_read_bytes() , dex_input_stream_skip() , and dex_input_stream_close() for working with input streams asynchronously.

    Writing Streams

    See dex_output_stream_write() , dex_output_stream_write_bytes() , dex_output_stream_splice() , and dex_output_stream_close() for writing to streams asynchronously.

    Sockets

    The dex_socket_listener_accept() , dex_socket_client_connect() , and dex_resolver_lookup_by_name() may be helpful when writing socket servers and clients.

    D-Bus

    Light integration exists for D-Bus to perform asynchronous method calls.

    See dex_dbus_connection_call() , dex_dbus_connection_call_with_unix_fd_list() , dex_dbus_connection_send_message_with_reply() and dex_dbus_connection_close() .

    We expect additional support for D-Bus to come at a later time.

    Subprocesses

    You can await completion of a subprocess using dex_subprocess_wait_check() .
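
    From a fiber, that might look roughly like the following sketch (the single-argument form of dex_subprocess_wait_check() is an assumption):

    /* Sketch: run a command and await its successful completion. */
    g_autoptr(GError) error = NULL;
    g_autoptr(GSubprocess) subprocess =
      g_subprocess_new (G_SUBPROCESS_FLAGS_NONE, &error, "true", NULL);

    if (subprocess == NULL ||
        !dex_await (dex_subprocess_wait_check (subprocess), &error))
      g_warning ("Subprocess failed: %s", error->message);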

    Foundry uses some helpers to do UTF-8 communication but I’d like to bring that to libdex in an improved way post-1.0.

    Asynchronous IO with File Descriptors

    Gio.IOStream and related APIs provide much opportunity for streams to be used asynchronously. There may be cases where you want similar behavior with traditional file-descriptors.

    Libdex provides a set of AIO-like functions for traditional file-descriptors which may be backed with more efficient mechanisms.

    Gio.IOStream typically uses a thread pool performing blocking IO operations on Linux and other operating systems, because that was the fastest method when the APIs were created. However, on some operating systems such as Linux, faster methods finally exist.

    On Linux, io_uring can be used for asynchronous IO and is provided in the form of a Dex.Future .

    Asynchronous Reads

    DexFuture *dex_aio_read  (DexAioContext *aio_context,
                              int            fd,
                              gpointer       buffer,
                              gsize          count,
                              goffset        offset);
    

    Use the dex_aio_read() function to read from a file-descriptor. The result will be a future that resolves to a gint64 containing the number of bytes read.

    If there was a failure, the future will reject using the appropriate error code.

    Your buffer must stay alive for the duration of the asynchronous read. One easy way to make that happen is to wrap the resulting future in a dex_future_then() which stores the buffer as user_data and releases it when finished.
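
    A rough sketch of that pattern, where aio_context and fd are assumed to be set up elsewhere and read_done is an illustrative name:

    /* Sketch: pass the buffer as user_data with g_free as the destroy
     * notify so it stays alive until the read has finished. */
    static DexFuture *
    read_done (DexFuture *future, gpointer user_data)
    {
      guint8 *buffer = user_data;
      g_autoptr(GError) error = NULL;
      const GValue *value = dex_future_get_value (future, &error);

      if (value == NULL)
        return dex_future_new_for_error (g_steal_pointer (&error));

      /* ... use buffer[0 .. g_value_get_int64 (value)) ... */

      return dex_future_new_true ();
    }

    guint8 *buffer = g_malloc (4096);
    DexFuture *future =
      dex_future_then (dex_aio_read (aio_context, fd, buffer, 4096, 0),
                       read_done, buffer, g_free);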

    If you are doing buffer pooling, more effort may be required.

    Asynchronous Writes

    DexFuture *dex_aio_write (DexAioContext *aio_context,
                              int            fd,
                              gconstpointer  buffer,
                              gsize          count,
                              goffset        offset);
    

    A similar API exists as dex_aio_read() but for writing. It too will resolve to a gint64 containing the number of bytes written.

    buffer must be kept alive for the duration of the call and it is the caller’s responsibility to do so.

    You can find this article in the documentation under Asynchronous IO .


      Hans de Goede: Leaving Red Hat

      news.movim.eu / PlanetGnome • 3 September

    After 17 years I feel that it is time to change things up a bit and for a new challenge. I'm leaving Red Hat and my last day at Red Hat will be October 31st.

    I would like to thank Red Hat for the opportunity to work on many interesting open-source projects during my time at Red Hat and for all the things I've learned while at Red Hat.

    I want to use this opportunity to thank everyone I've worked with, both my great Red Hat colleagues, as well as everyone from the community for all the good times during the last 17 years.

    I've a pretty good idea of what will come next, but this is not set in stone yet. I definitely will continue to work on open-source and on Linux hw-enablement.


      Christian Hergert: Scheduling & Fibers

      news.movim.eu / PlanetGnome • 3 September • 2 minutes

    Previously , and previously .

    Schedulers

    The Dex.Scheduler is responsible for running work items on a thread. This is performed by integrating with the thread’s GMainContext . The main thread of your application will have a Dex.MainScheduler as the assigned scheduler.

    The scheduler manages callbacks such as work items created with Dex.Scheduler.push() . This can include blocks, fibers, and application provided work.

    You can get the default scheduler for the application’s main thread using Dex.Scheduler.get_default() . The current thread’s scheduler can be retrieved with Dex.Scheduler.ref_thread_default() .
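
    As a small sketch of pushing a work item (assuming the work item callback simply receives its user data):

    /* Sketch: queue a work item on the main scheduler; the callback runs
     * on that scheduler's thread and GMainContext. */
    static void
    my_work_item (gpointer user_data)
    {
      g_message ("running on the main scheduler");
    }

    dex_scheduler_push (dex_scheduler_get_default (), my_work_item, NULL);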

    Thread Pool Scheduling

    Libdex manages a thread pool which may be retrieved using Dex.ThreadPoolScheduler.get_default() .

    The thread pool scheduler will manage a number of threads that is deemed useful based on the number of CPUs available. When io_uring is used, it will also restrict the number of workers to the number of io_urings available.

    Work items created from outside of the thread pool are placed into a global queue. Thread pool workers will take items from the global queue when they have no more items to process.

    To avoid “thundering herd” situations often caused by global queues and thread pools, a pollable semaphore is used. On Linux specifically, io_uring and eventfd combined with EFD_SEMAPHORE allow waking up a single worker when a work item is queued.

    All thread pool workers have a local Dex.Scheduler so use of timeouts and other GSource features continue to work.

    If you need to interact with long-blocking API calls it is better to use Dex.thread_spawn() rather than a thread pool thread.

    Thread pool workers use a work-stealing wait-free queue which allows the worker to push work items onto one side of the queue quickly. Doing so also helps improve cacheline effectiveness.

    Fibers

    Fibers are a type of stackful co-routine . A new stack is created and a trampoline is performed onto the stack from the current thread.

    Use Dex.Scheduler.spawn() to create a new fiber.

    When a fiber calls one of the Dex.Future.await() functions, or when it returns, the fiber is suspended and execution returns to the scheduler.
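
    Putting this together, spawning a fiber that awaits a future could look roughly like this (a sketch; NULL and 0 are assumed to select the default scheduler and stack size, and load_something() is a hypothetical function returning a DexFuture):

    /* Sketch: a fiber that awaits a future; awaiting suspends the fiber
     * and returns control to the scheduler until the future settles. */
    static DexFuture *
    my_fiber (gpointer user_data)
    {
      g_autoptr(GError) error = NULL;

      if (!dex_await (load_something (), &error))
        return dex_future_new_for_error (g_steal_pointer (&error));

      return dex_future_new_true ();
    }

    DexFuture *fiber = dex_scheduler_spawn (NULL, 0, my_fiber, NULL, NULL);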

    By default, fibers have a 128-kb stack with a guard page at the end. Fiber stacks are pooled so that they may be reused during heavy use.

    Fibers are a Dex.Future which means you can await the completion of a fiber just like any other future.

    Note that fibers are pinned to a scheduler. They will not be migrated between schedulers even when a thread pool is in use.

    Fiber Cancellation

    Fibers may be cancelled if the fiber has been discarded by all futures awaiting completion. Fibers will always exit through a natural exit point such as a pending “await”. All attempts to await will reject with error once a fiber has been cancelled.

    If you want to ignore cancellation of fibers, use Dex.Future.disown() on the fiber after creation.

    This article can be found at scheduling in the libdex documentation.


      Christian Hergert: Integrating Libdex and GAsyncResult

      news.movim.eu / PlanetGnome • 2 September • 2 minutes

    Previously .

    Historically if you wanted to do asynchronous work in GObject-based applications you would use GAsyncReadyCallback and GAsyncResult .

    There are two ways to integrate with this form of asynchronous API.

    In one direction, you can consume this historical API and provide the result as a DexFuture . In the other direction, you can provide this API in your application or library but implement it behind the scenes with DexFuture .

    Converting GAsyncResult to Futures

    A typical case to integrate, at least initially, is to extract the result of a GAsyncResult and propagate it to a future.

    One way to do that is with a Dex.Promise which will resolve or reject from your async callback.

    static void
    my_callback (GObject      *object,
                 GAsyncResult *result,
                 gpointer      user_data)
    {
      g_autoptr(DexPromise) promise = user_data;
      g_autoptr(GError) error = NULL;
    
      if (thing_finish (THING (object), result, &error))
        dex_promise_resolve_boolean (promise, TRUE);
      else
        dex_promise_reject (promise, g_steal_pointer (&error));
    }
    
    DexFuture *
    my_wrapper (Thing *thing)
    {
      DexPromise *promise = dex_promise_new_cancellable ();
    
      thing_async (thing,
                   dex_promise_get_cancellable (promise),
                   my_callback,
                   dex_ref (promise));
    
      return DEX_FUTURE (promise);
    }
    

    Implementing AsyncResult with Futures

    In some cases you may not want to “leak” into your API that you are using DexFuture . For example, you may want to only expose a traditional GIO API or maybe even clean up legacy code.

    For these cases use Dex.AsyncResult . It is designed to feel familiar to those that have used GTask .

    Dex.AsyncResult.new() allows taking the typical cancellable , callback , and user_data parameters similar to GTask .

    Then call Dex.AsyncResult.await() providing the future that will resolve or reject with error. Once completed, the user’s provided callback will be executed within the scheduler that was active at the time of creation.

    From your finish function, call the appropriate propagate API such as Dex.AsyncResult.propagate_int() .

    void
    thing_async (Thing               *thing,
                 GCancellable        *cancellable,
                 GAsyncReadyCallback  callback,
                 gpointer             user_data)
    {
      g_autoptr(DexAsyncResult) result = NULL;
    
      result = dex_async_result_new (thing, cancellable, callback, user_data);
      dex_async_result_await (result, dex_future_new_true ());
    }
    
    gboolean
    thing_finish (Thing         *thing,
                  GAsyncResult  *result,
                  GError       **error)
    {
      return dex_async_result_propagate_boolean (DEX_ASYNC_RESULT (result), error);
    }
    

    Safety Notes

    One thing that Libdex handles better than GTask is ensuring that your user_data is destroyed on the proper thread. The design of Dex.Block was done in such a way that both the result and user_data are passed back to the owning thread at the same time. This ensures that your user_data will never be finalized on the wrong thread.

    This article can be found in the documentation at Integrating GAsyncResult .