
      Asman Malika: Career Opportunities: What This Internship Is Teaching Me About the Future

      news.movim.eu / PlanetGnome • 20 hours ago • 2 minutes

    Before Outreachy, when I thought about career opportunities, I mostly thought about job openings, applications, and interviews. Opportunities felt like something you wait for, or hope to be selected for.

    This internship has changed how I see that completely.

    I’m learning that opportunities are often created through contribution, visibility, and community, not just applications.

    Opportunities Look Different in Open Source

    Working with GNOME has shown me that contributing to open source is not just about writing code; it’s about building a public track record. Every merge request, every review cycle, every improvement becomes part of a visible body of work.

    Through my work on Papers (implementing manual signature features, fixing issues, contributing to the Poppler codebase, and now working on digital signatures), I’m not just completing tasks. I’m building real-world experience in a production codebase used by actual users.

    That kind of experience creates opportunities that don’t always show up on job boards:

    • Collaborating with experienced maintainers
    • Learning large-project workflows
    • Becoming known within a technical community
    • Developing credibility through consistent contributions

    Skills That Expand My Career Options

    This internship is also expanding what I feel qualified to do. I’m gaining experience with:

    • Building new features
    • Large, existing codebases
    • Code review and iteration cycles
    • Debugging build failures and integration issues
    • Writing clearer documentation and commit messages
    • Communicating technical progress

    These are skills that apply across many roles, not just one job title. They open doors to remote collaboration, open-source roles, and product-focused engineering work.

    Career Is Bigger Than Employment

    One mindset shift for me is that career is no longer just about “getting hired.” It’s also about impact and direction.

    I now think more about:

    • What kind of software I want to help build
    • What communities I want to contribute to
    • How accessible and user-focused tools can be
    • How I can support future newcomers the way my GNOME mentors supported me

    Open source makes a career feel less like a ladder and more like a network.

    Creating Opportunities for Others

    Coming from a non-traditional path into tech, I’m especially aware of how powerful access and guidance can be. Programs like Outreachy don’t just create opportunities for individuals, they multiply opportunities through community.

    As I grow, I want to contribute not only through code, but also through sharing knowledge, documenting processes, and encouraging others who feel unsure about entering open source.

    Looking Ahead

    I don’t have every step mapped out yet. But I now have something better: direction and momentum.

    I want to continue contributing to open source, deepen my technical skills, and work on tools that people actually use. Outreachy and GNOME have shown me that opportunities often come from showing up consistently and contributing thoughtfully.

    That’s the path I plan to keep following.


      Andy Wingo: six thoughts on generating c

      news.movim.eu / PlanetGnome • 20 hours ago • 8 minutes

    So I work in compilers, which means that I write programs that translate programs to programs. Sometimes you will want to target a language at a higher level than just, like, assembler, and oftentimes C is that language. Generating C is less fraught than writing C by hand, as the generator can often avoid the undefined-behavior pitfalls that one has to be so careful about when writing C by hand. Still, I have found some patterns that help me get good results.

    Today’s note is a quick summary of things that work for me. I won’t be so vain as to call them “best practices”, but they are my practices, and you can have them too if you like.

    static inline functions enable data abstraction

    When I learned C, in the early days of GStreamer (oh bless its heart it still has the same web page!), we used lots of preprocessor macros. Mostly we got the message over time that many macro uses should have been inline functions; macros are for token-pasting and generating names, not for data access or other implementation.

    But what I did not appreciate until much later was that always-inline functions remove any possible performance penalty for data abstractions. For example, in Wastrel, I can describe a bounded range of WebAssembly memory via a memory struct, and an access to that memory in another struct:

    struct memory { uintptr_t base; uint64_t size; };
    struct access { uint32_t addr; uint32_t len; };
    

    And then if I want a writable pointer to that memory, I can do so:

    #define static_inline \
      static inline __attribute__((always_inline))
    
    static_inline void* write_ptr(struct memory m, struct access a) {
      BOUNDS_CHECK(m, a);
      char *base = __builtin_assume_aligned((char *) m.base, 4096);
      return (void *) (base + a.addr);
    }
    

    (Wastrel usually omits any code for BOUNDS_CHECK, and just relies on memory being mapped into a PROT_NONE region of an appropriate size. We use a macro there because if the bounds check fails and kills the process, it’s nice to be able to use __FILE__ and __LINE__.)

    Regardless of whether explicit bounds checks are enabled, the static_inline attribute ensures that the abstraction cost is entirely burned away; and in the case where bounds checks are elided, we don’t need the size of the memory or the len of the access, so they won’t be allocated at all.

    If write_ptr weren’t static_inline, I would be a little worried that somewhere one of these struct values would get passed through memory. This is mostly a concern with functions that return structs by value; whereas on e.g. AArch64, returning a struct memory would use the same registers that a call to void (*)(struct memory) would use for the argument, the System V x86-64 ABI only allocates two general-purpose registers for return values. I would mostly prefer not to think about this flavor of bottleneck, and that is what static inline functions do for me.

    avoid implicit integer conversions

    C has an odd set of default integer conversions, for example promoting uint8_t to signed int, and also has weird boundary conditions for signed integers. When generating C, we should probably sidestep these rules and instead be explicit: define static inline u8_to_u32, s16_to_s32, etc. conversion functions, and turn on -Wconversion.

    Using static inline cast functions also allows the generated code to assert that operands are of a particular type. Ideally, you end up in a situation where all casts are in your helper functions, and no cast is in generated code.
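    As a concrete sketch of what such helpers might look like (the names are mine, modeled on the ones mentioned above, not lifted from any particular generator): each widening or narrowing lives in one small audited function, so the generated code itself never contains a bare cast and compiles cleanly under -Wconversion.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical explicit-conversion helpers.  All casts live here;
       generated code only ever calls these, never casts directly. */
    static inline uint32_t u8_to_u32(uint8_t x)  { return (uint32_t) x; }
    static inline int32_t  s16_to_s32(int16_t x) { return (int32_t) x; }
    /* Narrowing is explicit too: the mask documents the truncation. */
    static inline uint8_t  u32_to_u8(uint32_t x) { return (uint8_t) (x & 0xff); }
    ```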

    wrap raw pointers and integers with intent

    Whippet is a garbage collector written in C. A garbage collector cuts across all data abstractions: objects are sometimes viewed as absolute addresses, or ranges in a paged space, or offsets from the beginning of an aligned region, and so on. If you represent all of these concepts with size_t or uintptr_t or whatever, you’re going to have a bad time. So Whippet has struct gc_ref , struct gc_edge , and the like: single-member structs whose purpose it is to avoid confusion by partitioning sets of applicable operations. A gc_edge_address call will never apply to a struct gc_ref , and so on for other types and operations.
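    The pattern is small enough to show in full. This is a sketch in the spirit of Whippet’s wrappers, not its actual definitions: the single-member struct costs nothing once inlined, but the type checker now refuses to pass a raw uintptr_t (or some other wrapper) where a gc_ref is expected.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Single-member struct: same size as uintptr_t, but a distinct
       type, so the applicable operations are partitioned by name. */
    struct gc_ref { uintptr_t value; };

    static inline struct gc_ref gc_ref_from_addr(uintptr_t addr) {
      return (struct gc_ref) { addr };
    }
    static inline uintptr_t gc_ref_value(struct gc_ref ref) {
      return ref.value;
    }
    ```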

    This is a great pattern for hand-written code, but it’s particularly powerful for compilers: you will often end up compiling a term of a known type or kind and you would like to avoid mistakes in the residualized C.

    For example, when compiling WebAssembly, consider struct.set’s operational semantics: the textual rendering states, “Assert: Due to validation, val is some ref.struct structaddr.” Wouldn’t it be nice if this assertion could translate to C? Well in this case it can: with single-inheritance subtyping (as WebAssembly has), you can make a forest of pointer subtypes:

    typedef struct anyref { uintptr_t value; } anyref;
    typedef struct eqref { anyref p; } eqref;
    typedef struct i31ref { eqref p; } i31ref;
    typedef struct arrayref { eqref p; } arrayref;
    typedef struct structref { eqref p; } structref;
    

    So for a (type $type_0 (struct (mut f64))), I might generate:

    typedef struct type_0ref { structref p; } type_0ref;
    

    Then if I generate a field setter for $type_0 , I make it take a type_0ref :

    static inline void
    type_0_set_field_0(type_0ref obj, double val) {
      ...
    }
    

    In this way the types carry through from source to target language. There is a similar type forest for the actual object representations:

    typedef struct wasm_any { uintptr_t type_tag; } wasm_any;
    typedef struct wasm_struct { wasm_any p; } wasm_struct;
    typedef struct type_0 { wasm_struct p; double field_0; } type_0;
    ...
    

    And we generate little cast routines to go back and forth between type_0ref and type_0* as needed. There is no overhead because all routines are static inline, and we get pointer subtyping for free: if a struct.set $type_0 0 instruction is passed a subtype of $type_0 , the compiler can generate an upcast that type-checks.

    fear not memcpy

    In WebAssembly, accesses to linear memory are not necessarily aligned, so we can’t just cast an address to (say) int32_t* and dereference. Instead we memcpy(&i32, addr, sizeof(int32_t)), and trust the compiler to just emit an unaligned load if it can (and it can). No need for more words here!
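    Well, maybe a few words in code form. A minimal sketch of such a load helper (the name read_i32 is mine): optimizing compilers lower the fixed-size memcpy to a single, possibly unaligned, load on targets that permit it, with no actual call to memcpy.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Load an int32_t from a possibly-unaligned address without
       invoking undefined behavior via a misaligned dereference. */
    static inline int32_t read_i32(const void *addr) {
      int32_t i32;
      memcpy(&i32, addr, sizeof i32);
      return i32;
    }
    ```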

    for ABI and tail calls, perform manual register allocation

    So, GCC finally has __attribute__((musttail)): praise be. However, when compiling WebAssembly, it could be that you end up compiling a function with, like, 30 arguments, or 30 return values; I don’t trust a C compiler to reliably shuffle between different stack argument needs at tail calls to or from such a function. It could even refuse to compile a file if it can’t meet its musttail obligations; not a good characteristic for a target language.

    Really you would like it if all function parameters were allocated to registers. You can ensure this is the case if, say, you only pass the first n values in registers, and then pass the rest in global variables. You don’t need to pass them on a stack, because you can make the callee load them back to locals as part of the prologue.

    What’s fun about this is that it also neatly enables multiple return values when compiling to C: simply go through the set of function types used in your program, allocate enough global variables of the right types to store all return values, and make a function epilogue store any “excess” return values—those beyond the first return value, if any—in global variables, and have callers reload those values right after calls.
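    A hypothetical sketch of the multiple-return-value half of this scheme (names and the spill-area size are mine): the callee stores “excess” results in a global spill area as part of its epilogue, and the caller reloads them immediately after the call.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Spill area for return values beyond the first; the same trick
       works for arguments beyond the first N register-allocated ones. */
    static uint32_t ret_spill[8];

    /* A two-result function: quotient via the ordinary return value
       (which stays in a register), remainder via the spill area. */
    static uint32_t u32_divmod(uint32_t a, uint32_t b) {
      ret_spill[0] = a % b;  /* "epilogue" stores the excess result */
      return a / b;
    }
    ```

    A caller then does `uint32_t q = u32_divmod(17, 5); uint32_t r = ret_spill[0];`, reloading the spilled value right after the call, before anything else can clobber it.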

    what’s not to like

    Generating C is a local optimum: you get the industrial-strength instruction selection and register allocation of GCC or Clang, you don’t have to implement many peephole-style optimizations, and you get to link to possibly-inlinable C runtime routines. It’s hard to improve over this design point in a marginal way.

    There are drawbacks, of course. As a Schemer, my largest source of annoyance is that I don’t have control of the stack: I don’t know how much stack a given function will need, nor can I extend the stack of my program in any reasonable way. I can’t iterate the stack to precisely enumerate embedded pointers (but perhaps that’s fine). I certainly can’t slice a stack to capture a delimited continuation.

    The other major irritation is about side tables: one would like to be able to implement so-called zero-cost exceptions, but without support from the compiler and toolchain, it’s impossible.

    And finally, source-level debugging is gnarly. You would like to be able to embed DWARF information corresponding to the code you residualize; I don’t know how to do that when generating C.

    (Why not Rust, you ask? Of course you are asking that. For what it is worth, I have found that lifetimes are a frontend issue; if I had a source language with explicit lifetimes, I would consider producing Rust, as I could machine-check that the output has the same guarantees as the input. Likewise if I were using a Rust standard library. But if you are compiling from a language without fancy lifetimes, I don’t know what you would get from Rust: fewer implicit conversions, yes, but less mature tail call support, longer compile times... it’s a wash, I think.)

    Oh well. Nothing is perfect, and it’s best to go into things with your eyes wide open. If you got down to here, I hope these notes help you in your generations. For me, once my generated C type-checked, it worked: very little debugging has been necessary. Hacking is not always like this, but I’ll take it when it comes. Until next time, happy hacking!


      Jussi Pakkanen: C and C++ dependencies, don't dream it, be it!

      news.movim.eu / PlanetGnome • 2 days ago • 3 minutes

    Bill Hoffman, the original creator of the CMake language, held a presentation at CppCon. At approximately 49 minutes in he starts talking about future plans for dependency management. He says, and I now quote him directly, that "in this future I envision", one should be able to do something like the following (paraphrasing).

    Your project has dependencies A, B and C. Typically you get them from "the system" or a package manager. But then you'd need to develop one of the deps as well. So it would be nice if you could somehow download, say, A, build it as part of your own project and, once you are done, switch back to the system one.

    Well, Mr. Hoffman, do I have wonderful news for you! You don't need to treasure these sensual daydreams any more. This so-called "future" you "envision" is not only the present, but in fact ancient past. This method of dependency management has existed in Meson for so long I don't even remember when it got added. Something like over five years at least.

    How would you use such a wild and untamed thing?

    Let's assume you have a Meson project that is using some dependency called bob . The current build is using it from the system (typically via pkg-config, but the exact method is irrelevant). In order to build it from source as part of your own project, first you need to obtain that source. Assuming it is available in WrapDB, all you need to do is run this command:

    meson wrap install bob

    If it is not, then you need to do some more work. You can even tell Meson to check out the project's Git repo and build against current trunk if you so prefer. See documentation for details.
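    For the non-WrapDB case, the wrap file you drop into subprojects/ is just a small ini file. A hypothetical bob.wrap pointing at a Git repository might look like this (the URL is illustrative of the format, not a real project):

    ```ini
    # subprojects/bob.wrap -- hypothetical example
    [wrap-git]
    url = https://example.com/bob.git
    revision = head

    [provide]
    dependency_names = bob
    ```

    With that in place, a plain dependency('bob') call in meson.build knows where to find the subproject when asked to fall back.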

    Then you need to tell Meson to use the internal one. There is a global option to switch all dependencies to be local, but in this case we want only this dependency to be built and get the remaining ones from the system. Meson has a builtin option for exactly this:

    meson configure builddir -Dforce_fallback_for=bob

    Starting a build would now reconfigure the project to build bob as a subproject instead of using the system copy. Once you are done and want to go back to using system deps, run this command:

    meson configure builddir -Dforce_fallback_for=

    This is all you need to do. That is the main advantage of competently designed tools. They rose tint your world and keep you safe from trouble and pain. Sometimes you can see the blue sky through the tears in your eyes.

    Oh, just one more deadly sting

    If you keep watching, the presenter first asks the audience if this is something they would like. Upon receiving a positive answer he then follows up with this [again quoting directly]:

    So you should all complain to the DOE [presumably US Department of Energy] for not funding the SBIR [presumably some sort of grant or tender] for this.

    Shaming your end users into advocating an authoritarian/fascist government to give large sums of money in a tender that only one for-profit corporation can reasonably win is certainly a plan.

    Instead of working on this kind of a muscle man, you can alternatively do what we in the Meson project did: JFDI. The entire functionality was implemented by maybe 3 to 5 people, some working part time but most being volunteers. The total amount of work it took is probably a fraction of the clerical work needed to deal with all the red tape that comes with a DoE tender process.

    In the interest of full disclosure

    While writing this blog post I discovered a corner case bug in our current implementation. At the time of writing it is only seven hours old, and not particularly beautiful to behold as it has not been fixed yet. And, unfortunately, the only thing I've come to trust is that bugfixes take longer than you would want them to.


      Christian Hergert: Mid-life transitions

      news.movim.eu / PlanetGnome • 3 days ago • 3 minutes

    The past few months have been heavy for many people in the United States, especially families navigating uncertainty about safety, stability, and belonging. My own mixed family has been working through some of those questions, and it has led us to make a significant change.

    Over the course of last year, my request to relocate to France while remaining in my role moved up and down the management chain at Red Hat for months without resolution, ultimately ending in a denial. That process significantly delayed our plans despite providing clear evidence of the risks involved to our family. At the beginning of this year, my wife and I moved forward by applying for long-stay visitor visas for France, a status that does not include work authorization.

    During our in-person visa appointment in Seattle, a shooting involving CBP occurred just a few parking spaces from where we normally park for medical outpatient visits back in Portland. It was covered by the news internationally and you may have read about it. Moments like that have a way of clarifying what matters and how urgently change can feel necessary.

    Our visas were approved quickly, which we’re grateful for. We’ll be spending the next year in France, where my wife has other Tibetan family. I’m looking forward to immersing myself in the language and culture and to taking that responsibility seriously. Learning French in mid-life will be humbling, but I’m ready to give it my full focus.

    This move also means a professional shift. For many years, I’ve dedicated a substantial portion of my time to maintaining and developing key components across the GNOME platform and its surrounding ecosystem. These projects are widely used, including in major Linux distributions and enterprise environments, and they depend on steady, ongoing care.

    For many years, I’ve been putting in more than forty hours each week maintaining and advancing this stack. That level of unpaid or ad-hoc effort isn’t something I can sustain, and my direct involvement going forward will be very limited. Given how widely this software is used in commercial and enterprise environments, long-term stewardship really needs to be backed by funded, dedicated work rather than spare-time contributions.

    If you or your organization depend on this software, now is a good time to get involved. Perhaps by contributing engineering time, supporting other maintainers, or helping fund long-term sustainability.

    The following is a short list of important modules where I’m roughly the sole active maintainer:

    • GtkSourceView – foundation for editors across the GTK eco-system
    • Text Editor – GNOME’s core text editor
    • Ptyxis – Default terminal on Fedora, Debian, Ubuntu, RHEL/CentOS/Alma/Rocky and others
    • libspelling – Necessary bridge between GTK and enchant2 for spellcheck
    • Sysprof – Whole-systems profiler integrating Linux perf, Mesa, GTK, Pango, GLib, WebKit, Mutter, and other statistics collectors
    • Builder – GNOME’s flagship IDE
    • template-glib – Templating and small language runtime for a scriptable GObject Introspection syntax
    • jsonrpc-glib – Provides JSONRPC communication with language servers
    • libpeas – Plugin library providing C/C++/Rust, Lua, Python, and JavaScript integration
    • libdex – Futures, Fibers, and io_uring integration
    • GOM – Data object binding between GObject and SQLite
    • Manuals – Documentation reader for our development platform
    • Foundry – Basically Builder as a command-line program and shared library, used by Manuals and a future Builder (hopefully)
    • d-spy – Introspect D-Bus connections
    • libpanel – Provides IDE widgetry for complex GTK/libadwaita applications
    • libmks – Qemu Mouse-Keyboard-Screen implementation with DMA-BUF integration for GTK

    There are, of course, many other modules I contribute to, but these are the ones most in need of attention. I’m committed to making the transition as smooth as possible and am happy to help onboard new contributors or teams who want to step up.

    My next chapter is about focusing on family and building stability in our lives.


      Allan Day: GNOME Foundation Update, 2026-02-06

      news.movim.eu / PlanetGnome • 3 days ago • 4 minutes

    Welcome to another GNOME Foundation weekly update! FOSDEM happened last week, and we had a lot of activity around the conference in Brussels. We are also extremely busy getting ready for our upcoming audit, so there’s lots to talk about. Let’s get started.

    FOSDEM

    FOSDEM happened in Brussels, Belgium, last weekend, from 31st January to 1st February. There were lots of GNOME community members in attendance, and plenty of activities around the event, including talks and several hackfests. The Foundation was busy with our presence at the conference, plus our own fringe events.

    Board hackfest

    Seven of our nine directors met for an afternoon and a morning prior to FOSDEM proper. Face-to-face hackfests are something that the Board has done at various times previously, and they have always been a very effective way to move forward on big ticket items. This event was no exception, and I was really happy that we were able to make it happen.

    During the event we took the time to review the Foundation’s financials, and to make some detailed plans in a number of key areas. It’s exciting to see some of the initiatives that we’ve been talking about starting to take more shape, and I’m looking forward to sharing more details soon.

    Advisory Board meeting

    The afternoon of Friday 30th January was occupied with a GNOME Foundation Advisory Board meeting. This is a regular occurrence on the day before FOSDEM, and is an important opportunity for the GNOME Foundation Board to meet with partner organizations and supporters.

    Turnout for the meeting was excellent, with Canonical, Google, Red Hat, Endless and PostmarketOS all in attendance. I gave a presentation on how the Foundation is currently performing, which seemed to be well-received. We then had presentations and discussion amongst Advisory Board members.

    I thought that the discussion was useful, and we identified a number of areas of shared interest. One of these was around how partners (companies, projects) can get clear points of contact for technical decision making in GNOME and beyond. Another positive theme was a shared interest in accessibility work, which was great to see.

    We’re hoping to facilitate further conversations on these topics in future, and will be holding our next Advisory Board meeting in the summer prior to GUADEC. If there are any organizations out there that would like to join the Advisory Board, we would love to hear from you.

    Conference stand

    GNOME had a stand during both FOSDEM days, which was really busy. I worked the stand on the Saturday and had great conversations with people who came to say hi. We also sold a lot of t-shirts and hats!

    I’d like to give a huge thank you to Maria Majadas who organized and ran our stand this year. It is incredibly exhausting work and we are so lucky to have Maria in our community. Please say thank you to her!

    We also had plenty of other notable volunteers, including Julian Sparber, Ignacy Kuchciński, and Sri Ramkrishna. Richard Littauer, our previous Interim Executive Director, even took a shift on the stand.

    Social

    On the Saturday night there was a GNOME social event, hosted at a local restaurant. As always it was fantastic to get together with fellow contributors, and we had a good turnout with 40-50 people there.

    Audit preparation

    Moving on from FOSDEM, there has been plenty of other activity at the Foundation in recent weeks. The first of these is preparation for our upcoming audit. I have written a fair bit about this in these previous updates. The audit is a routine exercise, but this is also our first, so we are learning a lot.

    The deadline for us to provide our documentation submission to the auditors is next Tuesday, so everyone on the finance side of the operation has been really busy getting all that ready. Huge thanks to everyone for their extra effort here.

    GUADEC & LAS planning

    Conference planning has been another theme in the past few weeks. For GUADEC, accommodation options have been announced, artwork has been produced, and local information is going up on the website.

    Linux App Summit, which we co-organise with KDE, has been a bit delayed this year, but we have a venue now and are in the process of finalizing the budget. Announcements about the dates and location will hopefully be made quite soon.

    Google verification

    A relatively small task, but a good one to highlight: this week we facilitated (i.e. paid for) the assessment process for GNOME’s integration with Google services. This is an annual process we have to go through in order to keep Evolution Data Server working with Google.

    Infrastructure optimization

    Finally, Bart, along with Andrea, has been doing some work to optimize the resource usage of GNOME infrastructure. If you are using GNOME services you might have noticed some subtle changes as a result of this, like Anubis popping up more frequently.

    That’s it for this week. Thanks for reading; I’ll see you next week!


      Andy Wingo: ahead-of-time wasm gc in wastrel

      news.movim.eu / PlanetGnome • 3 days ago • 3 minutes

    Hello friends! Today, a quick note: the Wastrel ahead-of-time WebAssembly compiler now supports managed memory via garbage collection!

    hello, world

    The quickest demo I have is that you should check out and build wastrel itself:

    git clone https://codeberg.org/andywingo/wastrel
    cd wastrel
    guix shell
    # alternately: sudo apt install guile-3.0 guile-3.0-dev \
    #    pkg-config gcc automake autoconf make
    autoreconf -vif && ./configure
    make -j
    

    Then run a quick check with hello, world:

    $ ./pre-inst-env wastrel examples/simple-string.wat
    Hello, world!
    

    Now try gcbench, a classic GC micro-benchmark:

    $ WASTREL_PRINT_STATS=1 ./pre-inst-env wastrel examples/gcbench.wat
    Garbage Collector Test
     Creating long-lived binary tree of depth 16
     Creating a long-lived array of 500000 doubles
    Creating 33824 trees of depth 4
    	Top-down construction: 10.189 msec
    	Bottom-up construction: 8.629 msec
    Creating 8256 trees of depth 6
    	Top-down construction: 8.075 msec
    	Bottom-up construction: 8.754 msec
    Creating 2052 trees of depth 8
    	Top-down construction: 7.980 msec
    	Bottom-up construction: 8.030 msec
    Creating 512 trees of depth 10
    	Top-down construction: 7.719 msec
    	Bottom-up construction: 9.631 msec
    Creating 128 trees of depth 12
    	Top-down construction: 11.084 msec
    	Bottom-up construction: 9.315 msec
    Creating 32 trees of depth 14
    	Top-down construction: 9.023 msec
    	Bottom-up construction: 20.670 msec
    Creating 8 trees of depth 16
    	Top-down construction: 9.212 msec
    	Bottom-up construction: 9.002 msec
    Completed 32 major collections (0 minor).
    138.673 ms total time (12.603 stopped); 209.372 ms CPU time (83.327 stopped).
    0.368 ms median pause time, 0.512 p95, 0.800 max.
    Heap size is 26.739 MB (max 26.739 MB); peak live data 5.548 MB.
    

    We set WASTREL_PRINT_STATS=1 to get those last 4 lines. So, this is a microbenchmark: it runs for only 138 ms, and the heap is tiny (26.7 MB). It does collect 32 times, which is something.

    is it good?

    I know what you are thinking: OK, it’s a microbenchmark, but can it tell us anything about how Wastrel compares to V8? Well, probably so:

    $ guix shell node time -- \
       time node js-runtime/run.js -- \
         js-runtime/wtf8.wasm examples/gcbench.wasm
    Garbage Collector Test
    [... some output elided ...]
    total_heap_size: 48082944
    [...]
    0.23user 0.03system 0:00.20elapsed 128%CPU (0avgtext+0avgdata 87844maxresident)k
    0inputs+0outputs (0major+13325minor)pagefaults 0swaps
    

    Which is to say, V8 takes more CPU time (230ms vs 209ms) and more wall-clock time (200ms vs 138ms). Also it uses twice as much managed memory (48 MB vs 26.7 MB), and more than that for the total process (88 MB vs 34 MB, not shown).

    improving on v8, really?

    Let’s try with quads, which at least has a larger active heap size. This time we’ll compile a binary and then run it:

    $ ./pre-inst-env wastrel compile -o quads examples/quads.wat
    $ WASTREL_PRINT_STATS=1 guix shell time -- time ./quads 
    Making quad tree of depth 10 (1398101 nodes).
    	construction: 23.274 msec
    Allocating garbage tree of depth 9 (349525 nodes), 60 times, validating live tree each time.
    	allocation loop: 826.310 msec
    	quads test: 860.018 msec
    Completed 26 major collections (0 minor).
    848.825 ms total time (85.533 stopped); 1349.199 ms CPU time (585.936 stopped).
    3.456 ms median pause time, 3.840 p95, 5.888 max.
    Heap size is 133.333 MB (max 133.333 MB); peak live data 82.416 MB.
    1.35user 0.01system 0:00.86elapsed 157%CPU (0avgtext+0avgdata 141496maxresident)k
    0inputs+0outputs (0major+231minor)pagefaults 0swaps
    

    Compare to V8 via node:

    $ guix shell node time -- time node js-runtime/run.js -- js-runtime/wtf8.wasm examples/quads.wasm
    Making quad tree of depth 10 (1398101 nodes).
    	construction: 64.524 msec
    Allocating garbage tree of depth 9 (349525 nodes), 60 times, validating live tree each time.
    	allocation loop: 2288.092 msec
    	quads test: 2394.361 msec
    total_heap_size: 156798976
    [...]
    3.74user 0.24system 0:02.46elapsed 161%CPU (0avgtext+0avgdata 382992maxresident)k
    0inputs+0outputs (0major+87866minor)pagefaults 0swaps
    

    Which is to say, Wastrel is almost three times as fast, while using about a third as much memory: 2460 ms (V8) vs 849 ms (Wastrel), and 383 MB vs 141 MB.

    zowee!

    So, yes, the V8 times include the time to compile the wasm module on the fly. No idea what is going on with tiering, either, but I understand that tiering up is a thing these days; this is node v22.14, released about a year ago, for what that’s worth. Also, there is a V8-specific module to do some impedance-matching with regards to strings; in Wastrel they are WTF-8 byte arrays, whereas in Node they are JS strings. But it’s not a string benchmark, so I doubt that’s a significant factor.

    I think the performance edge comes in having the program ahead-of-time: you can statically allocate type checks, statically allocate object shapes, and the compiler can see through it all. But I don’t really know yet, as I just got everything working this week.

    Wastrel with GC is demo-quality, thus far. If you’re interested in the back-story and the making-of, see my intro to Wastrel article from October, or the FOSDEM talk from last week:

    Slides here, if that’s your thing.

    More to share on this next week, but for now I just wanted to get the word out. Happy hacking and have a nice weekend!


      Matthias Clasen: GTK hackfest, 2026 edition

      news.movim.eu / PlanetGnome • 3 days ago • 3 minutes

    As is by now a tradition, a few of the GTK developers got together in the days before FOSDEM to make plans and work on your favorite toolkit.

    Code

    We released gdk-pixbuf 2.44.5 with glycin-based XPM and XBM loaders, rounding out the glycin transition. Note that the XPM/XBM support will only appear in glycin 2.1. Another reminder: gdk_pixbuf_new_from_xpm_data() was deprecated in gdk-pixbuf 2.44 and should no longer be used, as it does not allow for error handling in case the XPM loader is not available. If you still have XPM assets, please convert them to PNG, and use GResource to embed them into your application if you don’t want to install them separately.

    We also released GTK 4.21.5, in time for the GNOME beta release. The highlights in this snapshot are still more SVG work (including support for SVG filters in CSS) and lots of GSK renderer refactoring. We decided to defer the session saving support, since early adopters found some problems with our APIs; once the main development branch opens for GTK 4.24, we will work on a new iteration and ask for more feedback.

    Discussions

    One topic that we talked about is unstable APIs, but no clear conclusion was reached. Keeping experimental APIs in the same shared object was seen as problematic (not just because of ABI checkers). Making a separate shared library (and a separate namespace, for bindings) might not be easy.

    Still on the topic of APIs, we decided that we want to bump our C runtime requirement to C11 in the next cycle, to take advantage of standard atomics, integer types and booleans. At the moment, C11 is a soft requirement through GLib. We also talked about GLib’s autoptrs, and were saddened by the fact that we still can’t use them without dropping MSVC. The defer proposal for C2y would not really work with how we use automatic cleanup for types, either, so we can’t count on the C standard to save us.

    Mechanics

    We collected some ideas for improving project maintenance. One idea that came up was to look at automating issue tagging, so it is easier for people to pay closer attention to a subset of all open issues and MRs. Having more accurate labels on merge requests would allow people to get better notifications and avoid watching the whole project.

    We also talked about the state of GTK3 and agreed that we want to limit changes in this very mature code base to crash and build fixes: the chance of introducing regressions in code that has long since been frozen is too high.

    Accessibility

    On the accessibility side, we are somewhat worried about the state of AccessKit. The code upstream is maintained, but we haven’t seen movement in the GTK implementation. We still default to the AT-SPI backend on Linux, but AccessKit is used on Windows and macOS (and possibly Android in the future); it would be nice to have consumers of the accessibility stack looking at the code and issues.

    On the AT-SPI side we are still missing proper feature negotiation in the protocol; interfaces are now versioned on D-Bus, but there’s no mechanism to negotiate the supported set of roles or events between toolkits, compositors, and assistive technologies, which makes running newer applications on older OS versions harder.

    We discussed the problem of the ARIA specification being mostly “stringly” typed in the attributes values, and how it impacts our more strongly typed API (especially with bindings); we don’t have a good generic solution, so we will have to figure out possible breaks or deprecations on a case by case basis.

    Finally, we talked about a request by the LibreOffice developers on providing a wrapper for the AT-SPI collection interface; this API is meant to be used as a way to sidestep the array-based design, and perform queries on the accessible objects tree. It can be used to speed up iterating through large and sparse trees, like documents or spreadsheets. It’s also very AT-SPI specific, which makes it hard to write in a platform-neutral way. It should be possible to add it as a platform-specific API, like we did for GtkAtSpiSocket.

    Carlos is working on landing the pointer query API in Mutter, which would address the last remnant of X11 use inside Orca.

    Outlook

    Some of the plans and ideas that we discussed for the next cycle include:

    • Bring back the deferred session saving
    • Add some way for applications to support the AT-SPI collection interface
    • Close some API gaps in GtkDropDown (8003 and 8004)
    • Bring some general purpose APIs from libadwaita back to GTK

    Until next year, ❤


      This Week in GNOME: #235 Integrating Fonts

      news.movim.eu / PlanetGnome • 4 days ago • 4 minutes

    Update on what happened across the GNOME project in the week from January 30 to February 06.

    GNOME Core Apps and Libraries

    GTK

    Cross-platform widget toolkit for creating graphical user interfaces.

    Emmanuele Bassi says

    The GTK developers published the report for the 2026 GTK hackfest on their development blog. Lots of work and plans for the next 12 months:

    • session save/restore
    • toolchain requirements
    • accessibility
    • project maintenance

    and more!

    Glycin

    Sandboxed and extendable image loading and editing.

    Sophie (she/her) reports

    Glycin 2.1.beta has been released. Starting with this version, the JPEG 2000 image format is supported by default. This was made possible by a new JPEG 2000 implementation that is completely written in safe Rust.

    While this image format isn’t in widespread use for images directly, many PDFs contain JPEG 2000 images, since PDF 1.5 and PDF/A-2 support embedded JPEG 2000 images. Therefore, images extracted from PDFs frequently have the JPEG 2000 format.

    GNOME Circle Apps and Libraries

    Resources

    Keep an eye on system resources

    nokyan says

    This week marks the release of Resources 1.10 with support for new hardware, software and improvements all around! Here are some highlights:

    • Added support for AMD NPUs using the amdxdna driver
    • Improved accessibility for screen reader users and keyboard users
    • Vastly improved app detection
    • Significantly cut down CPU usage
    • Searching for multiple process names at once is now possible using the “|” operator in the search field
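Resources itself is written in Rust, but the “|” operator described above is easy to picture: the query is split on “|”, and a process matches if its name matches any of the terms. A minimal sketch in Python (the matching rule and the process names here are illustrative assumptions, not Resources’ actual code):

```python
def matches(query: str, name: str) -> bool:
    """True if the process name matches any '|'-separated search term.

    Sketch only: assumes case-insensitive substring matching,
    which may differ from what Resources really does.
    """
    terms = [t.strip().lower() for t in query.split("|") if t.strip()]
    return any(t in name.lower() for t in terms)

# Hypothetical process list for demonstration.
processes = ["firefox", "gnome-shell", "bash", "nautilus"]
print([p for p in processes if matches("firefox|bash", p)])
# → ['firefox', 'bash']
```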

    In-depth release notes can be found on GitHub.

    Resources is available on Flathub.

    gtk-rs

    Safe bindings to the Rust language for fundamental libraries from the GNOME stack.

    Julian 🍃 announces

    I’ve added another chapter for the gtk4-rs book. It describes how to use gettext to make your app available in other languages: https://gtk-rs.org/gtk4-rs/stable/latest/book/i18n.html

    Third Party Projects

    Ronnie Nissan announces

    This week I released Sitra, an app to install and manage fonts from Google Fonts. It also helps devs integrate fonts into their projects using the Fontsource npm packages and CDN.

    The app is a replacement for the Font Downloader app, which has been abandoned for a while.

    Sitra can be downloaded from Flathub.


    Arnis (kem-a) announces

    AppManager is a desktop utility written in Vala with GTK/Libadwaita that makes installing and uninstalling AppImages on the Linux desktop painless. It supports both SquashFS and DwarFS AppImage formats, features a seamless background auto-update process, and leverages zsync delta updates for efficient bandwidth usage. Double-click any .AppImage to open a macOS-style drag-and-drop window; just drag to install, and AppManager will move the app, wire up desktop entries, and copy icons.

    And of course, it’s available as an AppImage. Get it on GitHub.


    Parabolic

    Download web video and audio.

    Nick reports

    Parabolic V2026.2.0 is here!

    This release contains a complete overhaul of the downloading engine as it was rewritten from C++ to C#. This will provide us with more stable performance and faster iteration of highly requested features (see the long list below!!). The UIs for both Windows and Linux were also ported to C# and got a face lift, providing a smoother and more beautiful downloading experience.

    Besides the rewrite, this release also contains many new features (including quality and subtitle options for playlists - finally!) and plenty of bug fixes with an updated yt-dlp .

    Here’s the full changelog:

    • Parabolic has been rewritten in C# from C++
    • Added arm64 support for Windows
    • Added support for playlist quality options
    • Added support for playlist subtitle options
    • Added support for reversing the download order of a playlist
    • Added support for remembering the previous Download Immediately selection in the add download dialog
    • Added support for showing yt-dlp’s sleeping pauses within download rows
    • Added support for enabling nightly yt-dlp updates within Parabolic
    • Redesigned the application on both platforms for a faster and smoother download experience
    • Removed documentation pages as Parabolic shows in-app documentation when needed
    • Fixed an issue where translator-credits were not properly displayed
    • Fixed an issue where Parabolic crashed when adding large amounts of downloads from a playlist
    • Fixed an issue where Parabolic crashed when validating certain URLs
    • Fixed an issue where Parabolic refused to start due to keyring errors
    • Fixed an issue where Parabolic refused to start due to VC errors
    • Fixed an issue where Parabolic refused to start due to version errors
    • Fixed an issue where opening the about dialog would freeze Parabolic for a few seconds
    • Updated bundled yt-dlp


    Shell Extensions

    subz69 reports

    I just released Pigeon Email Notifier, a new GNOME Shell extension for Gmail and Microsoft email notifications using GNOME Online Accounts. It supports a priority-only mode, plus persistent and sound notifications.


    Miscellaneous

    Arjan reports

    PyGObject 3.55.3 has been released. It’s the third development release (it’s not available on PyPI) in the current GNOME release cycle.

    The main achievements for this development cycle, leading up to GNOME 50, are:

    • Support for do_dispose and do_constructed methods in Python classes. do_constructed is called after an object has been constructed (as a post-init method), and do_dispose is called when a GObject is disposed.
    • Removal of duplicate marshalling code for fields, properties, constants, and signal closures.
    • Removal of old code, most notably pygtkcompat and wrappers for GLib.OptionContext/OptionGroup.
    • Under the hood, toggle references have been replaced by normal references, and PyGObject sinks “floating” objects by default.

    Notable changes for this release include:

    • Type annotations for the GLib and GObject overrides. This makes it easier for pygobject-stubs to generate type hints.
    • Updates to the asyncio support.

    A special thanks to Jamie Gravendeel, Laura Kramolis, and K.G. Hammarlund for test-driving the unstable versions.

    All changes can be found in the Changelog.

    This release can be downloaded from GitLab and the GNOME download server. If you use PyGObject in your project, please give it a spin and see if everything works as expected.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Cassidy James Blaede: ROOST at FOSDEM 2026

      news.movim.eu / PlanetGnome • 4 days ago • 5 minutes


    A few months ago I joined ROOST (Robust Open Online Safety Tools) to build the open source community that will help create, distribute, and maintain common tools and building blocks for online trust and safety. One of the first events I wanted to make sure we attended in order to build that community was of course FOSDEM, the massive annual gathering of open source folks in Brussels, Belgium.

    Luckily for us, the timing aligned nicely with the v1 release of our first major online safety tool, Osprey, as well as its adoption by Bluesky and the Matrix.org Foundation. I wrote and submitted a talk for the FOSDEM crowd and the decentralized communications track, which was accepted. Our COO Anne Bertucio and I flew out to Brussels to meet up with folks, make connections, and learn how our open source tools could best serve open protocols and platforms.

    Brunch with the Christchurch Call Foundation

    Saturday, ROOST co-hosted a brunch with the Christchurch Call Foundation where we invited folks to discuss the intersection of open source and online safety. The event was relatively small, but we engaged in meaningful conversations and came away with several recurring themes. Non-exhaustively, some areas attendees were interested in: novel classifiers for unique challenges like audio recordings and pixel art; how to ethically source and train classifiers; ways to work better together across platforms and protocols.

    Personally I enjoyed meeting folks from Mastodon, GitHub, ATproto, IFTAS, and more in person for the first time, and I look forward to continuing several conversations that were started over coffee and fruit.

    Talk

    Our Sunday morning talk “Stop Reinventing in Isolation” (which you can watch on YouTube or at fosdem.org) filled the room and was really well-received.


    Cassidy and Anne giving a talk. | Photos from @matrix@mastodon.matrix.org

    In it we tackled three major topics: a crash course on what is “trust and safety”; why the field needs an open source approach; and then a bit about Osprey, our self-hostable automated rules engine and investigation tool that started as an internal tool built at Discord.

    Q&A

    We had a few minutes for Q&A after the talk, and the folks in the room spurred some great discussions. If there’s something you’d like to ask that isn’t covered by the talk or this Q&A, feel free to start a discussion! Also note that this gets a bit nerdy; if you’re not interested in the specifics of deploying Osprey, feel free to skip ahead to the Stand section.

    Room

    When using Osprey with the decentralized Matrix protocol, would it be a policy server implementation?

    Yes, in the Matrix model that’s the natural place to handle it. Chat servers are designed to check with the policy server before sending room events to clients, so it’s precisely where you’d want to be able to run automated rules. The Matrix.org Foundation is actively investigating how exactly Osprey can be used with this setup, and already have it deployed in their staging environment for testing.

    Does it make sense to use Osprey for smaller platforms with fewer events than something like Matrix, Bluesky, or Discord?

    This one’s a bit harder to answer, because Osprey is often the sort of tool you don’t “need” until you suddenly and urgently do. That said, it is designed as an in-depth investigation tool, and if that’s not something needed on your platform yet due to the types and volume of events you handle, it could be overkill. You might be better off starting with a moderation/review dashboard like Coop, which we expect to be able to release as v0 in the coming weeks. As your platform scales, you could then explore bringing Osprey in as a complementary tool to handle more automation and deeper investigation.

    Does Osprey support account-level fraud detection?

    Osprey itself is pretty agnostic to the types of events and metadata it handles; it’s more like a piece of plumbing that helps you connect a firehose of events to one end, write rules and expose those events for investigation in the middle, and then connect outgoing actions on the other end. So while it’s been designed for trust and safety uses, we’ve heard interest from platforms using it in a fraud prevention context as well.

    What are the hosting requirements of Osprey, and what do deployments look like?

    While you can spin Osprey up on a laptop for testing and development, it can be a bit beefy. Osprey is made up of four main components: worker, UI, database, and Druid as the analytics database. The worker and UI have low resource requirements, your database (e.g. Postgres) could have moderate requirements, but then Druid is what will have the highest requirements. The requirements will also scale with your total throughput of events being processed, as well as the TTLs you keep in Druid. As for deployments, Discord, Bluesky, and the Matrix.org Foundation have each integrated Osprey into their Kubernetes setups as the components are fairly standard Docker images. Osprey also comes with an optional coordinator, an action distribution and load-balancing service that can aid with horizontal scaling.

    Stand

    This year we were unable to secure a stand (there were already nearly 100 stands in just 5 buildings!), but our friends at Matrix graciously hosted us for several hours at their stand near the decentralized communications track room so we could follow up with folks after our talk. We blew through our shiny sticker supply as well as our 3D printed ROOST keychains (which I printed myself at home!) in just one afternoon. We’ll have to bring more to future FOSDEMs!

    Stickers

    When I handed people one of our hexagon stickers the reaction was usually some form of, “ooh, shiny!” but my favorite was when someone essentially said, “Oh, you all actually know open source!” That made me proud, at least. :)

    Interesting Talks

    Lastly, I always like to shout out interesting talks I attended or caught on video later so others can enjoy them on their own time. I recommend checking out: