Martin Pitt: Revisiting Google Cloud Performance for KVM-based CI
news.movim.eu / PlanetGnome • 13 February 2026
This Week in GNOME: #236 New Library
news.movim.eu / PlanetGnome • 13 February 2026 • 3 minutes
- Now the app runs on any Linux, yes that’s right, even releases as old as Debian Bookworm or Bullseye, and of course Ubuntu LTS. Big thanks to the AppImage community devs who made it possible
- Added grid view in app list
- GitHub token support to significantly raise the rate limit for update requests
- and many more …
- Browser Switcher — one-click default browser switching from the GNOME Shell panel. Auto-detects installed browsers, zero configuration, fully async. Useful when you need separate browsers for work and personal SSO. https://extensions.gnome.org/extension/8836/browser-switcher/ https://github.com/totoshko88/browser-switcher
- gp-gnome — GlobalProtect VPN integration for GNOME Shell. System tray indicator with connect/disconnect, MFA support, gateway selection, real-time status monitoring, and HIP resubmission. Designed for the official Palo Alto Networks Linux CLI (PanGPLinux). https://extensions.gnome.org/extension/8899/gp-gnome/ https://github.com/totoshko88/gp-gnome
Olav Vitters: GUADEC 2026 accommodation
news.movim.eu / PlanetGnome • 12 February 2026
Andy Wingo: free trade and the left
news.movim.eu / PlanetGnome • 11 February 2026 • 7 minutes
Asman Malika: Career Opportunities: What This Internship Is Teaching Me About the Future
news.movim.eu / PlanetGnome • 9 February 2026 • 2 minutes
- Collaborating with experienced maintainers
- Learning large-project workflows
- Becoming known within a technical community
- Developing credibility through consistent contributions
- Building new features
- Large, existing codebases
- Code review and iteration cycles
- Debugging build failures and integration issues
- Writing clearer documentation and commit messages
- Communicating technical progress
- What kind of software I want to help build
- What communities I want to contribute to
- How accessible and user-focused tools can be
- How I can support future newcomers the way my GNOME mentors supported me
Andy Wingo: six thoughts on generating c
news.movim.eu / PlanetGnome • 9 February 2026 • 8 minutes
Jussi Pakkanen: C and C++ dependencies, don't dream it, be it!
news.movim.eu / PlanetGnome • 7 February 2026 • 3 minutes
Christian Hergert: Mid-life transitions
news.movim.eu / PlanetGnome • 6 February 2026 • 3 minutes
- GtkSourceView – foundation for editors across the GTK ecosystem
- Text Editor – GNOME’s core text editor
- Ptyxis – Default terminal on Fedora, Debian, Ubuntu, RHEL/CentOS/Alma/Rocky and others
- libspelling – Necessary bridge between GTK and enchant2 for spellcheck
- Sysprof – Whole-systems profiler integrating Linux perf, Mesa, GTK, Pango, GLib, WebKit, Mutter, and other statistics collectors
- Builder – GNOME’s flagship IDE
- template-glib – Templating and small language runtime for a scriptable GObject Introspection syntax
- jsonrpc-glib – Provides JSONRPC communication with language servers
- libpeas – Plugin library providing C/C++/Rust, Lua, Python, and JavaScript integration
- libdex – Futures, Fibers, and io_uring integration
- GOM – Data object binding between GObject and SQLite
- Manuals – Documentation reader for our development platform
- Foundry – Basically Builder as a command-line program and shared library, used by Manuals and a future Builder (hopefully)
- d-spy – Introspect D-Bus connections
- libpanel – Provides IDE widgetry for complex GTK/libadwaita applications
- libmks – Qemu Mouse-Keyboard-Screen implementation with DMA-BUF integration for GTK
Allan Day: GNOME Foundation Update, 2026-02-06
news.movim.eu / PlanetGnome • 6 February 2026 • 4 minutes
Update on what happened across the GNOME project in the week from February 06 to February 13.
Third Party Projects
Alexander Vanhee says
This week, the Bazaar app store received two major new features. The first is the new Library page, which combines the Installed page, Update dialog, and Transaction sidebar into a single view. It should make managing installed apps much more intuitive.
The second feature is support for user scope Flatpaks. Flatpaks installed in user scope are now listed in the Installed Apps list, where you can view or uninstall them just like other apps. Installing new user-scope Flatpaks is still not possible in the Flatpak version of the app due to an unresolved issue.
Install the app via Flathub
Arnis (kem-a) reports
AppManager v3.2.0 just got released
AppManager is a GTK/Libadwaita desktop utility, written in Vala, that makes installing and uninstalling AppImages on the Linux desktop painless. It supports both SquashFS and DwarFS AppImage formats, features a seamless background auto-update process, and leverages zsync delta updates for efficient bandwidth usage. Double-click any .AppImage to open a macOS-style drag-and-drop window; just drag to install and AppManager will move the app, wire up desktop entries, and copy icons.
Since last week’s release, many suggestions and feature requests were implemented and bugs fixed. Here are some of the highlights:
Hit your in-app update button or get it on GitHub
Anton Isaiev reports
I’d like to introduce RustConn, a modern connection manager for Linux with a GTK4/Wayland-native interface. Manage SSH, RDP, VNC, SPICE, Telnet, and Zero Trust connections from a single application. All core protocols use embedded Rust implementations — no external dependencies required. Supports import from Remmina, Asbru-CM, SSH config, Ansible, Royal TS, and MobaXterm. Credentials are stored securely via KeePassXC, GNOME Keyring, Bitwarden CLI, or 1Password CLI. Available on Flathub, Snap, AppImage, and OBS (deb/rpm).
https://github.com/totoshko88/RustConn https://flathub.org/apps/io.github.totoshko88.RustConn
bjawebos reports
Version 0.2.1 of Filmbook was released this week. This app helps you see which film you have loaded into which camera. That means you can concentrate fully on taking photos. The view of your exposed films has been optimised and Nido has contributed a new icon. I was particularly pleased about that. You can install Filmbook via Flathub: https://flathub.org/de/apps/page.codeberg.bjawebos.Filmbook
Shell Extensions
Anton Isaiev says
I released two GNOME Shell extensions:
Miscellaneous
GNOME OS ↗
The GNOME operating system, development and testing platform
Ada Magicat ❤️🧡🤍🩷💜 reports
Valentin also created an extension for patented codecs that cannot be shipped with the base system. With the extension enabled, the system video player can play formats that it couldn’t before (like mp4/h264), while Nautilus will show thumbnails for those files. It also enables hardware acceleration for screen recordings with gnome-shell.
Enable the feature by running:
updatectl enable codecs-extra --now
Ada Magicat ❤️🧡🤍🩷💜 reports
Valentin changed the way we handle system configuration in /etc, moving it to systemd-confext. This makes the configuration less fragile and easier to update - a huge improvement to the atomicity of GNOME OS and an important step towards our goal of a stable system everyone can use.
That’s all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
One of the things that I appreciate at a GUADEC (if available) is a common accommodation. Loads of attendees appreciated the shared accommodation in Vilanova i la Geltrú, Spain (GUADEC 2006). For GUADEC 2026, Deepesha announced one recommended accommodation, a students’ residence. GUADEC 2026 is at the same place as GUADEC 2012, meaning: A Coruña, Spain. I didn’t go to the 2012 one, though I heard it also had shared accommodation. For those wondering where to stay, I suggest the recommended one.
I came of age an age ago, in the end of history: the decade of NAFTA, the WTO, the battle of Seattle. My people hated it all, interpreting the global distribution of production as a race to the bottom leaving post-industrial craters in its wake. Our slogans were to buy local, to the extent that participation in capitalism was necessary; to support local businesses, local products, local communities. What were trade barriers to us? One should not need goods from far away in the first place; autarky is the bestarky.
Now, though, the vibe has shifted. If you told me then that most leftists would be aghast at the cancelling of NAFTA and subsequently relieved when it was re-instated as the USMCA, I think I would have used that most 90s of insults: sellouts . It has been a perplexing decade in many ways.
The last shoe to drop for me was when I realized that, given the seriousness of the climate crisis, it is now a moral imperative everywhere to import as many solar panels as cheaply as possible, from wherever they might be made. Solar panels will liberate us from fossil fuel consumption. China makes them cheapest. Neither geopolitical (“please believe China is our enemy”) nor antiglobalist (“home-grown inverters have better vibes”) arguments have any purchase on this reality. In this context, any trade barrier on Chinese solar panels means more misery for our grandchildren, as they suffer under global warming that we caused. Every additional half-kilowatt panel we install is that much less dinosaur that other people will choose to burn.
But if trade can be progressive, then when is it so, and when not? I know that the market faithful among my readers will see the question as nonsense, but that is not my home church. We on the left pride ourselves in our critical capacities, in our ability to challenge common understandings. However, I wonder about the extent to which my part of the left has calcified and become brittle, recycling old thoughts that we thunk about new situations that rhyme with things we remember, in which we spare ourselves the effort of engagement out of confidence in our past conclusions, but in doing so miss the mark. What is the real leftist take on the EU-Mercosur agreement, anyway?
This question has consumed me for some weeks. To help me think through it, I have enlisted four sources: Quinn Slobodian’s Globalists (2018), an intellectual history of neoliberalism, from the first world war to the 1990s; Marc-William Palen’s Pax Economica (2024), a history of arguments for free trade “from the left”, from the 1840s to the beginnings of Slobodian’s book; Starhawk’s Webs of power (2002), a moment-in-time, first-person-plural account of the antiglobalization movement between Seattle and 9/11; and finally Florence Reece’s Which side are you on? (1931), as recorded by the Weavers, a haunting refrain that dogs me all through this investigation, reminding me that the question is visceral and not abstract.
stave 1: mechanism
Before diving in though, I should admit that I don’t actually understand the basics, and it is not for a lack of trying.
Let’s work through some thought experiments together. Let’s imagine you are in France and want to buy nails. To produce nails, a capitalist has to purchase machines and buildings and steel, has to pay the workers to operate the machines, then finally the nails have to get boxed and shipped to, you know, me. I could buy French-made nails, apparently there is one clouterie left. Even assuming that Creil is a hotbed of industrial development and that its network of suppliers and machine operators is such that they can produce nails very efficiently, the wage in France is higher than the wage in, say, China, or Vietnam, or Turkey. The cost of the labor that goes into each nail is more.
So if I have 1000€ that I would like to devote to making a thing in France, I might prefer to obtain my nails from Turkey for 50€ rather than from France for 100€, because it leaves me more left over with which I can do my thing.
Perhaps France sees this as a travesty; in response, France imposes a tariff of 100% on imported nails. In that way, it might be the same to me as a consumer whether the nails were from Turkey or France. But this imposes a cost on me; I have less left over with which to do my thing. In exchange, the French clouterie gets to keep on going, along with the workers that work there, their families and the community around the factory, the suppliers of the factory, and so on.
But do I actually care that my nails are made in France? Should the country care that it has a capacity to make nails? When I was 25, I thought so, that it would be better if everything were made locally. Perhaps that was a distortion from having grown up in the US, in which we consider goods originating between either ocean as “ours”; smaller polities cannot afford that conceit. There may or may not be a reason that France might want nails to be made locally; it is a political question, but one that is not without tradeoffs. If we always choose the local option, we will ultimately do worse overall than a country that is happy to import; in the limit case, some goods like coffee would be unobtainable, with a price floor established by the cost to artificially re-create in France the mountain climate of Perú.
Let us accept, however, the argument that free trade improves overall efficiency, yielding productivity gains that ultimately get spread around society, making everyone’s lives better. We on the left are used to being more critical than constructive, but we do ourselves a disservice if we let our distaste of capitalism prevent us from appreciating how it works. Reading Marx’s Capital , for example, one can’t help but appreciate the awe which he has towards the dynamism of capitalism, apart from analysis and critique. In that spirit of dynamic awe, therefore, let us assume that free trade leads to higher productivity. How does it do this?
Here, I think, we must recognize a kind of ruthless cruelty. The one clouterie survives because of local demand; it can’t compete on the global market. It will fold eventually. Its workers will be laid off, its factories vacated, its town... well Creil is on the outskirts of Paris, it will live on, but less as a center in and of itself.
The nail factory will try to hang on for a while, though. It will start by putting downward pressure on wages and extracting other concessions from workers, in the name of economic necessity. Otherwise, the boss says, we move the factory to Tunisia. This also operates at larger levels, with the chamber of commerce (MEDEF) arguing for regulatory “simplifications”, new kinds of contracts with fewer protections, and so on. To the extent that a factory’s product is a commodity – whether the nails are Malaysian or French doesn’t matter – to that extent, free trade is indeed a race to the bottom, allowing mobile capital to play different jurisdictions off each other. The same goes for quality standards, environmental standards, and similar ways in which countries might choose to internalize what are elsewhere externalities.
I am a sentimental dude and have always found this sort of situation to be quite sad. Places that are on the up and up have a yuppie, rootless vibe, while those that are in decline have roots but no branches. Economics washes its hands of vibes, though; if it’s not measurable in euros, it doesn’t exist. What is the value of 19th-century India’s textile industry against the low price of Manchester-milled cloth? Can we put a number on it? We did, of course; zero is a number after all.
Finally, before I close today’s missive, a note on distribution. Most macro-economics is indifferent as regards distribution; gross domestic product is assessed relative to a country, not its parts. But if nails are producible for 50€ in Turkey, shipping included, that doesn’t mean they get sold at that price in France; if they sell for 70€, the capitalist pockets the change. That’s the base of surplus value: the owner pays the input costs, and the input cost of a worker is what the worker needs to make a living (rent, food, etc), but that cost is less than the value of what the worker produces. That productivity goes up doesn’t necessarily mean that the gains make it to the producers; that requires a finer analysis.
to be continued
For me, the fundamental economic argument for free trade is not unambiguously positive. Yes, lower trade barriers should leave us all with more left over with which we can do cool things. But the mechanism is cruel, the benefits accrue unequally, and there is value in producing communities that is not captured in prices.
In my next missive, we go back to the 19th century to see what Marx and Cobden had to say about the topic. Until then, happy trading!
Before Outreachy, when I thought about career opportunities, I mostly thought about job openings, applications, and interviews. Opportunities felt like something you wait for, or hope to be selected for.
This internship has changed how I see that completely.
I’m learning that opportunities are often created through contribution, visibility, and community, not just applications.
Opportunities Look Different in Open Source
Working with GNOME has shown me that contributing to open source is not just about writing code, it’s about building a public track record. Every merge request, every review cycle, every improvement becomes part of a visible body of work.
Through my work on Papers: implementing manual signature features, fixing issues, contributing to the Poppler codebase, and now working on digital signatures, I’m not just completing tasks. I’m building real-world experience in a production codebase used by actual users.
That kind of experience creates opportunities that don’t always show up on job boards:
Skills That Expand My Career Options
This internship is also expanding what I feel qualified to do. I’m gaining experience with:
These are skills that apply across many roles, not just one job title. They open doors to remote collaboration, open-source roles, and product-focused engineering work.
Career Is Bigger Than Employment
One mindset shift for me is that career is no longer just about “getting hired.” It’s also about impact and direction.
I now think more about:
Open source makes career feel less like a ladder and more like a network.
Creating Opportunities for Others
Coming from a non-traditional path into tech, I’m especially aware of how powerful access and guidance can be. Programs like Outreachy don’t just create opportunities for individuals, they multiply opportunities through community.
As I grow, I want to contribute not only through code, but also through sharing knowledge, documenting processes, and encouraging others who feel unsure about entering open source.
Looking Ahead
I don’t have every step mapped out yet. But I now have something better: direction and momentum.
I want to continue contributing to open source, deepen my technical skills, and work on tools that people actually use. Outreachy and GNOME have shown me that opportunities often come from showing up consistently and contributing thoughtfully.
That’s the path I plan to keep following.
So I work in compilers, which means that I write programs that translate programs to programs. Sometimes you will want to target a language at a higher level than just, like, assembler, and oftentimes C is that language. Generating C is less fraught than writing C by hand, as the generator can often avoid the undefined-behavior pitfalls that one has to be so careful about when writing C by hand. Still, I have found some patterns that help me get good results.
Today’s note is a quick summary of things that work for me. I won’t be so vain as to call them “best practices”, but they are my practices, and you can have them too if you like.
static inline functions enable data abstraction
When I learned C, in the early days of GStreamer (oh bless its heart it still has the same web page!), we used lots of preprocessor macros. Mostly we got the message over time that many macro uses should have been inline functions; macros are for token-pasting and generating names, not for data access or other implementation.
But what I did not appreciate until much later was that always-inline functions remove any possible performance penalty for data abstractions. For example, in Wastrel , I can describe a bounded range of WebAssembly memory via a memory struct, and an access to that memory in another struct:
struct memory { uintptr_t base; uint64_t size; };
struct access { uint32_t addr; uint32_t len; };
And then if I want a writable pointer to that memory, I can do so:
#define static_inline \
  static inline __attribute__((always_inline))

static_inline void* write_ptr(struct memory m, struct access a) {
  BOUNDS_CHECK(m, a);
  char *base = __builtin_assume_aligned((char *) m.base, 4096);
  return (void *) (base + a.addr);
}
(Wastrel usually omits any code for BOUNDS_CHECK, and just relies on memory being mapped into a PROT_NONE region of an appropriate size. We use a macro there because if the bounds check fails and kills the process, it’s nice to be able to use __FILE__ and __LINE__.)
Regardless of whether explicit bounds checks are enabled, the static_inline attribute ensures that the abstraction cost is entirely burned away; and in the case where bounds checks are elided, we don’t need the size of the memory or the len of the access, so they won’t be allocated at all.
If write_ptr wasn’t static_inline , I would be a little worried that somewhere one of these struct values would get passed through memory. This is mostly a concern with functions that return structs by value; whereas in e.g. AArch64, returning a struct memory would use the same registers that a call to void (*)(struct memory) would use for the argument, the SYS-V x64 ABI only allocates two general-purpose registers to be used for return values. I would mostly prefer to not think about this flavor of bottleneck, and that is what static inline functions do for me.
avoid implicit integer conversions
C has an odd set of default integer conversions, for example promoting uint8_t to signed int, and also has weird boundary conditions for signed integers. When generating C, we should probably sidestep these rules and instead be explicit: define static inline u8_to_u32, s16_to_s32, etc. conversion functions, and turn on -Wconversion.
Using static inline cast functions also allows the generated code to assert that operands are of a particular type. Ideally, you end up in a situation where all casts are in your helper functions, and no cast is in generated code.
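As a concrete sketch, such helpers might look like the following (the u8_to_u32-style names follow the article's suggestion; the assert on the narrowing cast is my own illustrative choice, not something the article prescribes):

```c
#include <assert.h>
#include <stdint.h>

/* Widening conversions: always safe, made explicit so that -Wconversion
 * stays quiet and the generated code documents its intent. */
static inline uint32_t u8_to_u32(uint8_t v)  { return (uint32_t) v; }
static inline int32_t  s16_to_s32(int16_t v) { return (int32_t) v; }

/* Narrowing conversion: assert that the value fits, so a bad
 * residualized term traps in debug builds instead of silently
 * wrapping. */
static inline uint8_t u32_to_u8(uint32_t v) {
  assert(v <= UINT8_MAX);
  return (uint8_t) v;
}
```

With helpers like these, the generator emits u8_to_u32(x) instead of a bare cast, and any cast appearing outside the helpers becomes a red flag in review.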
wrap raw pointers and integers with intent
Whippet is a garbage collector written in C. A garbage collector cuts across all data abstractions: objects are sometimes viewed as absolute addresses, or ranges in a paged space, or offsets from the beginning of an aligned region, and so on. If you represent all of these concepts with size_t or uintptr_t or whatever, you’re going to have a bad time. So Whippet has struct gc_ref, struct gc_edge, and the like: single-member structs whose purpose is to avoid confusion by partitioning sets of applicable operations. A gc_edge_address call will never apply to a struct gc_ref, and so on for other types and operations.
This is a great pattern for hand-written code, but it’s particularly powerful for compilers: you will often end up compiling a term of a known type or kind and you would like to avoid mistakes in the residualized C.
For example, when compiling WebAssembly, consider struct.set’s operational semantics: the textual rendering states, “Assert: Due to validation, val is some ref.struct structaddr.” Wouldn’t it be nice if this assertion could translate to C? Well in this case it can: with single-inheritance subtyping (as WebAssembly has), you can make a forest of pointer subtypes:
typedef struct anyref { uintptr_t value; } anyref;
typedef struct eqref { anyref p; } eqref;
typedef struct i31ref { eqref p; } i31ref;
typedef struct arrayref { eqref p; } arrayref;
typedef struct structref { eqref p; } structref;
So for a (type $type_0 (struct (mut f64))), I might generate:
typedef struct type_0ref { structref p; } type_0ref;
Then if I generate a field setter for $type_0 , I make it take a type_0ref :
static inline void
type_0_set_field_0(type_0ref obj, double val) {
...
}
In this way the types carry through from source to target language. There is a similar type forest for the actual object representations:
typedef struct wasm_any { uintptr_t type_tag; } wasm_any;
typedef struct wasm_struct { wasm_any p; } wasm_struct;
typedef struct type_0 { wasm_struct p; double field_0; } type_0;
...
And we generate little cast routines to go back and forth between type_0ref and type_0* as needed. There is no overhead because all routines are static inline, and we get pointer subtyping for free: if a struct.set $type_0 0 instruction is passed a subtype of $type_0 , the compiler can generate an upcast that type-checks.
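Those little cast routines might look something like this minimal, self-contained sketch (the helper names here are mine, for illustration, not taken from Wastrel):

```c
#include <assert.h>
#include <stdint.h>

/* The reference-type forest from above, abbreviated. */
typedef struct anyref    { uintptr_t value; } anyref;
typedef struct eqref     { anyref p; } eqref;
typedef struct structref { eqref p; } structref;
typedef struct type_0ref { structref p; } type_0ref;

/* The object-representation forest, abbreviated. */
typedef struct wasm_any    { uintptr_t type_tag; } wasm_any;
typedef struct wasm_struct { wasm_any p; } wasm_struct;
typedef struct type_0      { wasm_struct p; double field_0; } type_0;

/* Cast routines between type_0ref and type_0*; all static inline, so
 * there is no overhead. */
static inline type_0 *type_0ref_deref(type_0ref r) {
  return (type_0 *) r.p.p.p.value;
}
static inline type_0ref type_0_to_ref(type_0 *obj) {
  return (type_0ref) { { { { (uintptr_t) obj } } } };
}

/* Upcast: any type_0ref is a structref. With single-member structs,
 * this is a no-op at runtime. */
static inline structref type_0ref_upcast(type_0ref r) { return r.p; }
```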
fear not memcpy
In WebAssembly, accesses to linear memory are not necessarily aligned, so we can’t just cast an address to (say) int32_t* and dereference. Instead we memcpy(&i32, addr, sizeof(int32_t)) , and trust the compiler to just emit an unaligned load if it can (and it can). No need for more words here!
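A sketch of what such accessors can look like (the helper names are hypothetical; the article doesn't show its own):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Unaligned 32-bit load/store from linear memory via memcpy. The
 * compiler lowers these to plain (unaligned) loads and stores on
 * targets that allow them; there is no actual function call. */
static inline int32_t i32_load(const uint8_t *addr) {
  int32_t v;
  memcpy(&v, addr, sizeof v);
  return v;
}

static inline void i32_store(uint8_t *addr, int32_t v) {
  memcpy(addr, &v, sizeof v);
}
```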
for ABI and tail calls, perform manual register allocation
So, GCC finally has __attribute__((musttail)): praise be. However, when compiling WebAssembly, it could be that you end up compiling a function with, like, 30 arguments, or 30 return values; I don’t trust a C compiler to reliably shuffle between different stack argument needs at tail calls to or from such a function. It could even refuse to compile a file if it can’t meet its musttail obligations; not a good characteristic for a target language.
Really you would like it if all function parameters were allocated to registers. You can ensure this is the case if, say, you only pass the first n values in registers, and then pass the rest in global variables. You don’t need to pass them on a stack, because you can make the callee load them back to locals as part of the prologue.
What’s fun about this is that it also neatly enables multiple return values when compiling to C: simply go through the set of function types used in your program, allocate enough global variables of the right types to store all return values, and make a function epilogue store any “excess” return values—those beyond the first return value, if any—in global variables, and have callers reload those values right after calls.
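Here is a minimal sketch of this convention (the spill-array names, their sizes, and the cutoff of two register arguments are all illustrative assumptions; a real generator would derive them from the module's function types):

```c
#include <assert.h>
#include <stdint.h>

/* Globals for values beyond the first couple of register slots. Sized
 * arbitrarily here; a generator would size them to the widest function
 * type in the program. */
static uint32_t arg_spill[8];  /* excess arguments */
static uint32_t ret_spill[8];  /* excess return values */

/* A "three arguments, two results" function: the third argument and
 * the second result travel through the spill arrays. */
static uint32_t sum_and_diff(uint32_t a, uint32_t b) {
  uint32_t c = arg_spill[0];  /* prologue: reload excess arguments */
  ret_spill[0] = a - b;       /* epilogue: store excess results */
  return a + b + c;           /* first result uses the normal slot */
}
```

A caller stores excess arguments just before the call and reloads excess results right after it; because nothing ever goes on the C stack, musttail calls between such functions never need stack-argument shuffling.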
what’s not to like
Generating C is a local optimum: you get the industrial-strength instruction selection and register allocation of GCC or Clang, you don’t have to implement many peephole-style optimizations, and you get to link to possibly-inlinable C runtime routines. It’s hard to improve over this design point in a marginal way.
There are drawbacks, of course. As a Schemer, my largest source of annoyance is that I don’t have control of the stack: I don’t know how much stack a given function will need, nor can I extend the stack of my program in any reasonable way. I can’t iterate the stack to precisely enumerate embedded pointers (but perhaps that’s fine). I certainly can’t slice a stack to capture a delimited continuation.
The other major irritation is about side tables: one would like to be able to implement so-called zero-cost exceptions, but without support from the compiler and toolchain, it’s impossible.
And finally, source-level debugging is gnarly. You would like to be able to embed DWARF information corresponding to the code you residualize; I don’t know how to do that when generating C.
(Why not Rust, you ask? Of course you are asking that. For what it is worth, I have found that lifetimes are a frontend issue; if I had a source language with explicit lifetimes, I would consider producing Rust, as I could machine-check that the output has the same guarantees as the input. Likewise if I were using a Rust standard library. But if you are compiling from a language without fancy lifetimes, I don’t know what you would get from Rust: fewer implicit conversions, yes, but less mature tail call support, longer compile times... it’s a wash, I think.)
Oh well. Nothing is perfect, and it’s best to go into things with your eyes wide open. If you got down to here, I hope these notes help you in your generations. For me, once my generated C type-checked, it worked: very little debugging has been necessary. Hacking is not always like this, but I’ll take it when it comes. Until next time, happy hacking!
Bill Hoffman, the original creator of the CMake language, gave a presentation at CppCon. At approximately 49 minutes in, he starts talking about future plans for dependency management. He says, and I now quote him directly, that "in this future I envision", one should be able to do something like the following (paraphrasing).
Your project has dependencies A, B and C. Typically you get them from "the system" or a package manager. But then you'd need to develop one of the deps as well. So it would be nice if you could somehow download, say, A, build it as part of your own project and, once you are done, switch back to the system one.
Well, Mr. Hoffman, do I have wonderful news for you! You don't need to treasure these sensual daydreams any more. This so-called "future" you "envision" is not only the present, but in fact the ancient past. This method of dependency management has existed in Meson for so long I don't even remember when it got added. Something like over five years at least.
How would you use such a wild and an untamed thing?
Let's assume you have a Meson project that is using some dependency called bob. The current build is using it from the system (typically via pkg-config, but the exact method is irrelevant). In order to build the source natively, first you need to obtain it. Assuming it is available in WrapDB, all you need to do is run this command:
meson wrap install bob
If it is not, then you need to do some more work. You can even tell Meson to check out the project's Git repo and build against current trunk if you so prefer. See documentation for details.
Then you need to tell Meson to use the internal one. There is a global option to switch all dependencies to be local, but in this case we want only this dependency to be built and get the remaining ones from the system. Meson has a builtin option for exactly this:
meson configure builddir -Dforce_fallback_for=bob
Starting a build would now reconfigure the project to use the internal dependency. Once you are done and want to go back to using system deps, run this command:
meson configure builddir -Dforce_fallback_for=
This is all you need to do. That is the main advantage of competently designed tools. They rose tint your world and keep you safe from trouble and pain. Sometimes you can see the blue sky through the tears in your eyes.
Oh, just one more deadly sting
If you keep watching the presenter first asks the audience if this is something they would like. Upon receiving a positive answer he then follows up with this [again quoting directly]:
So you should all complain to the DOE [presumably US Department of Energy] for not funding the SBIR [presumably some sort of grant or tender] for this.
Shaming your end users into advocating an authoritarian/fascist government to give large sums of money in a tender that only one for-profit corporation can reasonably win is certainly a plan.
Instead of working on this kind of a muscle man, you can alternatively do what we in the Meson project did: JFDI. The entire functionality was implemented by maybe 3 to 5 people, some working part time but most being volunteers. The total amount of work it took is probably a fraction of the clerical work needed to deal with all the red tape that comes with a DoE tender process.
In the interest of full disclosure
While writing this blog post I discovered a corner case bug in our current implementation. At the time of writing it is only seven hours old, and not particularly beautiful to behold as it has not been fixed yet. And, unfortunately, the only thing I've come to trust is that bugfixes take longer than you would want them to.
The past few months have been heavy for many people in the United States, especially families navigating uncertainty about safety, stability, and belonging. My own mixed family has been working through some of those questions, and it has led us to make a significant change.
Over the course of last year, my request to relocate to France while remaining in my role moved up and down the management chain at Red Hat for months without resolution, ultimately ending in a denial. That process significantly delayed our plans, despite the clear evidence we provided of the risks to our family. At the beginning of this year, my wife and I moved forward by applying for long-stay visitor visas for France, a status that does not include work authorization.
During our in-person visa appointment in Seattle, a shooting involving CBP occurred just a few parking spaces from where we normally park for medical outpatient visits back in Portland. It was covered by the news internationally and you may have read about it. Moments like that have a way of clarifying what matters and how urgently change can feel necessary.
Our visas were approved quickly, which we’re grateful for. We’ll be spending the next year in France, where my wife has other Tibetan family. I’m looking forward to immersing myself in the language and culture and to taking that responsibility seriously. Learning French in mid-life will be humbling, but I’m ready to give it my full focus.
This move also means a professional shift. For many years, I’ve dedicated a substantial portion of my time to maintaining and developing key components across the GNOME platform and its surrounding ecosystem. These projects are widely used, including in major Linux distributions and enterprise environments, and they depend on steady, ongoing care.
For many years, I’ve been putting in more than forty hours each week maintaining and advancing this stack. That level of unpaid or ad-hoc effort isn’t something I can sustain, and my direct involvement going forward will be very limited. Given how widely this software is used in commercial and enterprise environments, long-term stewardship really needs to be backed by funded, dedicated work rather than spare-time contributions.
If you or your organization depend on this software, now is a good time to get involved. Perhaps by contributing engineering time, supporting other maintainers, or helping fund long-term sustainability.
The following is a short list of important modules where I’m roughly the sole active maintainer:
There are, of course, many other modules I contribute to, but these are the ones most in need of attention. I’m committed to making the transition as smooth as possible and am happy to help onboard new contributors or teams who want to step up.
My next chapter is about focusing on family and building stability in our lives.
Welcome to another GNOME Foundation weekly update! FOSDEM happened last week, and we had a lot of activity around the conference in Brussels. We are also extremely busy getting ready for our upcoming audit, so there’s lots to talk about. Let’s get started.
FOSDEM
FOSDEM happened in Brussels, Belgium, last weekend, from 31st January to 1st February. There were lots of GNOME community members in attendance, and plenty of activities around the event, including talks and several hackfests. The Foundation was busy with our presence at the conference, plus our own fringe events.
Board hackfest
Seven of our nine directors met for an afternoon and a morning prior to FOSDEM proper. Face-to-face hackfests are something that the Board has done at various times previously, and have always been a very effective way to move forward on big-ticket items. This event was no exception, and I was really happy that we were able to make it happen.
During the event we took the time to review the Foundation’s financials, and to make some detailed plans in a number of key areas. It’s exciting to see some of the initiatives that we’ve been talking about starting to take more shape, and I’m looking forward to sharing more details soon.
Advisory Board meeting
The afternoon of Friday 30th January was occupied with a GNOME Foundation Advisory Board meeting. This is a regular occurrence on the day before FOSDEM, and is an important opportunity for the GNOME Foundation Board to meet with partner organizations and supporters.
Turnout for the meeting was excellent, with Canonical, Google, Red Hat, Endless and PostmarketOS all in attendance. I gave a presentation on how the Foundation is currently performing, which seemed to be well-received. We then had presentations and discussion amongst Advisory Board members.
I thought that the discussion was useful, and we identified a number of areas of shared interest. One of these was around how partners (companies, projects) can get clear points of contact for technical decision making in GNOME and beyond. Another positive theme was a shared interest in accessibility work, which was great to see.
We’re hoping to facilitate further conversations on these topics in future, and will be holding our next Advisory Board meeting in the summer prior to GUADEC. If there are any organizations out there that would like to join the Advisory Board, we would love to hear from you.
Conference stand
GNOME had a stand during both FOSDEM days, which was really busy. I worked the stand on the Saturday and had great conversations with people who came to say hi. We also sold a lot of t-shirts and hats!
I’d like to give a huge thank you to Maria Majadas who organized and ran our stand this year. It is incredibly exhausting work and we are so lucky to have Maria in our community. Please say thank you to her!
We also had plenty of other notable volunteers, including Julian Sparber, Ignacy Kuchciński, and Sri Ramkrishna. Richard Litteaur, our previous Interim Executive Director, even took a shift on the stand.
Social
On the Saturday night there was a GNOME social event, hosted at a local restaurant. As always it was fantastic to get together with fellow contributors, and we had a good turnout with 40-50 people there.
Audit preparation
Moving on from FOSDEM, there has been plenty of other activity at the Foundation in recent weeks. The first of these is preparation for our upcoming audit. I have written a fair bit about this in these previous updates. The audit is a routine exercise, but this is also our first, so we are learning a lot.
The deadline for us to provide our documentation submission to the auditors is next Tuesday, so everyone on the finance side of the operation has been really busy getting all that ready. Huge thanks to everyone for their extra effort here.
GUADEC & LAS planning
Conference planning has been another theme in the past few weeks. For GUADEC, accommodation options have been announced, artwork has been produced, and local information is going up on the website.
Linux App Summit, which we co-organise with KDE, has been a bit delayed this year, but we have a venue now and are in the process of finalizing the budget. Announcements about the dates and location will hopefully be made quite soon.
Google verification
A relatively small task, but a good one to highlight: this week we facilitated (i.e. paid for) the assessment process for GNOME’s integration with Google services. This is an annual process we have to go through in order to keep Evolution Data Server working with Google.
Infrastructure optimization
Finally, Bart, along with Andrea, has been doing some work to optimize the resource usage of GNOME infrastructure. If you are using GNOME services you might have noticed some subtle changes as a result of this, like Anubis popping up more frequently.
That’s it for this week. Thanks for reading; I’ll see you next week!