
      Juan Pablo Ugarte: Casilda 0.9.0 Development Release!

      news.movim.eu / PlanetGnome • 22 June • 1 minute

    Native rendering

    I am pleased to announce a new development release of Casilda, a simple Wayland compositor widget for Gtk 4 that can be used to embed other processes' windows in your Gtk 4 application.

    The main feature of this release is dmabuf support, which allows clients to use hardware-accelerated libraries for their rendering, brought to you by Val Packett!

    You can see all her cool work here.

    This allowed me to stop relying on the wlroots scene compositor and render client windows directly in the widget's snapshot method, which is not only faster but also integrates better with Gtk: the background is no longer handled by wlroots and can be set with CSS like for any other widget. This is why I decided to deprecate the bg-color property.

    Other improvements include transient window support and better initial window placement.

    Release Notes

      • Fix rendering glitch on resize
      • Do not use wlr scene layout
      • Render windows and popups directly in snapshot()
      • Position windows on center of widget
      • Position transient windows on center of parent
      • Fix unmaximize
      • Add dmabuf support (Val Packett)
      • Added vapi generation (PaladinDev)
      • Add library soname (Benson Muite)

    Fixed Issues

      • #11 “Resource leak causing crash with dmabuf”
      • #3 “Unmaximize not working properly”
      • #2 “Add dmabuff support” (Val Packett)
      • #9 “Bad performance”
      • #10 “Add a soname to shared library” (Benson Muite)

    Where to get it?

    Source code lives on GNOME GitLab here

    git clone https://gitlab.gnome.org/jpu/casilda.git

    Matrix channel

    Have any questions? Come chat with us at #cambalache:gnome.org

    Mastodon

    Follow me on Mastodon at @xjuan to get news related to Casilda and Cambalache development.

    Happy coding!


      Steven Deobald: 2025-06-20 Foundation Report

      news.movim.eu / PlanetGnome • 21 June • 3 minutes

    Welcome to the mid-June Foundation Report! I’m in an airport! My back hurts! This one might be short! haha

    ## AWS OSS

    Before the UN Open Source Week, Andrea Veri and I had a chance to meet Mila Zhou, Tom (Spot) Callaway, and Hannah Aubry from AWS OSS. We thanked them for their huge contribution to GNOME’s infrastructure but, more importantly, discussed other ways we can partner with them to make GNOME more sustainable and secure.

    I’ll be perfectly honest: I didn’t know what to expect from a meeting with AWS. And, as it turns out, it was such a lovely conversation that we chatted nonstop for nearly 5 hours and then continued the conversation over supper. At a… vegan Chinese food place, of all things? (Very considerate of them to find some vegetarian food for me!) Lovely folks and I can’t wait for our next conversation.

    ## United Nations Open Source Week

    The big news for me this week is that I attended the United Nations Open Source Week in Manhattan. The Foundation isn’t in a great financial position, so I crashed with friends-of-friends (now also friends!) on an air mattress in Queens. Free (as in ginger beer) is a very reasonable price but my spine will also appreciate sleeping in my own bed tonight. 😉

    I met too many people to mention, but I was pleasantly surprised by the variety of organizations and different folks in attendance. Indie hackers, humanitarian workers, education specialists, Digital Public Infrastructure Aficionados, policy wonks, OSPO leaders, and a bit of Big Tech. I came to New York to beg for money (and I did do a bit of that) but it was the conversations about the f/oss community that I really enjoyed.

    We did do a “five Executive Directors” photo, because 4 previous GNOME Foundation EDs happened to be there. One of them was Richard! I got to hang out with him in person and he gave me a hug. So did Karen. It was nice. The history matters (recent history and ancient history) … and GNOME has a lot of history.

    Special shout-out to Sumana Harihareswara (it’s hard for me to spell that without an “sh”) who organized an extremely cool, low-key gathering in an outdoor public space near the UN. She couldn’t make the conf herself but she managed to create the best hallway track I attended. (By dragging a very heavy bag of snacks and drinks all the way from Queens.) More of that, please. The unconf part, not the dragging snacks across the city part.

    All in all, a really exciting and exhausting week.

    ## Donation Page

    As I mentioned above, the GNOME Foundation’s financial situation could use help. We’ll be starting a donation drive soon to encourage GNOME users to donate, using the new donation page:

    https://donate.gnome.org

    This blog post is as good a time as any to say this isn’t just a cash grab. The flip side of finding money for the Foundation is finding ways to grow the project with it. I’m of the opinion that this needs to include more than running infrastructure and conferences. Those things are extremely important — nothing in recent memory has reminded me of the value of in-person interactions like meeting a bunch of new friends here in New York — the real key to the GNOME project is the project itself. And the core of the project is development.

    As usual: No Promises. But if you want to hear a version of what I was saying all week, you can bug Adrian Vovk for his opinion about my opinions. 😉

    The donation page would not have been possible without the help of Bart Piotrowski, Sam Hewitt, Jakub Steiner, Shivam Singhal, and Yogiraj Hendre. Thanks everyone for putting in the hard work to get this over the line, to test it with your own credit cards, and to fix bugs as they cropped up.

    We will keep iterating on this as we learn more about what corporate sponsors want in exchange for their sponsorship and as we figure out how best to support Causes (campaigns), such as #a11y development.

    ## Elections

    Voting has closed! Thank you to all the candidates who ran this year. I know that running for election on the Board is intimidating but I’m glad folks overcame that fear and made the effort to run campaigns. It was very important to have you all in the race and I look forward to working with my new bosses once they take their seats. That’s when you get to learn about governance and demonstrate that you’re willing to put in the work. You might be my bosses… but I’m going to push you. 😉

    Until next week!


      Jussi Pakkanen: Book creation using almost entirely open source tools

      news.movim.eu / PlanetGnome • 20 June • 2 minutes

    Some years ago I wrote a book. Now I have written a second one, but because no publisher wanted to publish it I chose to self-publish a small print run and hand it out to friends (and whoever actually wants one) as a very-much-not-for-profit art project.

    This meant that I had to create every PDF used in the printing myself. I received the shipment from the printing house yesterday. The end result turned out very nice indeed.

    The red parts in this dust jacket are not ink but instead are done with foil stamping (it has a metallic reflective surface). This was done with Scribus. The cover image was painted by hand using watercolor paints. The illustrator used a proprietary image processing app to create the final TIFF version used here. She has since told me that she wants to eventually shift to an open source app due to ongoing actions of the company making the proprietary app.

    The cover itself is cloth with a debossed emblem. The figure was drawn in Inkscape and then copy-pasted into Scribus.

    Every fantasy book needs to have a map. This one has two, and they are printed in the end papers. The original picture was drawn with a nib pen and black ink. The printed version is brownish to give it that "old timey" look. Despite its apparent simplicity this one was the most difficult PDF to implement correctly. The image itself is 1-bit and printed with a Pantone spot ink. Trying to print this with CMYK inks would just not have worked. Because the PDF drawing model for spot inks in images behaves, let's say, in an unexpected way, I had to write a custom script to create the PDF with CapyPDF. As far as I know no other open source tool can do this correctly, not even Scribus. The relevant bug can be found here. It was somewhat nerve-wracking to send this out to the print shop with zero practical experience and only the theoretical basis of "according to my interpretation of the PDF spec, this should be correct". As this is the first ever commercial print job using CapyPDF, it's quite fortunate that it succeeded pretty much perfectly.

    The inner pages were created with the same Chapterizer tool as the previous book. It uses Pango and Cairo to generate PDFs. Because Cairo only produces RGB PDFs, the result had to be converted to grayscale using Ghostscript.
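For that final step, a Ghostscript invocation along these lines performs the RGB-to-grayscale rewrite (a sketch with made-up file names; exact flag behaviour can vary between Ghostscript versions):

```shell
# Rewrite an RGB PDF as grayscale (hypothetical file names)
gs -o inner-pages-gray.pdf \
   -sDEVICE=pdfwrite \
   -sColorConversionStrategy=Gray \
   -dProcessColorModel=/DeviceGray \
   inner-pages-rgb.pdf
```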


      Matthew Garrett: My a11y journey

      news.movim.eu / PlanetGnome • 20 June • 3 minutes

    23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.

    The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.

    This was, uh, something of an on-the-job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.

    But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.

    I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.

    But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.

    That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.

    When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.


      Michael Meeks: 2025-06-19 Thursday

      news.movim.eu / PlanetGnome • 19 June

    • Up early, tech planning call in the morning, mail catch-up, admin and TORF pieces.
    • Really excited to see the team get the first COOL 25.04 release shipped, coming to a browser near you:

      Seems our videos are getting more polished over time too which is good.
    • Mail, admin, compiled some code too.

      Peter Hutterer: libinput and tablet tool eraser buttons

      news.movim.eu / PlanetGnome • 19 June • 3 minutes

    This is, to some degree, a followup to this 2014 post. The TLDR of that is that, many a moon ago, the corporate overlords at Microsoft that decide all PC hardware behaviour decreed that the best way to handle an eraser emulation on a stylus is by having a button that is hardcoded in the firmware to, upon press, send a proximity out event for the pen followed by a proximity in event for the eraser tool. Upon release, they dogma'd, said eraser button shall virtually move the eraser out of proximity followed by the pen coming back into proximity. Or, in other words, the pen simulates being inverted to use the eraser, at the push of a button. Truly the future, back in the happy times of the mid 20-teens.

    In a world where you don't want to update your software for a new hardware feature, this of course makes perfect sense. In a world where you write software to handle such hardware features, significantly less so.

    Anyway, it is now 11 years later, the happy 2010s are over, and Benjamin and I have fixed this very issue in a few udev-hid-bpf programs but I wanted something that's a) more generic and b) configurable by the user. Somehow I am still convinced that disabling the eraser button at the udev-hid-bpf level will make users that use said button angry and, dear $deity, we can't have angry users, can we? So many angry people out there anyway, let's not add to that.

    To get there, libinput's guts had to be changed. Previously libinput would read the kernel events, update the tablet state struct and then generate events based on various state changes. This works great when you e.g. get a button toggle; it doesn't work quite as well when your state change was one or two event frames ago (because prox-out of one tool and prox-in of another tool are at least 2 events). Extracting that older state change was like swapping the type of meatballs in an IKEA meal after it's been served - doable in theory, but very messy.

    Long story short, libinput now has an internal plugin system that can modify the evdev event stream as it comes in. It works like a pipeline: the events are passed from the kernel to the first plugin, modified, passed to the next plugin, etc. Eventually the last plugin is our actual tablet backend, which will update tablet state, generate libinput events, and generally be grateful about having fewer quirks to worry about. With this architecture we can hold back the proximity events and filter them (if the eraser comes into proximity) or replay them (if the eraser does not come into proximity). The tablet backend is none the wiser; it either sees proximity events when those are valid or it sees a button event (depending on configuration).

    This architecture approach is so successful that I have now switched a bunch of other internal features over to use that internal infrastructure (proximity timers, button debouncing, etc.). And of course it laid the ground work for the (presumably highly) anticipated Lua plugin support. Either way, happy times. For a bit. Because for those not needing the eraser feature, we've just increased your available tool button count by 100%[2] - now there's a headline for tech journalists that just blindly copy claims from blog posts.

    [1] Since this is a bit wordy, the libinput API call is just libinput_tablet_tool_config_eraser_button_set_button()
    [2] A very small number of styli have two buttons and an eraser button so those only get what, 50% increase? Anyway, that would make for a less clickbaity headline so let's handwave those away.


      Marcus Lundblad: Midsommer Maps

      news.movim.eu / PlanetGnome • 18 June • 4 minutes

    As tradition has it, it's about time for the (Northern Hemisphere) summer update on the happenings around Maps!

    about-49.alpha.png
    About dialog for GNOME Maps 49.alpha development


    Bug Fixes

    Since the GNOME 48 release in March, there have been some bug fixes, such as correctly handling daylight savings time in public transit itineraries retrieved from Transitous. Also, James Westman fixed a regression where the search result popover wasn't showing on small-screen devices (phones) because of sizing issues.

    More Clickable Stuff

    More map features can now be selected directly in the map view by clicking/tapping on their symbols, such as roads and house numbers (and then, like any other POI, they can be marked as favorites).
    place-bubble-road.png
    Showing place information for the AVUS motorway in Berlin

    And related to traffic and driving, exit numbers are now shown for highway junctions (exits) when available.
    motorway-exit-right.png
    Showing information for a highway exit in a driving-on-the-right locality

    motorway-exit-left.png
    Showing information for a highway exit in a driving-on-the-left locality

    Note how the direction the arrow points depends on which side of the road vehicle traffic drives in the country/territory of the place…
    Also, as an additional attention to detail, the icon for the “Directions” button now shows a mirrored “turn off left” icon for places in drives-on-the-left countries.

    Furigana Names in Japanese

    For some time (since around when we redesigned the place information “bubbles”) we have shown the native name of a place under the name translated into the user's locale (when they differ).
    There is an established OpenStreetMap tag for phonetic names in Japanese (using Hiragana), name:ja-Hira, akin to Furigana ( https://en.wikipedia.org/wiki/Furigana ), used to aid with pronunciation of place names. I had been thinking it might be a good idea to show this, when available, as the dimmed supplemental text in cases where the displayed name and the native name are identical, e.g. when the user's locale is Japanese and they are looking at Japanese names. For other locales in these cases the displayed name would typically be the Romaji name, with the full Japanese (Kanji) name displayed under it as the native name.
    So, I took the opportunity to discuss this with my colleague Daniel Markstedt, who speaks fluent Japanese and has lived many years in Japan. As he liked the idea, and a demo of it, I decided to go ahead with this!
    hiragana-name.png
    Showing a place in Japanese with supplemental Hiragana name

    Configurable Measurement Systems

    Since more or less the start of time, Maps has shown distances in feet and miles when using a United States locale (or, more precisely, when measurements use such a locale: LC_MEASUREMENT, speaking in environment-variable terms), and standard metric measurements for other locales.
    Despite this, we have several times received bug reports about Maps not using the correct units. The issue here is that many users prefer to have their computers speaking American English.
    So, I finally caved in and added an option to override the system default.
    hamburger-preferences.png
    Hamburger menu

    measurement-system-menu.png
    Hamburger menu showing measurement unit selection

    Station Symbols

    One feature I had been wanting to implement since we moved to vector tiles and integrated the customized highway shields from OpenStreetMap Americana is showing localized symbols for e.g. metro stations, such as the classic “roundel” symbol used in London, and the “T” in Stockholm.
    After adding the network:wikidata tag to the pre-generated vector tiles, this became possible to implement. We chose to rely on the Wikidata tag instead of the network name/abbreviations, as it is more stable, and names risk collisions with unrelated networks having the same (short) name.
    hamburg-u-bahn.png
    U-Bahn station in Hamburg

    copenhagen-metro.png
    Metro stations in Copenhagen

    boston-t.png
    Subway stations in Boston

    berlin-s-bahn.png
    S-Bahn station in Berlin

    This requires the stations to be tagged consistently to work out. I did some mass tagging of metro stations in Stockholm, Oslo, and Copenhagen. Other than that I mainly chose places where there's at least partial coverage already.
    If you'd like to contribute and update a network with the network Wikidata tag, I prepared a few quick steps for making such an edit with the JOSM OpenStreetMap desktop editor.
    Download a set of objects to update using an Overpass query; as an example, selecting the stations of the Washington DC Metro:

    [out:xml][timeout:90][bbox:{{bbox}}];
    (
      nwr["network"="Washington Metro"]["railway"="station"];
    );
    (._;>;);
    out meta;

    josm-download-overpass-query.png
    JOSM Overpass download query editor

    Select the region to download from

    josm-select-region.png
    Select region in JOSM

    Select to only show the datalayer (not showing the background map) to make it easier to see the raw data.

    josm-show-layers.png
    Toggle data layers in JOSM

    Select the nodes.

    josm-select-nodes.png
    Show raw datapoints in JSOM

    Edit the field in the tag edit panel to update the value for all selected objects

    josm-edit-network-wikidata-tag.png
    Showing tags for selected objects

    Note that this sample assumes the relevant station nodes were already tagged with network names (the network tag). Other queries might be needed to limit the selection.

    Also, it could be a good idea to reach out to local OSM communities before making bulk edits like this (e.g. if there is no such tagging at all in a specific region) to make sure it would be aligned with expectations and such.

    It will then also potentially take a while before the change gets included in our monthly vector tile update.

    When this has been done, and given a suitable icon is available, e.g. in the public domain or on Wikimedia Commons, it can be bundled in data/icons/stations and a definition added to the data mapping in src/mapStyle/stations.js.

    And More…

    One feature that has been long wanted is the ability to download maps for offline usage. Lately, this is precisely what James Westman has been working on.

    It's still an early draft, so we'll see when it is ready, but it already looks pretty promising.

    hamburger-preferences.png
    Showing the new Preferences option



    download-dialog.png
    Preference dialog with downloads

    download-select-area.png
    Selecting region to download

    download-rename.png
    Entering a name for a downloaded region

    download-areas.png
    Dialog showing downloaded areas

    And that's it for now!


      Michael Meeks: 2025-06-18 Wednesday

      news.movim.eu / PlanetGnome • 18 June

    • Up too early, out for a run with J. Sync with Dave. Plugged away at calls, admin, partner call, sales call, catch up with Philippe and Italo.
    • Birthday presents at lunch - new (identical) trousers, and a variable DC power supply for some electronics.
    • Published the next strip around the excitement of setting up your own non-profit structure:

      Alley Chaggar: Demystifying The Codegen Phase Part 1

      news.movim.eu / PlanetGnome • 18 June • 3 minutes

    Intro

    I want to start off by saying I’m really glad that my last blog was helpful to many wanting to understand Vala’s compiler. I hope this blog will be just as informative and helpful. I want to talk a little about the basics of the compiler again, but this time catering to the codegen phase: the phase I’m actually working on, but the one with the least information in the Vala Docs.

    Last blog, I briefly mentioned the directories codegen and ccode being part of the codegen phase. This blog will go more into depth about it. The codegen phase takes the AST and outputs the C code tree (ccode* objects), from which C code can be generated more easily and then compiled by GCC or another C compiler you have installed. When dealing with this phase, it’s really beneficial to know and understand at least a little bit of C.

    ccode Directory

    • Many of the files in the ccode directory are derived from the class CCodeNode, valaccodenode.vala.
    • The files in this directory represent C Constructs. For example, the valaccodefunction.vala file represents a C code function. Regular C functions have function names, parameters, return types, and bodies that add logic. Essentially, what this class specifically does, is provide the building blocks for building a function in C.

      Cfunction3.png

        //...
        writer.write_string (return_type);
        if (is_declaration) {
            writer.write_string (" ");
        } else {
            writer.write_newline ();
        }
        writer.write_string (name);
        writer.write_string (" (");
        int param_pos_begin = (is_declaration ? return_type.char_count () + 1 : 0) + name.char_count () + 2;

        bool has_args = (CCodeModifiers.PRINTF in modifiers || CCodeModifiers.SCANF in modifiers);
        //...


    This code snippet is part of the ccodefunction file, and what it’s doing is overriding the ‘write’ function that is originally from ccodenode. It’s actually writing out the C function.

    codegen Directory

    • The files in this directory are higher-level components responsible for taking the compiler’s internal representation, such as the AST and transforming it into the C code model ccode objects.
    • Going back to the example of the ccodefunction, codegen will take a function node from the abstract syntax tree (AST), and will create a new ccodefunction object. It then fills this object with information like the return type, function name, parameters, and body, which are all derived from the AST. Then the CCodeFunction.write() (the code above) will generate and write out the C function.

      //...
      private void add_get_property_function (Class cl) {
      		var get_prop = new CCodeFunction ("_vala_%s_get_property".printf (get_ccode_lower_case_name (cl, null)), "void");
      		get_prop.modifiers = CCodeModifiers.STATIC;
      		get_prop.add_parameter (new CCodeParameter ("object", "GObject *"));
      		get_prop.add_parameter (new CCodeParameter ("property_id", "guint"));
      		get_prop.add_parameter (new CCodeParameter ("value", "GValue *"));
      		get_prop.add_parameter (new CCodeParameter ("pspec", "GParamSpec *"));
        
      		push_function (get_prop);
      //...
      

    This code snippet is from valagobjectmodule.vala and it’s calling CCodeFunction (again from valaccodefunction.vala) and adding the parameters, which calls valaccodeparameter.vala. What this would output is something that looks like this in C:

        void _vala_get_property (GObject *object, guint property_id, GValue *value, GParamSpec *pspec) {
           //... 
        }
    

    Why do all this?

    Now you might ask why? Why separate codegen and ccode?

    • We split things into codegen and ccode to keep the compiler organized, readable, and maintainable. It prevents us from having to constantly write C code representations from scratch all the time.
    • It also reinforces the idea of polymorphism: objects can behave differently depending on their subclass.
    • And it lets us do hidden generation by adding new helper functions, temporary variables, or inlined optimizations after the AST and before the C code output.

    Jsonmodule

    I’m happy to say that I am making a lot of progress with the JSON module I mentioned last blog. The JSON module follows very closely other modules in the codegen, specifically the gtk module and the gobject module. It will be calling ccode functions to make ccode objects and creating helper methods so that the user doesn’t need to manually override certain JSON methods.