      Nancy Nyambura: Outreachy Update: Two Weeks of Configs, Word Lists, and GResource Scripting

      news.movim.eu / PlanetGnome • 7 July • 1 minute

    It has been a busy two weeks of learning as I continued working on the GNOME Crosswords project. I have mainly been improving how word lists are managed and included via configuration files.

    I started by writing documentation for how to add a new word list to the project by using .conf files. The configuration files define properties like display name, language, and origin of the word list so that contributors can simply add new vocabulary datasets. Each word list can optionally pull in definitions from Wiktionary and parse them, converting them into resource files for use by the game.

    In addition, I wrote a script that takes the contents of a config file and turns it into a GResource XML file. This isn’t the bulk of the project, but a useful tool that automates part of the setup and ensures consistency between different word list entries. It takes in a .conf file and outputs a corresponding .gresource.xml.in file, mapping the necessary resources to suitable aliases. This was a good chance for me to learn more about Python’s argparse and configparser modules.
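
    The script itself isn’t shown in the post, but the general shape is easy to sketch. Below is a minimal illustration of the idea; the [Word List] section and the DisplayName/Language/Files keys are hypothetical stand-ins rather than the actual Crosswords schema:

    #!/usr/bin/env python3
    """Hypothetical sketch: turn a word-list .conf into a .gresource.xml.in.

    Assumed (illustrative) input format:

        [Word List]
        DisplayName = English (UK)
        Language = en_GB
        Files = en_GB/words.txt;en_GB/definitions.json
    """
    import argparse
    import configparser

    def main():
        parser = argparse.ArgumentParser(
            description="Convert a word-list .conf into a .gresource.xml.in")
        parser.add_argument("conf", help="input .conf file")
        parser.add_argument("output", help="output .gresource.xml.in file")
        args = parser.parse_args()

        config = configparser.ConfigParser()
        config.read(args.conf)
        section = config["Word List"]

        # Use the language code to build the resource prefix, and give each
        # file a short, stable alias so entries stay consistent.
        prefix = "/org/gnome/Crosswords/wordlists/" + section["Language"]
        lines = ['<?xml version="1.0" encoding="UTF-8"?>',
                 "<gresources>",
                 '  <gresource prefix="%s">' % prefix]
        for path in section["Files"].split(";"):
            alias = path.rsplit("/", 1)[-1]
            lines.append('    <file alias="%s">%s</file>' % (alias, path))
        lines += ["  </gresource>", "</gresources>", ""]

        with open(args.output, "w") as out:
            out.write("\n".join(lines))

    if __name__ == "__main__":
        main()

    It would be run as e.g. python3 conf_to_gresource.py en_GB.conf en_GB.gresource.xml.in (file names hypothetical).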

    Beyond scripting, I’ve been in regular communication with my mentor, seeking feedback and guidance to improve both my technical and collaborative skills. One key takeaway has been the importance of sharing smaller, incremental commits rather than submitting a large block of work all at once, a practice that not only helps with clarity but also encourages consistent progress tracking. I was also advised to avoid relying on AI-generated code and instead focus on writing clear, simple, and understandable solutions, which I’ve consciously applied to both my code and documentation.

    Next, I’ll be looking into how definitions are extracted and how importer modules work. Lots more to discover, especially about the innards of the Wiktionary extractor tool.

    Looking forward to sharing more updates as I get deeper into the project.

      Bradley M. Kuhn: Copyleft-next Relaunched!

      news.movim.eu / PlanetGnome • 6 July

    I am excited that Richard Fontana and I have announced the relaunch of copyleft-next.

    The copyleft-next project seeks to create a copyleft license for the next generation that is designed in public, by the community, using standard processes for FOSS development.

    If this interests you, please join the mailing list and follow the project on the fediverse (on its Mastodon instance).

    I also wanted to note that as part of this launch, I moved my personal fediverse presence from floss.social to bkuhn@copyleft.org.

      Jussi Pakkanen: Deoptimizing a red-black tree

      news.movim.eu / PlanetGnome • 5 July • 1 minute

    An ordered map is typically slower than a hash map, but it is needed every now and then. Thus I implemented one in Pystd. This implementation does not use individually allocated nodes, but instead stores all data in a single contiguous array.

    Implementing the basics was not particularly difficult. Debugging it to actually work took ages of staring at the debugger, drawing trees by hand on paper, printfing things out in Graphviz format and copypasting the output to a visualiser. But eventually I got it working. Performance measurements showed that my implementation is faster than std::map but slower than std::unordered_map.

    So far so good.

    The test application creates a tree with a million random integers. This means that the nodes are most likely in a random order in the backing store and searching through them causes a lot of cache misses. Having all nodes in an array means we can rearrange them for better memory access patterns.

    I wrote some code to reorganize the nodes so that the root is at the first spot and the remaining nodes are stored layer by layer. In this way the query always processes memory "left to right" making things easier for the branch predictor.
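
    In array terms, that rearrangement is just a breadth-first walk plus an index remap. A minimal sketch of the idea, in illustrative Python rather than Pystd's actual C++:

    from collections import deque

    # Nodes live in one array as (key, left, right) tuples, where left and
    # right are indices into the same array (or None). A breadth-first walk
    # from the root yields the layer-by-layer layout described above.
    def relayout_bfs(nodes, root):
        order = []                  # old indices, in visit order
        queue = deque([root])
        while queue:
            i = queue.popleft()
            if i is None:
                continue
            order.append(i)
            _key, left, right = nodes[i]
            queue.append(left)
            queue.append(right)

        remap = {old: new for new, old in enumerate(order)}
        new_nodes = [(nodes[old][0],
                      remap.get(nodes[old][1]),
                      remap.get(nodes[old][2]))
                     for old in order]
        return new_nodes, 0         # the root now sits at index 0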

    Or that's the theory anyway. In practice this made things slower. And not even a bit slower, the "optimized version" was more than ten percent slower. Why? I don't know. Back to the drawing board.

    Maybe interleaving both the left and right child node next to each other is the problem? That places two mutually exclusive pieces of data on the same cache line. An alternative would be to place the entire left subtree in one memory area and the right one in a separate area. After thinking about this for a while, I realized this can be accomplished by storing the nodes in in-order traversal order, i.e. in numerical order.
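
    Relative to the breadth-first sketch above, only the visit order changes; an in-order walk lays out every left subtree contiguously, then the node, then the right subtree:

    # Illustrative variant: collect old indices in-order instead of
    # breadth-first, then apply the same index remap as before.
    # (Recursive for clarity; a real implementation would iterate.)
    def inorder(nodes, i, order):
        if i is None:
            return
        _key, left, right = nodes[i]
        inorder(nodes, left, order)
        order.append(i)
        inorder(nodes, right, order)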

    I did that. It was also slower than a random layout. Why? Again: no idea.

    Time to focus on something else, I guess.

      Steven Deobald: 2025-07-05 Foundation Update

      news.movim.eu / PlanetGnome • 5 July • 6 minutes

    ## The Cat’s Out Of The Bag

    Since some of you are bound to see this Reddit comment and my reply, it’s probably useful for me to address it in a more public forum, even if it violates my “No Promises” rule.

    No, this wasn’t a shoot-from-the-hip reply. This has been the plan since I proposed a fundraising strategy to the Board. It is my intention to direct more of the Foundation’s resources toward GNOME development, once the Foundation’s basic expenses are taken care of. (Currently they are not.) The GNOME Foundation won’t stop running infrastructure, planning GUADEC, providing travel grants, or any of the other good things we do. But rather than the Foundation contributing to GNOME’s development exclusively through inbound/restricted grants, we will start to produce grants and fellowships ourselves.

    This will take time and it will demand more of the GNOME project. The project needs clear governance and management or we won’t know where to spend money, even if we have it. The Foundation won’t become a kingmaker, nor will we run lotteries — it’s up to the project to make recommendations and help us guide the deployment of capital toward our mission.

    ## Friends of GNOME

    So far, we have a cute little start to our fundraising campaign: I count 172 public Friends of GNOME over on https://donate.gnome.org/ … to everyone who contributes to GNOME and to everyone who donates to GNOME: thank you. Every contribution makes a huge difference and it’s been really heartwarming to see all this early support.

    We’ve taken the first step out of our cozy f/oss spaces: Reddit. One user even set up a “show me your donation!” thread. It’s really cute. 🙂 It’s hard to express just how important it is that we go out and meet our users for this exercise. We need them to know what an exciting time it is for GNOME: Windows 10 is dying, macOS gets worse with every release, and they’re going to run GNOME on a phone soon. We also need them to know that GNOME needs their help.

    Big thanks to Sri for pushing this, and to him and Brage for moderating /r/gnome. It matters a lot to find a shared space with users and if, as a contributor, you’ve been feeling like you need a little boost lately, I encourage you to head over to those Reddit threads. People love what you build, and it shows.

    ## Friends of GNOME: Partners

    The next big thing we need to do is to find partners who are willing to help us push a big message out across a lot of channels. We don’t even know who our users are, so it’s pretty hard to reach them. The more people see that GNOME needs their help, the more help we’ll get.

    Everyone I know who runs GNOME (but doesn’t pay much attention to the project) said the same thing when I asked what they wanted in return for a donation: “Nothing really… I just need you to ask me. I didn’t know GNOME needed donations!”

    If you know of someone with a large following or an organization with a lot of reach (or, heck, even a little reach), please email me and introduce me. I’m happy to get them involved to boost us.

    ## Friends of GNOME: Shell Notification

    KDE, Thunderbird, and Blender have had runaway success with their small donation notifications. I’m not sure whether we can do this for GNOME 49, but I’d love to try. I’ve opened an issue here:

    https://gitlab.gnome.org/Teams/Design/os-mockups/-/issues/274

    We may not know who our users are. But our software knows who our users are. 😉

    ## Annual Report

    I should really get on this but it’s been a busy week with other things. Thanks everyone who’s contributed their thoughts to the “Successes for 2025” issue so far. If you don’t see your name and you still want to contribute something, please go ahead!

    ## Fiscal Controls

    One of the aforementioned “other things” is Fiscal Controls.

    This concept goes by many names. “Fiscal Controls”, “Internal Controls”, “Internal Policies and Procedures”, etc. But they all refer to the same thing: how to manage financial risk. We’re taking a three-pronged approach to start with:

    1. Reduce spend and tighten up policies. We have put the travel policy on pause (barring GUADEC, which was already approved) and we intend to tighten up all our policies.
    2. Clarity on capital shortages. We need to know exactly what our P&L looks like in any given month, and what our 3-month, 6-month, and annual projections look like based on yesterday’s weather. Our bookkeepers, Ops team, and new treasurers are helping with this.
    3. Clarity in reporting. A 501(c)(3) is … kind of a weird shape. Not everyone on the Board is familiar with running a business, and most are certainly not familiar with running a non-profit. So we need to make it painfully straightforward for everyone on the Board to understand the details of our financial position, without getting into the weeds: How much money are we responsible for, as a fiscal host? How much money is restricted? How much core money do we have? Accounting is more art than science, and the nuances of reporting accurately (but without forcing everyone to read a balance sheet) are a large part of why that’s the case. Again, we have a lot of help from our bookkeepers, Ops team, and new treasurers.

    There’s a lot of work to do here and we’ll keep iterating, but these feel like strong starts.

    ## Organizational Resilience

    The other aforementioned “other thing” is resilience. We have a few things happening here.

    First, we need broader ownership, control, and access to bank accounts. This is, of course, related to, but different from, fiscal controls — our controls ensure no one person can sign themselves a cheque for $50,000. Multiple signatories ensure that such responsibility doesn’t rest with a single individual. Everyone at the GNOME Foundation has impeccable moral standing, but people do die, and we need to add resilience to that inevitability. More realistically (and immediately), we will be audited soon and the auditors will not care how trustworthy we believe one another to be.

    Second, we have our baseline processes: filing 990s, renewing our registration, renewing insurance, etc. All of these processes should be accessible to (and, preferably, executable by) multiple people.

    Third, we’re finally starting to make good use of Vaultwarden. Thanks again, Bart, for setting this up for us.

    Fourth, we need to ensure we have at least 3 administrators on each of our online accounts. Or, at worst, 2 administrators. Online accounts with an account owner should lean on an organizational account owner (not an individual) which multiple people control together. Thanks Rosanna for helping sort this out.

    Last, we need at least 2 folks with root-level access to all our self-hosted services. This is of course true in the most literal sense, but we also need our SREs to have accounts with each service.

    ## Digital Wellbeing Kickoff

    I’m pleased to announce that the Digital Wellbeing contract has kicked off! The developer who was awarded the contract is Ignacy Kuchciński and he has begun working with Philip and Sam as of Tuesday.

    ## Office Hours

    I had a couple pleasant conversations with hackers this week: Jordan Petridis and Sophie Herold. I asked Sophie what she thought about the idea of “office hours” as I feel like I’ve gotten increasingly disconnected from the community after my first few weeks. Her response was something to the effect of “you can only try.” 🙂

    So let’s do that. I’ll invite maintainers and if you’d like to join, please reach out to a maintainer to find out the BigBlueButton URL for next Friday.

    ## A Hacker In Need Of Help

    We have a hacker in the southwest United States who is currently in an unsafe living situation. This person has given me permission to ask for help on their behalf. If you or someone you know could provide a safe temporary living situation within the continental United States, please get in touch with me. They just want to hack in peace.

      Hans de Goede: Recovering a FP2 which gives "flash write failure" errors

      news.movim.eu / PlanetGnome • 4 July

    This blog post describes my successful OS re-install on a Fairphone 2 which was giving "flash write failure" errors when flashed with fastboot via the flash_FP2_factory.sh script. I'm writing down my recovery steps in case they are useful for anyone else.

    I believe that this is caused by the bootloader code which implements fastboot not having the ability to retry recoverable eMMC errors. It is still possible to write the eMMC from Linux, which can retry these errors.

    So we can recover by directly fastboot-ing a recovery.img and then flashing things over adb.

    ( See step by step instructions... )


      This Week in GNOME: #207 Replacing Shortcuts

      news.movim.eu / PlanetGnome • 4 July • 5 minutes

    Update on what happened across the GNOME project in the week from June 27 to July 04.

    GNOME Core Apps and Libraries

    Sophie 🏳️‍🌈 🏳️‍⚧️ (she/her) reports

    The Release Team is happy to announce that Papers will be the default Document Viewer starting with GNOME 49. This comes after a Herculean effort by the Papers maintainers and contributors that started about four years ago. The inclusion into GNOME Core was lately blocked only by missing screen-reader support, which is now ready to be merged. Papers is a fork of Evince motivated by a faster pace of development.

    Papers is not just a GTK 4 port but also brings new features like better document annotations and support for mobile form factors. It is currently maintained by Pablo Correa Gomez, Qiu Wenbo, Markus Göllnitz, and lbaudin.

    (Screenshot of Papers)

    Emmanuele Bassi reports

    While GdkPixbuf, the elderly statesperson of image loading libraries in GNOME, is being phased out in favour of better alternatives, like Glycin, we are still hard at work to ensure it’s working well enough while applications and libraries are ported. Two weeks ago, GdkPixbuf acquired a safe, sandboxed image loader using Glycin; this week, this loader has been updated to be the default on Linux. The Glycin loader has also been updated to read SVG, and save image data including metadata. Additionally, GdkPixbuf has a new Android-native loader, using platform API; this allows loading icon assets when building GTK for Android. For more information, see the release notes for GdkPixbuf 2.43.3, the latest development snapshot.

    Sophie 🏳️‍🌈 🏳️‍⚧️ (she/her) announces

    The nightly GNOME Flatpak runtime and SDK org.gnome.Sdk//master are now based on the Freedesktop runtime and SDK 25.08beta. If you are using the nightly runtime in your Flatpak development manifest, you might have to adjust a few things:

    • If you are using the LLVM extension, the required sdk-extensions is now org.freedesktop.Sdk.Extension.llvm20. Don’t forget to also adjust the append-path. On your development system you will probably also have to run flatpak install org.freedesktop.Sdk.Extension.llvm20//25.08beta.
    • If you are using other SDK extensions, they might also require a newer version. They can be installed with commands like flatpak install org.freedesktop.Sdk.Extension.rust-stable//25.08beta.

    Libadwaita

    Building blocks for modern GNOME apps using GTK4.

    Alice (she/her) 🏳️‍⚧️🏳️‍🌈 says

    libadwaita finally has a replacement for the deprecated GtkShortcutsWindow: AdwShortcutsDialog. AdwShortcutLabel is available as a separate widget as well, replacing GtkShortcutLabel.

    (Screenshot of the new AdwShortcutsDialog)

    Calendar

    A simple calendar application.

    Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️‍⚧️ announces

    Happy Disability Pride Month everybody :)

    During the past few weeks, there’s been an overwhelming amount of progress with accessibility on GNOME Calendar:

    • Event widgets/popovers will convey to screen readers that they are toggle buttons. They will also convey their states (whether they’re pressed or not) and that they have a popover. (See !587)
    • Calendar rows will convey to screen readers that they are check boxes, along with their states (whether they’re checked or not). Additionally, they will no longer require a second press of tab to get to the next row; one tab will be sufficient. (See !588)
    • Month and year spin buttons can now be interacted with using the arrow up/down buttons. They will also convey to screen readers that they are spin buttons, along with their properties (current, minimum, and maximum values). The month spin button will also wrap: going back a month from January will jump to December, and going to the next month from December will jump to January. (See !603)
    • Events in the agenda view will convey their respective titles and descriptions to screen readers. (See !606)

    All these improvements will be available in GNOME 49.

    Accessibility on Calendar has progressed to the point where I believe it’s safe to say that, as of GNOME 49, Calendar will be usable exclusively with a keyboard, without significant usability friction!

    There’s still a lot of work to be done in regards to screen readers, for example conveying time appropriately and event descriptions. But really, in just 6 months, we went from having absolutely no idea where to even begin with accessibility in Calendar — which has been an ongoing issue for literally a decade — to having something workable exclusively with a keyboard and screen reader! :3

    Huge thanks to Jeff Fortin for coordinating the accessibility initiative, especially with keeping the accessibility meta issue updated; Georges Stavracas for single-handedly maintaining GNOME Calendar and reviewing all my merge requests; and Lukáš Tyrychtr for sharing feedback in regards to usability.

    All my work so far has been unpaid and voluntary; hundreds of hours were put into developing and testing all the accessibility-related merge requests. I would really appreciate if you could spare a little bit of money to support my work, thank you! 🩷

    Glycin

    Sandboxed and extendable image loading and editing.

    Sophie 🏳️‍🌈 🏳️‍⚧️ (she/her) reports

    We recently switched our legacy image loading library GdkPixbuf over to internally using glycin, our new image loading library. Glycin is safer, faster, and supports more features. Something that we missed is how much software depends on GdkPixbuf’s image saving capabilities for different formats. But that’s why we make such changes early in the cycle: to find these issues.

    Glycin now supports saving images for the AVIF, BMP, DDS, Farbfeld, GIF, HEIC, ICO, JPEG, OpenEXR, PNG, QOI, TGA, TIFF, and WebP image formats. JXL will hopefully follow. This means GdkPixbuf can also save the formats that it could save before. The changes are available as glycin 2.0.alpha.6 and gdk-pixbuf 2.43.3.
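
    For applications, the classic saving API is unchanged. A small Python sketch of the call in question, which per the above is now serviced by glycin internally in gdk-pixbuf 2.43.3:

    import gi
    gi.require_version("GdkPixbuf", "2.0")
    from gi.repository import GdkPixbuf

    # The long-standing GdkPixbuf save path that much software depends on;
    # applications keep calling it as before, regardless of the new backend.
    pixbuf = GdkPixbuf.Pixbuf.new_from_file("input.png")
    pixbuf.savev("output.jpeg", "jpeg", ["quality"], ["90"])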

    Third Party Projects

    Alexander Vanhee says

    Gradia has been updated with the ability to upload edited images to an online provider of choice. I made sure users are both well informed about these services and can freely choose without being forced to use any particular one. The data related to this feature can also be updated dynamically without requiring a new release, enabling us to quickly address any data quality issues and update the list of providers as needed, without relying on additional package maintainer intervention.

    You can find the app on Flathub .

    (Screenshots: Gradia showcase, upload providers, and provider details)

    Bilal Elmoussaoui reports

    I have released an MCP (Model Context Protocol) server implementation that allows LLMs to access and interact with your favourite desktop environment. The implementation is available at https://github.com/bilelmoussaoui/gnome-mcp-server and you can read a bit more about it in my recent blog post: https://belmoussaoui.com/blog/21-mcp-server

    Phosh

    A pure wayland shell for mobile devices.

    Guido reports

    Phosh 0.48.0 is out:

    There’s a new lock screen plugin that shows all currently running media players (that support the MPRIS interface). You can thus switch between Podcasts, Shortwave and Gapless without having to unlock the phone.

    We also updated phosh’s compositor phoc to wlroots 0.19.0, bringing all the goodies from this release. Phoc now also remembers the output scale in case the automatic scaling doesn’t match your expectations.

    There’s more; see the full details here.

    (Screenshot: the Phosh media players lock-screen plugin)

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

      Richard Littauer: A handful of EDs

      news.movim.eu / PlanetGnome • 2 July

    I had the great privilege of going to UN Open Source Week at the UN, in New York City, last month. At one point, standing on the upper deck and looking out over the East River, I realized that there were more than a few former and current GNOME executive directors present. So, we got a photo.

    (Photo: six people lined up on a building overlooking the river)

    Stormy, Karen, Jeff, me, Steven, and Michael – not an ED, but the host of the event and the former board treasurer – all lined up.

    Fun.

      Carlos Garnacho: Developing an application with TinySPARQL in 2025

      news.movim.eu / PlanetGnome • 2 July • 12 minutes

    Back a couple of months ago, I was given the opportunity to talk at LAS about search in GNOME, and the ideas floating around to improve it. Part of the talk was dedicated to touting the benefits of TinySPARQL as the base for filesystem search, and how in solving the crazy LocalSearch usecases we ended up with a very versatile tool for managing application data, either application-private or shared with other peers.

    It was none other than our (then) future ED in a trench coat (I figure!) who, in the question round, forced my hand into teasing an application I had been playing with, to showcase how TinySPARQL should be used in modern applications. Now, after finally having spent some more time on it, I feel it’s up to a decent enough level of polish to introduce it more formally.

    Behold Rissole

    Picture of Rissole UI.

    Rissole is a simple RSS feed reader, intended to let you read articles in a distraction-free way, and to keep them all for posterity. It also sports an extremely responsive full-text search over all those articles, even on huge data sets. It is built as a flatpak; you can ATM download it from CI to try it, while it makes its way to Flathub and GNOME Circle (?). Your contributions are welcome!

    So, let’s break down how it works, and what TinySPARQL brings to the table.

    Structuring the data

    The first thing a database needs is a definition of how the data is structured. TinySPARQL is strongly based on RDF principles, and depends on RDF Schema for these data definitions. You have the internet at your fingertips to read more about these, but the gist is that it allows the declaration of data in an object-oriented manner, with classes and inheritance:

    mfo:FeedMessage a rdfs:Class ;
        rdfs:subClassOf mfo:FeedElement .
    
    mfo:Enclosure a rdfs:Class ;
        rdfs:subClassOf mfo:FeedElement .
    

    One can declare properties on these classes:

    mfo:downloadedTime a rdf:Property ;
        nrl:maxCardinality 1 ;
        rdfs:domain mfo:FeedMessage ;
        rdfs:range xsd:dateTime .
    

    And make some of these properties point to other entities of specific (sub)types; this is the key that makes TinySPARQL a graph database:

    mfo:enclosureList a rdf:Property ;
        rdfs:domain mfo:FeedMessage ;
        rdfs:range mfo:Enclosure .
    

    In practical terms, a database needs some guidance on what data access patterns are most expected. Since this is an RSS reader, sorting things by date will be prominent, and we want full-text search on content. So we declare it on these properties:

    nie:plainTextContent a rdf:Property ;
        nrl:maxCardinality 1 ;
        rdfs:domain nie:InformationElement ;
        rdfs:range xsd:string ;
        nrl:fulltextIndexed true .
    
    nie:contentLastModified a rdf:Property ;
        nrl:maxCardinality 1 ;
        nrl:indexed true ;
        rdfs:subPropertyOf nie:informationElementDate ;
        rdfs:domain nie:InformationElement ;
        rdfs:range xsd:dateTime .
    

    The full set of definitions declares what the database is permitted to contain, the class hierarchy and their properties, and how resources of a specific class interrelate with other classes… In essence, how the information graph is allowed to grow. This is its ontology (semi-literally, its view of the world, whoooah duude). You can read in more detail how these declarations work in the TinySPARQL documentation.

    This information is kept in files separated from code, built in as a GResource in the application binary, and used during initialization to create a database at a location in control of the application:

        let mut store_path = glib::user_data_dir();
        store_path.push("rissole");
        store_path.push("db");
    
        obj.imp()
            .connection
            .set(tsparql::SparqlConnection::new(
                tsparql::SparqlConnectionFlags::NONE,
                Some(&gio::File::for_path(store_path)),
                Some(&gio::File::for_uri(
                    "resource:///com/github/garnacho/Rissole/ontology",
                )),
                gio::Cancellable::NONE,
            )?)
            .unwrap();
    

    So there’s a first advantage right here, compared to other libraries and approaches: the application only has to declare this ontology, with little (or no) further supporting code. Compare that to going through the design/normalization steps for your database schema and having to CREATE TABLE your way to it with SQLite.

    Handling structure updates

    If you are developing an application that needs to store a non-trivial amount of data, it often comes as a second thought how to deal with new data being necessary, stored data being no longer necessary, and other post-deployment data/schema migrations. Rarely do things come out exactly right on the first try.

    With few documented exceptions, TinySPARQL is able to handle these changes to the database structure by itself, applying the necessary changes to convert a pre-existing database into the new format declared by the application. This also happens at initialization time, from the application-provided ontology.

    But of course, besides the data structure, there might also be data content that needs some kind of conversion or migration; this is where an application might still need some supporting code. Even then, SPARQL offers the necessary syntax to convert data, from small to big, from minor to radical changes. With the CONSTRUCT query form, you can generate any RDF graph from any other RDF graph.

    For Rissole, I’ve gone with a subset of the Nepomuk ontology, which does contain much embedded knowledge about the best ways to lay out data in a graph database. As such I don’t expect major changes or gotchas in the data, but this remains a possibility for the future, e.g. if we were to move to another emerging ontology, or any other less radical data migrations that might crop up.

    So here’s the second advantage, compared to having to ALTER TABLE your way to new database schemas, handle data migration for each individual table, and ensure you will not paint yourself into a corner in the future.

    Querying data

    Now we have a database! We can write the queries that will feed the application UI. Of course, the language to write these in is SPARQL; there are plenty of resources about it on the internet, and TinySPARQL has its own tutorial in the documentation.

    One feature that sets TinySPARQL apart from other SPARQL engines in terms of developer experience is the support for parameterized values in SPARQL queries. Through a little bit of non-standard syntax and the TrackerSparqlStatement API, you can compile SPARQL queries into reusable statements, which can be executed with different arguments; they compile to an intermediate representation, resulting in faster execution when reused. Statements are also the way to go in terms of security, in order to avoid query injection situations. This is e.g. Rissole’s (simplified) search query:

    SELECT
        ?urn
        ?title
    {
        ?urn a mfo:FeedMessage ;
            nie:title ?title ;
            fts:match ~match .
    }
    

    This allows me to funnel GtkEntry content right into the ~match parameter without caring about character escaping or other validation. These queries may also be stored in a GResource, live as separate files in the project tree, and be loaded/compiled once early during application startup, so they are reusable for the rest of the application lifetime:

    fn load_statement(&self, query: &str) -> tsparql::SparqlStatement {
        let base_path = "/com/github/garnacho/Rissole/queries/";
    
        let stmt = self
            .imp()
            .connection
            .get()
            .unwrap()
            .load_statement_from_gresource(&(base_path.to_owned() + query), gio::Cancellable::NONE)
            .unwrap()
            .expect(&format!("Failed to load {}", query));
    
        stmt
    }
    
    ...
    
    // Pre-loading a statement
    obj.imp()
        .search_entries
        .set(obj.load_statement("search_entries.rq"))
        .unwrap();
    
    ...
    
    // Running a search
    pub fn search(&self, search_terms: &str) -> tsparql::SparqlCursor {
        let stmt = self.imp().search_entries.get().unwrap();
    
        stmt.bind_string("match", search_terms);
        stmt.execute(gio::Cancellable::NONE).unwrap()
    }
    

    This data is of course all introspectable with the gresource CLI tool, and I can run these queries from a file using the tinysparql query CLI command, either on the application database itself, or on a separate in-memory testing database created through e.g. tinysparql endpoint --ontology-path ./src/ontology --dbus-service=a.b.c.

    Here’s the third advantage for application development. Queries are 100% separate from code, introspectable, and able to be run standalone for testing, while the code remains highly semantic.

    Inserting and updating data

    When inserting data, we have two major pieces of API to help with the task, each with their own strengths:

    • TrackerSparqlStatement also works for SPARQL update queries.
    • TrackerResource offers more of a builder API to generate RDF data.

    These can be either executed standalone, or combined/accumulated in a TrackerBatch for transactional behavior. Batches do improve performance by clustering writes to the database, and database stability by making these changes either succeed or fail atomically (TinySPARQL is fully ACID).

    This interaction is the most application-dependent (concretely, retrieving the data to insert into the database), but here are some links to Rissole code for reference: using TrackerResource to store RSS feed data, and using TrackerSparqlStatement to delete RSS feeds.
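
    For illustration, the batch/resource pattern looks roughly like this from Python (a sketch; assuming the GObject-introspection bindings, whose namespace has varied across releases, mirror the Rust API used above):

    import gi
    gi.require_version("Tsparql", "3.0")  # exposed as "Tracker" 3.0 in older releases
    from gi.repository import Gio, Tsparql

    # Open (or create) the application's private database, as in the
    # Rust initialization snippet earlier.
    conn = Tsparql.SparqlConnection.new(
        Tsparql.SparqlConnectionFlags.NONE,
        Gio.File.new_for_path("db"),
        Gio.File.new_for_uri("resource:///com/example/ontology"),
        None)

    # Build RDF data with the TrackerResource builder API...
    resource = Tsparql.Resource.new(None)
    resource.set_uri("rdf:type", "mfo:FeedMessage")
    resource.set_string("nie:title", "Hello TinySPARQL")

    # ...and apply it atomically, clustering writes in a single batch.
    batch = conn.create_batch()
    batch.add_resource(None, resource)  # None = default graph
    batch.execute(None)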

    And here is the fourth advantage for your application: an async-friendly mechanism to efficiently manage large amounts of data, ready for use.

    Full-text search

    For some reason, there tends to be some magical thinking revolving around databases and how they make things fast. And the most damned pattern of all is typically at the heart of search UIs: substring matching. What feels wonderful during initial development on small datasets soon slows to a crawl on larger ones. See, an index is little more than a tree: you can look up exact items with relatively low big O, look up by prefix with a slightly higher one, and for anything else (substring, suffix) there is nothing to do but a linear search. Sure, the database engine will comply, however painstakingly.

    What makes full-text search fundamentally different? It is a specialized index that pre-tokenizes the text, so that each parsed word and term is represented individually and can be looked up independently (either prefix or exact matches). At the expense of a slightly higher insertion cost (i.e. the usually scarce operation), this provides response times measured in milliseconds when searching for terms (i.e. the usual operation), regardless of their position in the text, even on really large data sets. Of course this is a gross simplification (SQLite has extensive documentation about the details), but I have hopefully shed enough light on why full-text search can make things fast in a way a traditional index cannot.
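
    As a toy illustration of the difference (purely illustrative Python): tokenize once at insertion time, and term lookups become dictionary hits instead of a scan over every stored string.

    from collections import defaultdict

    index = defaultdict(set)            # term -> ids of matching documents
    docs = {1: "red black trees", 2: "black coffee", 3: "red wine"}

    # The "slightly higher insertion cost": tokenize each document once.
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add(doc_id)

    print(index["black"])               # {1, 2}, found without scanning the texts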

    I am largely parroting a SQLite feature here, and yes, this might also be available to you if using SQLite. But the fact that I’ve already taught you in this post how to use it in TinySPARQL (declaring nrl:fulltextIndexed on the searchable properties, using fts:match in queries to match on them) again contrasts quite a bit with rolling your own database creation code. So here’s another advantage.

    Backups and other bulk operations

    After you’ve got your data stored, is it enshrined? Are there forward plans to get the data back out of there? Is the backup strategy cp?

    TinySPARQL (and the SPARQL/RDF combo at its core) boldly says no. Data is fully introspectable, and the query language is powerful enough to extract even full data dumps in a single query, if you wish. This is for example available through the command line with tinysparql export and tinysparql import; the full database content can be serialized into any of the supported RDF formats, and can be either post-processed or snapshot into other SPARQL databases from there.

    A “small” detail I have not mentioned so far is the (optional) network transparency of TinySPARQL, since for the most part it is irrelevant to usecases like Rissole. Coming from web standards, network awareness is of course a big component of SPARQL. In TinySPARQL, creating an endpoint to publicly access a database is an explicit choice made through API, and so it is possible to access other endpoints either from a dedicated connection or by extending your local queries. Why do I bring this up here? I talked at Guadec 2023 about Emergence, a local-first oriented data synchronization mechanism between devices owned by the same user. Network transparency sits at the heart of this mechanism, which could make Rissole, or any other application that made use of it, able to synchronize data between devices.
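
    For instance, publishing an otherwise private connection as a D-Bus endpoint is a single explicit call (again a Python sketch, with the same binding-name caveat as the batch example above):

    import gi
    gi.require_version("Tsparql", "3.0")
    from gi.repository import Gio, Tsparql

    # Open the app's private database; until the endpoint is created below,
    # it is reachable only from within this process.
    conn = Tsparql.SparqlConnection.new(
        Tsparql.SparqlConnectionFlags.NONE,
        Gio.File.new_for_path("db"),
        Gio.File.new_for_uri("resource:///com/example/ontology"),
        None)

    # Explicitly publish the connection on the session bus.
    bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
    endpoint = Tsparql.EndpointDBus.new(conn, bus, None, None)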

    And this is the last advantage I’ll bring up today, a solid standards-based forward plan to the stored data.

    Closing note

    If you develop an application that does need to store data, future you might appreciate some forward thinking on how to handle a lifetime’s worth of it. More artisan solutions like SQLite or file-based storage might set you up quickly for other, more fun development and thus be a temptation, but they will likely decrease rapidly in performance unless you know very well what you are doing, and will certainly increase your project’s technical debt over time.

    TinySPARQL wraps all major advantages of SQLite with a versatile data model and query language strongly based on open standards. The degree of separation between the data model and the code makes both neater and more easily testable. And it’s got forward plans in terms of future data changes, backups, and migrations.

    As everything is always subject to improvement, there are some things that could make for a better developer experience:

    • Query and schema definition files could be linted/validated as a preprocess step when embedding in a GResource, just as we validate GtkBuilder files
    • TinySPARQL’s builtin web IDE started during last year’s GSoC should move forward, so we have an alternative to the CLI
    • There could be graphical ways to visualize and edit these schemas
    • Similar thing, but to visually browse a database content
    • I would not dislike if some of these were implemented in/tied to GNOME Builder
    • It would be nice to have a more direct way to funnel the results of a SPARQL query into UI. Sadly, the GListModel interface API mandates random access and does not play nice with cursor-alike APIs as it is common with databases. This at least excludes making TrackerSparqlCursor just implement GListModel.
    • A more streamlined website to teach and showcase these benefits; currently tinysparql.org points to the developer documentation (extensive, on the other hand, but it does not make a great landing page).

    Even though the developer experience could be more buttered up, there’s a solid core that is already a leap ahead of other, more artisan solutions in a few areas. I would also like to point out that Rissole is not the first instance here; there are also Polari and Health using TinySPARQL databases this way, and they are mostly up-to-date on these best practices. Rissole is just my shiny new excuse to talk about this in detail; other application developers might appreciate the resource, and I’d wish for it to become one of many, so Emergence finally has a worthwhile purpose.

    Last but not least, I would like to thank Kristi, Anisa, and all organizers at LAS for a great conference.

      Alley Chaggar: Demystifying The Codegen Phase Part 2

      news.movim.eu / PlanetGnome • 2 July • 4 minutes

    Intro

    Hello again, I’m here to update my findings and knowledge about Vala. Last blog, I talked about the codegen phase; as intricate as it is, I’m finding some very helpful information that I want to share.

    Looking at The Outputted C Code

    While doing the JSON module, I’m constantly looking at C code. Back and forth, back and forth; having more than one monitor is very helpful in times like these.

    At the beginning of GSoC I didn’t know much C, and that has definitely changed. I’m still not fluent in it, but I can finally read the code and understand it without too much brain power. For the JsonModule I’m creating, I first looked at how users can currently (de)serialize JSON. I’ve been scouting json-glib examples since then, and for now, I will be using json-glib. In the future, however, I’ll look at other ways in which we can have JSON more streamlined in Vala, whether that means growing away from json-glib or not.

    Using the command ‘valac -C yourfilename.vala’, you’ll be able to see the C code that Valac generates. If you were to look into it, you’d see a bunch of temporary variables and C functions. It can be a little overwhelming to see all this if you don’t know C.

    When writing JSON normally, with minimal customization and without the JsonModule’s support, you would write it like this:

    Json.Node node = Json.gobject_serialize (person);
    Json.Generator gen = new Json.Generator ();
    gen.set_root(node);
    string result = gen.to_data (null);
    print ("%s\n", result); 
    

    This code shows one way to serialize a GObject class using json-glib.
    The code below is a snippet of the C code that Valac outputs for this example. Again, to be able to see this, you have to use the -C flag when compiling your Vala code.

    static void
    _vala_main (void)
    {
    		Person* person = NULL;
    		Person* _tmp0_;
    		JsonNode* node = NULL;
    		JsonNode* _tmp1_;
    		JsonGenerator* gen = NULL;
    		JsonGenerator* _tmp2_;
    		gchar* _result_ = NULL;
    		gchar* _tmp3_;
    		_tmp0_ = person_new ();
    		person = _tmp0_;
    		person_set_name (person, "Alley");
    		person_set_age (person, 2);
    		_tmp1_ = json_gobject_serialize ((GObject*) person);
    		node = _tmp1_;
    		_tmp2_ = json_generator_new ();
    		gen = _tmp2_;
    		json_generator_set_root (gen, node);
    		_tmp3_ = json_generator_to_data (gen, NULL);
    		_result_ = _tmp3_;
    		g_print ("%s\n", _result_);
    		_g_free0 (_result_);
    		_g_object_unref0 (gen);
    		__vala_JsonNode_free0 (node);
    		_g_object_unref0 (person);
    }
    

    You can see many temporary variables denoted by names like _tmp0_, but you can also see JsonNode being used, Json’s generator being created and its root set, and even json_gobject_serialize being called. All of this was in our Vala code, and now it’s all in the C code, with the temporary variables holding intermediate values so it all compiles correctly as C.

    The JsonModule

    If you recall, the codegen is where Vala code meets the C code being written out. The steps I’m taking for the JsonModule are looking at examples of (de)serializing, then looking at how each example compiles to C. Since the whole purpose of my work is to define what the generated C should look like, I’m mainly going off of the _vala_main function when determining which C code my module should emit, but I’m also going off of the Vala code the user wrote.

    // serializing gobject classes
    	void generate_gclass_to_json (Class cl) {
    		cfile.add_include ("json-glib/json-glib.h");
    
    		var to_json_class = new CCodeFunction ("_json_%s_serialize_myclass".printf (get_ccode_lower_case_name (cl, null)), "void");
    		to_json_class.add_parameter (new CCodeParameter ("gobject", "GObject *"));
    		to_json_class.add_parameter (new CCodeParameter ("value", "GValue *"));
    		to_json_class.add_parameter (new CCodeParameter ("pspec", "GParamSpec *"));
    
    		//...
    
    		// Json.Node node = Json.gobject_serialize (person); - vala code
    		var Json_gobject_serialize = new CCodeFunctionCall (new CCodeIdentifier ("json_gobject_serialize"));
    		Json_gobject_serialize.add_argument (new CCodeIdentifier ("gobject"));
    		var node_decl_right = new CCodeVariableDeclarator ("node", Json_gobject_serialize);
    		var node_decl_left = new CCodeDeclaration ("JsonNode *");
    		node_decl_left.add_declarator (node_decl_right);
    
    		// Json.Generator gen = new Json.Generator (); - vala code
    		var json_gen_new = new CCodeFunctionCall (new CCodeIdentifier ("json_generator_new"));
    		var gen_decl_right = new CCodeVariableDeclarator ("generator", json_gen_new);
    		var gen_decl_left = new CCodeDeclaration ("JsonGenerator *");
    		gen_decl_left.add_declarator (gen_decl_right);
    
    		// gen.set_root(node); - vala code
    		var json_gen_set_root = new CCodeFunctionCall (new CCodeIdentifier ("json_generator_set_root"));
    		json_gen_set_root.add_argument (new CCodeIdentifier ("generator"));
    		json_gen_set_root.add_argument (new CCodeIdentifier ("node"));
    		//...
    

    The code snippet above is a work-in-progress method in the JsonModule that I created, called ‘generate_gclass_to_json’, to generate serialization for GObject classes. I’m creating a C code function and adding parameters to it, and I’m filling the body with the same serialization steps the example code performed in the first snippet. Instead of the function calls being created in _vala_main (by the user), they’ll live in their own function that the module creates automatically.

    static void _json_%s_serialize_myclass (GObject *gobject, GValue *value, GParamSpec *pspec)
    {
    	JsonNode *node = json_gobject_serialize (gobject);
    	JsonGenerator *generator = json_generator_new ();
    	json_generator_set_root (generator, node);
    	//...
    }
    

    Comparing the original Vala code with the generated C code, the output takes the shape of the Vala code, but written in C.