
      Marcus Lundblad: Maps and GNOME 47

      news.movim.eu / PlanetGnome · Yesterday - 13:48 · 3 minutes


    As we're now approaching mid-September and the autumn release of GNOME, a new release of Maps for GNOME 47 is approaching as well.

    Switch to Vector-based Map

    The biggest change since the last release is that the vector-based map is now used by default, and the old raster map has been retired. We wanted to move forward with things like clickable POIs directly in the map view, which in turn let us remove the old, tedious “What's here?” context menu that performed a reverse geocoding to get details about a place (a bit hit-and-miss with regards to how close the actual result is to where you point).
    Apart from this, other benefits we get (already mentioned in earlier posts) are localized names (when tagged in OpenStreetMap) and, finally, a proper dark mode with our new GNOME map style.

    light-47.png
    Light (default) theme variant of the map in 47

    dark-47.png
    Dark theme variant of the map in 47

    Redesigned Search Bar

    The “explore” button, which opens the menu for searching nearby POIs by category, has now been integrated into the search entry. This avoids a theming issue with the linked-button style, where the rounded corners on the “other side” disappear while the results popover is showing.
    searchbar-integrated.png
    Search bar with explore button

    This also looks a bit sleeker, I think…

    Improved Public Transit Routing

    Public transit routing now uses the Transitous project ( https://transitous.org ) to provide transit routing for new regions. As this is a crowd-sourced initiative, you can also help out by adding missing GTFS feeds to the project, and they should automatically become supported by Maps.
    For the time being, regions already covered by third-party APIs (such as Resrobot for Sweden, and OpenData.ch for Switzerland) will still use those providers to avoid possible regressions. It also gives us some leeway to improve MOTIS (the backend used by Transitous); the implementation in Maps also still lacks e.g. support for specifying via locations.
    transitous-prag.png
    Showing some travel itinerary options in Prague

    lund-hamburg.png
    Showing a sample of an itinerary from Lund, Sweden to Hamburg, Germany

    denver.png
    Showing a sample of an itinerary in Denver, Colorado

    Updated Highway Shield Localizations

    Thanks to the latest updates by the OSM Americana project ( https://github.com/osm-americana/openstreetmap-americana ) we now have some refinements in highway shield rendering.

    france-d-road.png
    French departmental routes (D-roads)

    turkey.png
    Turkish national D-roads

    Some changes had to be made to our internal shield-rendering library to support the new convenience shortcut for a “pill” shield shape, and to allow defining a hard-coded route reference (“ref”) directly in the shield definition rather than taking it from the tile data (we couldn't use the OSM Americana implementation directly, as ours is implemented using Cairo rendering).

    rochester-loop.png
    Rochester Inner Loop in Rochester, New York, using a fixed “LOOP” reference label

    And on that note, a funny bug I discovered during testing: we currently assume the “pointsUp” attribute in the shield definitions defaults to true when not explicitly set, while the OSM Americana code uses different defaults depending on the shape. Specifically, for the “fishhead” shape it should actually be false when not set. It seems a bit odd to assume different values depending on other JSON elements, but…
    fish-head-upsidedown.png
    Highway shields in New Zealand

    It was a bit funny that I discovered this bug while “browsing around” in New Zealand, considering the Northern Hemisphere-centric jokes about people “down under” walking upside-down 😀. But it actually also affects some highways in the US…
    I guess this should be fixed before the 47.0 release.
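    To illustrate the kind of shape-dependent defaulting involved, here is a hypothetical Python sketch; the shape names and the structure of the shield definitions are illustrative, not the actual OSM Americana schema or Maps' renderer:

```python
# Hypothetical sketch of shape-dependent defaulting for "pointsUp".
# Shape names and defaults are illustrative only.
SHAPE_POINTS_UP_DEFAULTS = {
    "fishhead": False,  # fishhead shields should point down unless stated otherwise
}

def resolve_points_up(shield_def):
    """Return the effective pointsUp value for a shield definition."""
    if "pointsUp" in shield_def:
        return shield_def["pointsUp"]
    shape = shield_def.get("shape")
    # Fall back to a per-shape default rather than assuming True everywhere.
    return SHAPE_POINTS_UP_DEFAULTS.get(shape, True)
```

The bug described above corresponds to always taking the `True` branch regardless of shape.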
    And that's that for this time!

      ml4711.blogspot.com/2024/09/maps-and-gnome-47.html


      Andy Wingo: conservative gc can be faster than precise gc

      news.movim.eu / PlanetGnome · Yesterday - 10:00 · 3 minutes

    Should your garbage collector be precise or conservative? The prevailing wisdom is that precise is always better: conservative GC can retain more objects than strictly necessary, making GC slow: it has to collect more frequently, and it has to trace a larger heap on each collection. However, the calculus is not as straightforward as most people think, and indeed there are some reasons to expect that conservative root-finding can result in faster systems.

    (I have made / relayed some of these arguments before but I feel like a dedicated article can make a contribution here.)

    problem precision

    Let us assume that by conservative GC we mean conservative root-finding, in which the collector assumes that any integer on the stack that happens to be a heap address indicates a reference to the object containing that address. The address doesn’t have to be at the start of the object. Assume that objects on the heap are traced precisely; contrast this with BDW-GC, which generally traces both the stack and the heap conservatively. Assume a collector that will pin referents of conservative roots, but in which objects not referred to by a conservative root can be moved, as in Conservative Immix or Whippet’s stack-conservative-mmc collector.
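    As a toy illustration of conservative root-finding (not Whippet's actual implementation): treat any stack word that falls inside an allocated object's address range as a potential reference, and pin that object. Interior pointers count, so the word need not point at the object's start.

```python
# Toy model of conservative root-finding: any stack word that lands inside
# an allocated object's address range pins that object.
import bisect

def find_conservative_roots(stack_words, objects):
    """objects: list of (start, size) address ranges, sorted by start."""
    starts = [start for start, _ in objects]
    pinned = set()
    for word in stack_words:
        i = bisect.bisect_right(starts, word) - 1
        if i >= 0:
            start, size = objects[i]
            if start <= word < start + size:
                pinned.add(start)  # word might be a pointer: keep object alive
    return pinned
```

A word like `42` that happens to fall inside an object's range would also pin it, which is exactly the imprecision conservative scanning accepts.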

    With that out of the way, let’s look at four reasons why conservative GC might be faster than precise GC.

    smaller lifetimes

    A compiler that does precise root-finding will typically output a side-table indicating which slots in a stack frame hold references to heap objects. These lifetimes aren’t always precise, in the sense that although they precisely enumerate heap references, those heap references might actually not be used in the continuation of the stack frame. When GC occurs, it might mark more objects as live than are actually live, which is the imputed disadvantage of conservative collectors.

    This is most obviously the case when you need to explicitly register roots with some kind of handle API: the handle will typically be kept live until the scope ends, but that might be an overapproximation of lifetime. A compiler that can assume conservative stack scanning may well exhibit more precision than it would if it needed to emit stack maps.
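    A toy sketch of the over-approximation a handle API introduces: a registered root stays live until its scope ends, whether or not the program still uses it. This is a hypothetical API, not any particular runtime's.

```python
# Toy handle API: a registered root stays in the root set for the whole
# scope, even if the program stops using it earlier -- an over-approximation
# of the reference's real lifetime.
class HandleScope:
    def __init__(self, root_set):
        self.root_set = root_set
        self.handles = []

    def __enter__(self):
        return self

    def register(self, obj):
        self.handles.append(obj)
        self.root_set.add(id(obj))  # live from registration...
        return obj

    def __exit__(self, *exc):
        # ...until the scope ends, regardless of last actual use.
        for obj in self.handles:
            self.root_set.discard(id(obj))
```

With conservative stack scanning, by contrast, an object stops being a root as soon as no live stack word still points into it.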

    no run-time overhead

    For generated code, stack maps are great. But if a compiler needs to call out to C++ or something, it needs to precisely track roots in a run-time data structure. This is overhead, and conservative collectors avoid it.

    smaller stack frames

    A compiler may partition spill space on a stack into a part that contains pointers to the heap and a part containing numbers or other unboxed data. This may lead to larger stack sizes than if you could just re-use a slot for two purposes, if the lifetimes don’t overlap. A similar concern applies for compilers that partition registers.

    no stack maps

    The need to emit stack maps is annoying for a compiler and makes binaries bigger. Of course it’s necessary for precise roots. But then there is additional overhead when tracing the stack: for each frame on the stack, you need to look up the stack map for the return continuation, which takes time. It may be faster to just test if words on the stack might be pointers to the heap.

    unconstrained compiler

    Having to make stack maps is a constraint imposed on the compiler. Maybe if you don’t need them, the compiler could do a better job, or you could use a different compiler entirely. This can result in better code.

    anecdotal evidence

    The Conservative Immix paper shows that conservative stack scanning can beat precise scanning in some cases. I have reproduced these results with parallel-stack-conservative-mmc compared to parallel-mmc. It’s small—maybe a percent—but it was a surprising result to me and I thought others might want to know.

    Also, Apple’s JavaScriptCore uses conservative stack scanning, and V8 is looking at switching to it. Funny, right?

    conclusion

    When it comes to designing a system with GC, don’t count out conservative stack scanning; the tradeoffs don’t obviously go one way or the other, and conservative scanning might be the right engineering choice for your system.

      wingolog.org/archives/2024/09/07/conservative-gc-can-be-faster-than-precise-gc


      Andy Wingo: on taking advantage of ragged stops

      news.movim.eu / PlanetGnome · 2 days ago - 12:15 · 4 minutes

    Many years ago I read one of those Cliff Click “here’s what I learned” articles in which he was giving advice about garbage collector design, and one of the recommendations was that at a GC pause, running mutator threads should cooperate with the collector by identifying roots from their own stacks. You can read a similar assertion in their VEE2005 paper, The Pauseless GC Algorithm, though this wasn’t the source of the information.

    One motivation for the idea was locality: a thread’s stack is already local to a thread. Then specifically in the context of a pauseless collector, you need to avoid races between the collector and the mutator for a thread’s stack, and having the thread visit its own stack neatly handles this problem.

    However, I am not so interested any more in (so-called) pauseless collectors; though I have not measured myself, I am convinced enough by the arguments in the Distilling the real costs of production garbage collectors paper, which finds that state of the art pause-minimizing collectors actually increase both average and p99 latency, relative to a well-engineered collector with a pause phase. So, the racing argument is not so relevant to me, because a pause is happening anyway.

    There was one more argument that I thought was interesting, which was that having threads visit their own stacks is a kind of cheap parallelism: the mutator threads are already there, so they might as well do some work; it could save time, if other threads haven’t seen the safepoint yet. Mutators exhibit a ragged stop, in the sense that there is no clean cutoff time at which all mutators stop simultaneously, only a time after which no more mutators are running.

    Visiting roots during a ragged stop introduces concurrency between the mutator and the collector, which is not exactly free; notably, it prevents objects marked while mutators are running from being evacuated. Still, it could be worth it in some cases.

    Or so I thought! Let’s try to look at the problem analytically. Consider that you have a system with N processors, a stop-the-world GC with N tracing threads, and M mutator threads. Let’s assume that we want to minimize GC latency, as defined by the time between GC is triggered and the time that mutators resume. There will be one triggering thread that causes GC to begin, and then M –1 remote threads that need to reach a safepoint before the GC pause can begin.

    The total amount of work that needs to be done during GC can be broken down into roots i , the time needed to visit roots for mutator i , and then graph , the time to trace the transitive closure of live objects. We want to know whether it’s better to perform roots i during the ragged stop or in the GC pause itself.

    Let’s first look to the case where M is 1 (just one mutator thread). If we visit roots before the pause, we have

    latency_ragged,1 = roots_0 + graph/N

    Which is to say, thread 0 triggers GC, visits its own roots, then enters the pause in which the whole graph is traced by all workers with maximum parallelism. (It may be that graph tracing doesn’t fully parallelize, for example if the graph has a long singly-linked list, but the parallelism will be maximal within the pause, as there are N workers tracing the graph.)

    If instead we visit roots within the pause, we have:

    latency_pause,1 = (roots_0 + graph)/N

    This is strictly better than the ragged-visit latency.

    If we have two threads, then we will need to add in some delay, corresponding to the time it takes for remote threads to reach a safepoint. Let’s assume that there is a maximum period (in terms of instructions) at which a mutator will check for safepoints. In that case the worst-case delay will be a constant, and we add it on to the latency. Let us also assume that there are more than two threads available. The marking-roots-during-the-pause case is easiest to analyze:

    latency_pause,2 = delay + (roots_0 + roots_1 + graph)/N

    In this case, a ragged visit could win: while the triggering thread is waiting for the remote thread to stop, it could perform roots_0, moving the work out of the pause, reducing pause time, and reducing latency, for free.

    However, we only have this win if the root-visiting time is smaller than the safepoint delay; otherwise we are just prolonging the pause. Thing is, you don’t know in general. If indeed the root-visiting time is short, relative to the delay, we can assume the roots elements of our equation are 0, and so the choice to mark during ragged stop doesn’t matter at all! If we assume instead that root-visiting time is long, then it is suboptimally parallelised: under-parallelised if we have more than M cores, oversubscribed if M is greater than N , and needlessly serializing before the pause while it waits for the last mutator to finish marking its roots. What’s worse, root-visiting actually slows down delay , because the oversubscribed threads compete with visitation for CPU time.
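    To make the trade-off concrete, here is a toy model based on my reading of the analysis above, with illustrative numbers; the formulas are a simplification, not the post's exact accounting:

```python
# Toy latency model: N tracing threads, two mutators, per-mutator
# root-visiting costs, and a fixed safepoint delay.
def latency_pause(delay, roots, graph, n):
    # All root visiting and graph tracing happen inside the pause.
    return delay + (sum(roots) + graph) / n

def latency_ragged(delay, roots, graph, n):
    # The triggering thread visits its own roots while waiting; that work
    # only hides under the safepoint delay if it is short enough.
    roots0, rest = roots[0], roots[1:]
    return max(delay, roots0) + (sum(rest) + graph) / n

# Short roots relative to delay: the ragged visit wins slightly.
a = latency_ragged(delay=10, roots=[2, 2], graph=100, n=4)
b = latency_pause(delay=10, roots=[2, 2], graph=100, n=4)

# Long roots: the ragged visit just prolongs the stop.
c = latency_ragged(delay=10, roots=[40, 2], graph=100, n=4)
d = latency_pause(delay=10, roots=[40, 2], graph=100, n=4)
```

With these numbers the ragged visit saves a sliver in the first case and loses badly in the second, matching the conclusion that the win only exists when root-visiting time is smaller than the delay.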

    So in summary, I plan to switch away from doing GC work during the ragged stop. It is complexity that doesn’t pay. Onwards and upwards!

      wingolog.org/archives/2024/09/06/on-taking-advantage-of-ragged-stops


      Arun Raghavan: GStreamer and WebRTC HTTP signalling

      news.movim.eu / PlanetGnome · 4 days ago - 16:35 · 4 minutes

    The WebRTC nerds among us will remember the first thing we learn about WebRTC, which is that it is a specification for peer-to-peer communication of media and data, but it does not specify how signalling is done.

    Or put more simply, if you want to call someone on the web, WebRTC tells you how you can transfer audio, video and data, but it leaves out the bit about how you make the call itself: how you locate the person you’re calling, let them know you’d like to call them, and the few remaining steps before you can see and talk to each other.

    WebRTC signalling

    While this allows services to provide their own mechanisms to manage how WebRTC calls work, the lack of a standard mechanism means that general-purpose applications need to individually integrate each service that they want to support. For example, GStreamer’s webrtcsrc and webrtcsink elements support various signalling protocols, including Janus Video Rooms, LiveKit, and Amazon Kinesis Video Streams.

    However, having a standard way for clients to do signalling would help developers focus on their application and worry less about interoperability with different services.

    Standardising Signalling

    With this motivation, the IETF WebRTC Ingest Signalling over HTTPS (WISH) working group has been working on two specifications:

    • WHIP (WebRTC-HTTP Ingestion Protocol)

    • WHEP (WebRTC-HTTP Egress Protocol)

    (author’s note: the puns really do write themselves :))

    As the names suggest, the specifications provide a way to perform signalling using HTTP. WHIP gives us a way to send media to a server, to ingest into a WebRTC call or live stream, for example.

    Conversely, WHEP gives us a way for a client to use HTTP signalling to consume a WebRTC stream – for example to create a simple web-based consumer of a WebRTC call, or tap into a live streaming pipeline.

    WHIP and WHEP

    With this view of the world, WHIP and WHEP can be used not only for calling applications, but also as an alternative way to ingest or play back live streams, with lower latency and a near-ubiquitous real-time communication API.
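    At the protocol level, WHIP signalling is a single HTTP exchange: the client POSTs its SDP offer to the endpoint as application/sdp, and the server replies with its SDP answer (a 201 Created whose Location header identifies the session resource for later teardown). A minimal Python sketch of building such a request; the endpoint URL and SDP body are placeholders:

```python
# Sketch of the core WHIP exchange: POST the SDP offer as application/sdp.
# Endpoint URL and SDP content are placeholders, not a real service.
import urllib.request

def build_whip_request(endpoint, sdp_offer):
    return urllib.request.Request(
        endpoint,
        data=sdp_offer.encode(),           # the client's SDP offer
        headers={"Content-Type": "application/sdp"},
        method="POST",
    )

req = build_whip_request("https://my.webrtc/whip/room1", "v=0\r\n...")
```

WHEP mirrors this shape in the other direction: the client POSTs an offer to the WHEP endpoint and receives an answer describing the stream it will consume.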

    In fact, several services already support this including Dolby Millicast , LiveKit and Cloudflare Stream .

    WHIP and WHEP with GStreamer

    We know GStreamer already provides developers two ways to work with WebRTC streams:

    • webrtcbin: provides a low-level API, akin to the PeerConnection API that browser-based users of WebRTC will be familiar with

    • webrtcsrc and webrtcsink: provide high-level elements that can respectively produce/consume media from/to a WebRTC endpoint

    At Asymptotic, my colleagues Tarun and Sanchayan have been using these building blocks to implement GStreamer elements for both the WHIP and WHEP specifications. You can find these in the GStreamer Rust plugins repository.

    Our initial implementations were based on webrtcbin , but have since been moved over to the higher-level APIs to reuse common functionality (such as automatic encoding/decoding and congestion control). Tarun covered our work in a talk at last year’s GStreamer Conference .

    Today, we have 4 elements implementing WHIP and WHEP.

    Clients

    • whipclientsink: This is a webrtcsink-based implementation of a WHIP client, with which you can send media to a WHIP server. For example, streaming your camera to a WHIP server is as simple as:
    gst-launch-1.0 -e \
      v4l2src ! video/x-raw ! queue ! \
      whipclientsink signaller::whip-endpoint="https://my.webrtc/whip/room1"
    • whepclientsrc: This is work in progress and allows us to build player applications that connect to a WHEP server and consume media from it. The goal is to make playing a WHEP stream as simple as:
    gst-launch-1.0 -e \
      whepclientsrc signaller::whep-endpoint="https://my.webrtc/whep/room1" ! \
      decodebin ! autovideosink

    The client elements fit quite neatly into how we might imagine GStreamer-based clients could work. You could stream arbitrary stored or live media to a WHIP server, and play back any media a WHEP server provides. Both pipelines implicitly benefit from GStreamer’s ability to use hardware-acceleration capabilities of the platform they are running on.

    GStreamer WHIP/WHEP clients

    Servers

    • whipserversrc: Allows us to create a WHIP server to which clients can connect and provide media; each incoming stream is exposed as GStreamer pads that can be arbitrarily routed and combined as required. We have an example server that can play all the streams being sent to it.

    • whepserversink: Finally, we have ongoing work to publish arbitrary streams over WHEP for web-based clients to consume.

    The two server elements open up a number of interesting possibilities. We can ingest arbitrary media with WHIP, and then decode and process, or forward it, depending on what the application requires. We expect that the server API will grow over time, based on the different kinds of use-cases we wish to support.

    GStreamer WHIP/WHEP server

    This is all pretty exciting, as we have all the pieces to create flexible pipelines for routing media between WebRTC-based endpoints without having to worry about service-specific signalling.

    If you’re looking for help realising WHIP/WHEP based endpoints, or other media streaming pipelines, don’t hesitate to reach out to us !

      arunraghavan.net/2024/09/gstreamer-and-webrtc-http-signalling/


      Jakub Steiner: Klatovy FPV

      news.movim.eu / PlanetGnome · 5 days ago - 22:00

    They say you can’t forget how to ride a bicycle. Well, I don’t think that applies to FPV racing; I haven’t touched the sticks in a year. But that doesn’t stop me from hanging out with old buddies at the longest-running event in Czechia.

    Klatovy poster as Pixelart (CRT emulation)

      blog.jimmac.eu/2024/klatovy/


      Felipe Borges: Looking for more internship project ideas for Outreachy (December-March cohort)

      news.movim.eu / PlanetGnome · 5 days ago - 08:33

    GNOME is interested in participating in the Outreachy December-March cohort, and while we already have a few great projects, we are looking for experienced mentors with a couple more project ideas. Hurry up, we have until September 11 to conclude our list of ideas.

    Please, submit project ideas on https://gitlab.gnome.org/Teams/internship/project-ideas

    Feel free to message us on #internships:gnome.org (Matrix) if you have any questions.

      feborg.es/looking-for-more-internship-project-ideas-for-outreachy-december-march-cohort/


      This Week in GNOME: #163 Public Transit

      news.movim.eu / PlanetGnome · Friday, 30 August - 00:00 · 6 minutes

    Update on what happened across the GNOME project in the week from August 23 to August 30.

    GNOME Core Apps and Libraries

    Maps

    Maps gives you quick access to maps all across the world.

    mlundblad says

    Maps now supports Transitous ( https://transitous.org/ ), using a crowd-sourcing approach backed by the MOTIS search engine for public transit routing (for the time being, only for areas that aren’t already supported by existing plugins).

    GLib

    The low-level core library that forms the basis for projects such as GTK and GNOME.

    Philip Withnall reports

    Thanks to work by Evan Welsh, GLib 2.83.0 (the next unstable release, due in a few months) will have support for sync/async/finish introspection annotations when built and run with a suitably new version of gobject-introspection: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/3746

    Philip Withnall says

    Thanks to Luca Bacci for putting in time to improve debuggability of GLib on Windows CI machines, this should help with keeping GLib running on Windows and Windows-like platforms in the future (more help always needed if anyone is interested)

    GNOME Circle Apps and Libraries

    Brage Fuglseth announces

    I’ve informally documented the GNOME Circle review procedure. Existing conventions are laid out and briefly explained, and the ways in which we keep member projects in check are formalized through the concepts of control reviews and group notices. My goal with this is to increase the transparency and consistency of how the Circle Committee operates.

    If you’re already a member of GNOME Circle , a developer considering applying, or just a curious bystander, and have questions about this, don’t hesitate to reach out.

    Tuba

    Browse the Fediverse.

    Brage Fuglseth reports

    This week Tuba was accepted into GNOME Circle. Tuba lets you explore the federated social web. With its extensive support for popular Fediverse platforms like Mastodon, GoToSocial, Akkoma, and more, it makes it easy to stay connected to your favorite communities, family and friends. Congratulations!

    Third Party Projects

    Ronnie Nissan reports

    Unix file permissions can be hard to understand, especially for new Linux users. That is why I wrote Concessio, an app to help users understand and convert between said permissions. You can get Concessio from Flathub.
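    For a sense of the conversion involved, here is a hypothetical sketch (not Concessio's actual code) of mapping numeric Unix permissions to the symbolic rwx form:

```python
# Convert numeric (octal) Unix permissions to the symbolic rwx form,
# e.g. 0o755 -> "rwxr-xr-x". Illustrative sketch only.
def octal_to_symbolic(mode):
    bits = "rwx"
    out = []
    for shift in (6, 3, 0):  # owner, group, others
        triplet = (mode >> shift) & 0b111
        out.append("".join(
            bits[i] if triplet & (4 >> i) else "-" for i in range(3)))
    return "".join(out)
```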

    Alain says

    The new version of Planify 4.11.0 is here, packed with features and improvements that will make your task management even more efficient and personalized!

    What’s New in Planify 4.11.0

    1. Collapsible Sections: You can now collapse sections for a cleaner and more focused view of your priority tasks.

    2. Task Counter per Section: Easily see the number of tasks in each section with the new task counter.

    3. Pinned Tasks at the Top: Pinned tasks are now displayed at the top of each project, keeping them always in sight.

    4. Overdue Tasks Indicator: The “Today” button now shows an indicator for overdue tasks, helping you stay on top of what’s important.

    5. New Quick Add Keyboard Shortcuts: We’ve added new shortcuts to speed up your workflow:

    - @ to add tags.
    - # to select a project.
    - ! to create a reminder.
    

    6. Completed Tasks View: Check your completed tasks sorted by date and filter them by sections as needed.

    7. Task History Log: Access the new task history log, where you can see when a task was completed or moved between projects or sections.

    8. Improved Drag and Drop: We’ve fixed several bugs with drag and drop, ensuring a smoother experience.

    9. Quick Find Task Detail: Now, when selecting a task in Quick Find, the task detail will automatically open.

    10. Markdown Preference: You can now enable or disable the use of Markdown in task details, based on your preference.

    11. Improved Keyboard Navigation: Keyboard navigation has been optimized to make your experience faster and easier.

    12. Create Sections in Quick Add: If a section doesn’t exist, you can now create it directly from Quick Add.

    13. Attachment Number Indicator: We’ve added a numerical indicator to show the number of attachments on each task.

    14. Preference to Keep Task Detail Pane Open: We’ve added a preference that allows you to keep the task detail pane always open, making navigation faster.

    15. Quicker Recurring Tasks: The option to repeat a task has been moved to the date and time selection widget, making the process of creating recurring tasks faster.

    16. Improved Nextcloud Synchronization: We fixed several bugs related to task synchronization with Nextcloud.

    17. Design and Optimization Improvements: Planify is now faster and consumes fewer resources, thanks to various design and optimization improvements.

    A Heartfelt Thank You

    We would like to extend our deepest gratitude to everyone who supports the development of Planify by contributing financially. Your generosity helps us continue to improve and expand Planify, making it a better tool for everyone. Thank you for being a vital part of our journey!

    Update now and enjoy an even more powerful and tailored task management experience. We’re continuously working to bring you the best in every release!

    https://flathub.org/apps/io.github.alainm23.planify

    Emmanuele Bassi announces

    After a year, there’s a new stable release of JSON-GLib, the JSON parsing and generation library that tries to be well-integrated with GLib data types and best practices. Version 1.10.0 introduces a new, extensive conformance test suite to ensure that the parsing code is capable of handling both valid and invalid JSON. Additionally, JsonParser has a new “strict” option, for when you need to ensure that the JSON data is strictly conforming to the format specification. The parsing code has been simplified and improved, and passes both static analysis and sanitisers. Finally, there are lots of documentation fixes across the API and manual pages of the command line utilities.

    Blueprint

    A markup language for app developers to create GTK user interfaces.

    James Westman announces

    Blueprint 0.14.0 is here! This release includes a bunch of new features, including several from new contributors, but the main highlights are:

    • Greatly improved decompiler (.ui to .blp) support, including a new CLI command for this purpose
    • Syntax support for string arrays and multi-value accessibility relations
    • QoL improvements to the CLI output and language server
    • A ton of bugfixes

    Events

    Daniel Galleguillos Cruz reports

    GNOME Latam 2024 - Universidad de Antioquia, October 25th and 26th. Medellín, Colombia. A day to celebrate and expand the GNOME community in Latin America. Come and share experiences in the creation and use of GNOME technologies in our region.

    What is GNOME Latam 2024? GNOME Latam 2024 is more than just an event; it’s a movement that brings together developers, users, educators, and free software enthusiasts to share knowledge, experiences, and collaboratively build a more inclusive technological future. Through a series of in-person and online activities, we aim to spread the latest updates within the GNOME community, encourage active participation, and create a space where everyone can contribute, regardless of their experience level or prior knowledge.

    Our mission is clear: to drive the adoption and development of the GNOME environment in Latin America, promoting the values of free software such as freedom, collaboration, and transparency. We believe that access to technology should be a universal right and that everyone, from experts to beginners, has something valuable to contribute.

    Note: The talks will be conducted both in-person and online.

    https://events.gnome.org/event/249/

    This event will be held in Spanish and Portuguese.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Georges Basile Stavracas Neto: Fancy Titlebars

      news.movim.eu / PlanetGnome · Wednesday, 28 August - 16:58

    As of today , Mutter will style legacy titlebars (i.e. of X11 / Xwayland apps that don’t use client-side decorations) using Adwaita on GNOME.

    Picture of VLC and Accerciser with a fancier, Adwaita-styled titlebar

    Shadows match the Adwaita style as well, including shadows of unfocused windows. These titlebars continue to follow the system dark and light mode, even when apps don’t.

    Should make using legacy apps a little less unpleasant 🙂

      feaneron.com/2024/08/28/fancy-titlebars/


      Divyansh Jain: TinySPARQL GSoC Final Report

      news.movim.eu / PlanetGnome · Sunday, 25 August - 22:15 · 5 minutes

    We have finally reached the final week of GSoC. It has been an amazing journey! Let’s summarize what was done, the current state of the project and what’s next.

    Introduction

    This summer, I had the opportunity to work as a student developer under the Google Summer of Code 2024 program with the GNOME Community. I focused on creating a web-based Integrated Development Environment (IDE) specifically designed for writing and executing SPARQL queries within TinySPARQL (formerly Tracker).

    This user-friendly interface empowers developers by allowing them to compose and edit multiline SPARQL queries directly in a code editor, eliminating the need for the traditional terminal approach. Once a query is written, it can be easily executed via the HTTP SPARQL endpoint, and the results will be displayed in a visually appealing format, enhancing readability and user experience.
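    Assuming the endpoint follows the standard SPARQL 1.1 Protocol (query passed as a URL parameter, JSON results negotiated via the Accept header), a client request might be built like this; the endpoint URL and query are placeholders, not TinySPARQL's documented interface:

```python
# Hypothetical client for an HTTP SPARQL endpoint, assuming SPARQL 1.1
# Protocol conventions. Endpoint URL and query are placeholders.
import urllib.parse
import urllib.request

def build_sparql_request(endpoint, query):
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    return urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})

req = build_sparql_request(
    "http://localhost:8080/sparql",
    "SELECT ?s WHERE { ?s a rdfs:Resource } LIMIT 5")
```

The web IDE performs essentially this exchange from the browser, then renders the JSON result bindings as a table.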

    By lowering the barrier to entry for newcomers, boosting developer productivity with visual editing, and fostering collaboration through easier query sharing, this web IDE aims to significantly improve the experience for those using libtracker-sparql to interact with RDF databases.

    I would like to express my sincere gratitude to my mentors, Carlos and Sam , for their guidance and support throughout the internship period. Their expertise was invaluable in helping me navigate the project and gain a deeper understanding of the subject matter. I would also like to thank my co-mentee Rachel , for her excellent collaboration and contributions to making this project a reality and fostering a fast-paced development environment.

    I’m excited to announce that as the internship concludes, we have a functional web IDE that enables users to run SPARQL queries and view the results directly in their web browser. Here is the working demo of the web IDE that was developed from scratch in this GSoC Project.

    Demo of the TinySPARQL web IDE

    What was done

    This project was divided into two primary components: the backend C code, which enabled the web IDE to be served and run from the command line, and the frontend JavaScript code, which enhanced the web IDE’s visual appeal and added all user-facing functionalities. I primarily focused on the backend C side of the project, while Rachel worked on the frontend. Therefore, this blog post will delve into the backend aspects of the project. To learn more about the frontend development, please check out Rachel’s blog.

    My work can be divided into three major phases:

    • Pre-Development Phase
      • During the pre-development phase, I focused on familiarizing myself with the existing codebase and preparing it for easier development. This involved removing support for older versions of libraries, such as Libsoup.
      • TinySPARQL previously supported both Libsoup 2 and Libsoup 3 libraries, but these versions had different function names and macros.
      • This compatibility requirement could significantly impact development time. To streamline the process, we decided to drop support for Libsoup 2.
      • The following merge requests document the work done in this phase:
    • Setting Up the Basic Web IDE
      • In this phase, I extended the HTTP endpoint exposed by the tinysparql endpoint command to also serve the web IDE. The goal was to enable the endpoint to serve HTML, CSS, and JavaScript files, in addition to RDF data. This was a crucial step, as frontend development could only begin once the basic web IDE was ready.
      • During this phase, the HTTP module became more complex, so we added debugging functionality to aid in diagnosing errors. By running with TRACKER_DEBUG=http set, one can now view logs of all GET and POST requests, providing valuable insight into the HTTP module’s behavior.
      • The following merge requests document the work done in this phase:
    • Separating the Web IDE
      • The web IDE added significant size (around 800KB-1MB) to the libtracker-sparql library. Since not all users might need the web IDE functionality, we decided to separate it from libtracker-sparql. This separation improves efficiency for users who won’t be using the web IDE.
      • To achieve this isolation, we implemented a dedicated subcommand tinysparql webide for the web IDE, allowing it to run independently from the SPARQL endpoint.
      • Here’s a breakdown of the process:
        1. Isolating HTTP Code: I started by extracting the HTTP code from libtracker-sparql into a new static library named libtracker-http. This library contains the TrackerHttpServer abstraction over the Libsoup server, which can be reused in the tinysparql webide subcommand.
        2. Creating a Subcommand: Following the isolation of the web IDE into its own library and the removal of the relevant gresources from libtracker-sparql, we were finally able to create a dedicated subcommand for the web IDE. As a result, the size of libtinysparql.so.0.800.0 has been reduced by approximately 700KB.
      • The following merge requests document the work done in this phase:
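
    The phases above boil down to one routing idea: the same HTTP server answers SPARQL queries under one path and serves the web IDE’s static assets everywhere else, with request logging gated by an environment variable in the style of TRACKER_DEBUG=http. Here is a rough sketch of that idea in Python rather than the project’s C and Libsoup (every name, path, and response body below is illustrative, not TinySPARQL’s real API):

```python
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Emulate TRACKER_DEBUG=http-style gating with an environment variable check.
DEBUG_HTTP = "http" in os.environ.get("TRACKER_DEBUG", "")

# Stand-in for the gresource-bundled web IDE assets.
STATIC_ASSETS = {
    "/": ("text/html", b"<html><body>web IDE placeholder</body></html>"),
}

class EndpointHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if DEBUG_HTTP:
            print(f"GET {self.path}")  # debug log, only when enabled
        if self.path.startswith("/sparql"):
            # Queries would be executed here; return a placeholder result set.
            ctype, body = "application/sparql-results+json", b'{"results": []}'
        elif self.path in STATIC_ASSETS:
            ctype, body = STATIC_ASSETS[self.path]
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence the default per-request logging

# Start the server on a free port and exercise both routes.
server = HTTPServer(("127.0.0.1", 0), EndpointHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
ide_page = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
query_type = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/sparql?query=SELECT").headers["Content-Type"]
server.shutdown()
```

The real implementation does the equivalent with a TrackerHttpServer wrapper around Libsoup, which is what made extracting it into libtracker-http and reusing it from the tinysparql webide subcommand possible.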

    Final Result

    This is the web IDE we developed during the internship.

    Screenshots: the TinySPARQL web IDE, a successfully executed SPARQL query, and error handling

    Future Work

    Despite having a functional web IDE and completing many of the tasks outlined in the proposal (even exceeding the original scope due to the collaborative efforts of two developers), there are still areas for improvement.

    I plan to continue working on the web IDE in the future, focusing on the following enhancements:

    • Multi-Endpoint Support: Implement a mechanism for querying different SPARQL endpoints. This could involve adding a text box input to the frontend for dynamically entering endpoint URLs or providing a connection string option when creating the web IDE instance from the command line.
    • Unified HTTP Handling: Implement a consistent HTTP handler for all cases, allowing TrackerEndpointHttp to handle requests both inside and outside the /sparql path.
    • SPARQL Extraction: Extract the SPARQL query from POST requests in TrackerEndpointHttp or pass the raw POST data in the ::request signal, enabling TrackerEndpointHttp to determine if it contains a SPARQL query.
    • Avahi Configuration: Move the Avahi code for announcing server availability, or make TrackerEndpointHttp responsible for managing the content and type of the broadcast data.
    • CORS Configuration: Make CORS settings configurable at the API level, allowing for more granular control and avoiding the default enforcement of the * wildcard.
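
    To illustrate the CORS point with a hypothetical helper (not the real TinySPARQL API): making the allowed origin a parameter, rather than always sending the * wildcard, could look like this:

```python
def cors_headers(allowed_origin: str = "*") -> dict:
    """Build CORS response headers. The caller chooses the origin policy
    instead of the server unconditionally enforcing the '*' wildcard."""
    return {
        "Access-Control-Allow-Origin": allowed_origin,
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type",
    }

# Default (wildcard) versus a locked-down configuration:
print(cors_headers()["Access-Control-Allow-Origin"])  # *
print(cors_headers("https://ide.example.org")["Access-Control-Allow-Origin"])
```

Exposed at the API level, a parameter like this would let an endpoint restrict cross-origin access to a known web IDE host instead of any origin.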

    GUADEC Experience

    One of the highlights of my GSoC journey was the opportunity to present my project at GUADEC, the annual GNOME conference. It was an incredible experience to share my work with a diverse audience of developers and enthusiasts. Be sure to check out our presentation on the TinySPARQL Web IDE, delivered by Rachel and me at GUADEC.

    Final Remarks

    Thank you for taking the time to read this. Your support means a great deal to me. This internship was a valuable learning experience, as it was my first exposure to professional-level C code and working with numerous libraries solely based on official documentation. I am now more confident in my skills than ever. I gained a deeper understanding of the benefits of collaboration and how it can significantly accelerate development while maintaining high code quality.