
      Allan Day: GNOME maintainers: here’s how to keep your issue tracker in good shape

      news.movim.eu / PlanetGnome · 2 days ago - 15:29 · 2 minutes

    One of the goals of the new GNOME project handbook is to provide effective guidelines for contributors. Most of the guidelines are based on recommendations that GNOME already had, which were then improved and updated. These improvements were based on input from others in the project, as well as by drawing on recommendations from elsewhere.

    The best example of this effort was around issue management. Before the handbook, GNOME’s issue management guidelines were seriously out of date, and were incomplete in a number of areas. Now we have shiny new issue management guidelines which are full of good advice and wisdom!

    The state of our issue trackers matters. An issue tracker with thousands of open issues is intimidating to a new contributor. Likewise, lots of issues without a clear status or resolution makes it difficult for potential contributors to know what to do. My hope is that, with effective issue management guidelines, GNOME can improve the overall state of its issue trackers.

    So what magic sauce does the handbook recommend to turn an out of control and burdensome issue tracker into a source of calm and delight, I hear you ask? The formula is fairly simple:

    • Review all incoming issues, and regularly conduct reviews of old issues, in order to weed out reports which are ambiguous, obsolete, duplicates, and so on
    • Close issues which haven’t seen activity in over a year
    • Apply the “needs design” and “needs info” labels as needed
    • Close issues that have been labelled “needs info” for 6 weeks
    • Issues labelled “needs design” get closed after 1 year of inactivity, like any other
    • Recruit contributors to help with issue management

    To some readers this is probably controversial advice, and likely conflicts with their existing practice. However, there’s nothing new about these issue management procedures. The current incarnation has been in place since 2009, and some aspects of them are even older. Also, personally speaking, I’m of the view that effective issue management requires taking a strong line (being strong doesn’t mean being impolite, I should add – quite the opposite). From a project perspective, it is more important to keep the issue tracker focused than it is to maintain a database of every single tiny flaw in its software.

    The guidelines definitely need some more work. There will undoubtedly be some cases where an issue needs to be kept open despite it being untouched for a year, for example, and we should figure out how to reflect that in the guidelines. I also feel that the existing guidelines could be simplified, to make them easier to read and consume.

    I’d be really interested to hear what changes people think are necessary. It is important for the guidelines to be something that maintainers feel that they can realistically implement. The guidelines are not set in stone.

    That said, it would also be awesome if more maintainers were to put the current issue management guidelines into practice in their modules. I do think that they represent a good way to get control of an issue tracker, and this could be a really powerful way for us to make GNOME more approachable to new contributors.


      This post is public

      blogs.gnome.org/aday/2024/05/17/gnome-maintainers-heres-how-to-keep-your-issue-tracker-in-good-shape/


      Andy Wingo: on hoot, on boot

      news.movim.eu / PlanetGnome · 3 days ago - 20:01 · 5 minutes

    I realized recently that I haven’t been writing much about the Hoot Scheme-to-WebAssembly compiler. Upon reflection, I have been too conscious of its limitations to give it verbal tribute, preferring to spend each marginal hour fixing bugs and filling in features rather than publicising progress.

    In the last month or so, though, Hoot has gotten to a point that pleases me. Not to the point where I would say “accept no substitutes” by any means, but good already for some things, and worth writing about.

    So let’s start today by talking about bootie. Boot, I mean! The boot, the boot, the boot of Hoot.

    hoot boot: temporal tunnel

    The first axis of boot is time. In the beginning, there was nary a toot, and now, through boot, there is Hoot.

    The first boot of Hoot was on paper. Christine Lemmer-Webber had asked me, ages ago, what I thought Guile should do about the web. After thinking a bit, I concluded that it would be best to avoid compromises when building an in-browser Guile: if you have to pollute Guile to match what JavaScript offers, you might as well program in JavaScript. JS is cute of course, but Guile is a bit different in some interesting ways, the most important of which is control: delimited continuations, multiple values, tail calls, dynamic binding, threads, and all that. If Guile’s web bootie doesn’t pack all the funk in its trunk, probably it’s just junk.

    So I wrote up a plan, something to which I attributed the name tailification. In retrospect, this is simply a specific flavor of a continuation-passing-style (CPS) transmutation, late in the compiler pipeline. I’ll elocute more in a future dispatch. I did end up writing the tailification pass back then; I could have continued to target JS, but it was sufficiently annoying and I didn’t prosecute. It sat around unused for a few years, until Christine’s irresistible charisma managed to conjure some resources for Hoot.

    In the meantime, the GC extension for WebAssembly shipped (woot woot!), and to boot Hoot, I filled in the missing piece: a backend for Guile’s compiler that tailified and then translated primitive operations to snippets of WebAssembly.

    It was, well, hirsute, but cute and it did compute, so we continued to boot. From this root we grew a small run-time library, written in raw WebAssembly, used for slow-paths for the various primitive operations that are part of Guile’s compiler back-end. We filled out Guile primcalls, in minute commits, growing the WebAssembly runtime library and toolchain as we went.

    Eventually we started constituting facilities defined in terms of those primitives, via a Scheme prelude that was prepended to all programs, within a nested lexical environment. It was never our intention though to drown the user’s programs in a sea of predefined bindings, as if the ultimate program were but a vestigial inhabitant of the lexical lake—don’t dilute the newt!, we would often say [ed: we did not] — so eventually when the prelude became unmanageable, we finally figured out how to do whole-program compilation of a set of modules.

    Then followed a long month in which I would uproot the loot from the boot: take each binding from the prelude and reattribute it into an appropriate module. User code could import all the modules that suit, as long as they were known to Hoot, but no others; it was not until we added the ability for users to programmatically constitute an environment from their modules that Hoot became a language implementation of any repute.

    Which brings us to the work of the last month, about which I cannot be mute. When you have existing Guile code that you want to distribute via the web, Hoot required you to transmute its module definitions into the more precise R6RS syntax. Precise, meaning that R6RS modules are static, in a way that Guile modules, at least in absolute terms, are not: Guile programs can use first-class accessors on the module system to pull out bindings. This is yet another example of what I impute as the original sin of 1990s language development, that modules are just mutable hash maps. You see it in Python, for example: because you don’t know for sure to what values global names are bound, it is easy for any discussion of what a particular piece of code means to end in dispute.

    The question is, though, are the semantics of name binding in a language fixed and absolute? Once your language is booted, are its aspects definitively attributed? I think some perfection, in the sense of becoming more perfect or more like the thing you should be, is something to salute. Anyway, in Guile it would be coherent with Scheme’s lexical binding heritage to restitute some certainty as to the meanings of names, at least in a default compilation mode. Lexical binding is, after all, the foundation of the Macro Writer’s Statute of Rights. Of course if you are making a build for development purposes, not to distribute, then you might prefer a build that marks all bindings as dynamic. Otherwise I think it’s reasonable to require the user to explicitly indicate which definitions are denotations, and which constitute locations.

    Hoot therefore now includes an implementation of the static semantics of Guile’s define-module: it can load Guile modules directly, and as a tribute, it also has an implementation of the ambient (guile) module that constitutes the lexical soup of modules that aren’t #:pure. (I agree, it would be better if all modules were explicit about the language they are written in—their imported bindings and so on—but there is an existing corpus to accommodate; the point is moot.)

    The astute reader (whom I salute!) will note that we have a full boot: Hoot is a Guile. Not an implementation to substitute the original, but more of an alternate route to the same destination. So, probably we should scoot the two implementations together, to knock their boots, so to speak, merging the offshoot Hoot into Guile itself.

    But do I circumlocute: I can only plead a case of acute Hoot. Tomorrow, we elocute on a second axis of boot. Until then, happy compute!


      wingolog.org/archives/2024/05/16/on-hoot-on-boot


      Sam Thursfield: Status update, 16/05/2024 – Learning Async Rust

      news.movim.eu / PlanetGnome · 3 days ago - 12:10 · 6 minutes

    This is another month where too many different things happened to stick them all in one post together. So here’s a ramble on Rust, and there’s more to come in a follow up post.

    I first started learning Rust in late 2020. It took 3 attempts before I could start to make functional commandline apps, and the current outcome of this is the ssam_openqa tool, which I work on partly to develop my Rust skills. This month I worked on some intrusive changes to finally start using async Rust in the program.

    How it started

    Out of all the available modern languages I might have picked to learn, I picked Rust partly for the size and health of its community: every community has its issues, but Rust has no “BDFL” figure and no one corporation that employs all the core developers, both signs of a project that can last a long time. Look at GNOME, which is turning 27 this year.

    Apart from the community, learning Rust improved the way I code in all languages, by forcing more risks and edge cases to the surface and making me deal with them explicitly in the design. The ecosystem of crates has most of what you could want (although there is quite a lot of experimentation and therefore “churn”, compared to older languages). It’s kind of addictive to know that when you’ve resolved all your compile time errors, you’ll have a program that reliably does what you want.

    There are still some blockers to me adopting Rust everywhere I work (besides legacy codebases). The “cycle time” of the edit+compile+test workflow has a big effect on my happiness as a developer. The fastest incremental build of my simple CLI tool is 9 seconds, which is workable, and when there are compile errors (i.e. most of the time) it’s usually even faster. However, a release build might take 2 minutes. This is 3000 lines of code with 18 dependencies. I am wary of embarking on a larger project in Rust where the cycle time could be problematically slow.

    Binary size is another thing, although I’ve learned several tricks to keep ssam_openqa at “only” 1.8MB. Use a minimal arg parser library instead of clap. Use minreq for HTTP. Follow the min-sized-rust guidelines. It’s easy to pull in one convenient dependency that brings in a tree of 100 more things, unless you are careful. (This is a problem for C programmers too, but dependency handling in C is traditionally so horrible that we are already conditioned to avoid too many external helper libraries.)
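    For reference, a size-focused release profile along the lines of those guidelines looks roughly like this in Cargo.toml (a sketch, not ssam_openqa’s actual configuration; pick the trade-offs you can live with, e.g. panic = "abort" removes unwinding):

```toml
# Release profile tuned for binary size (illustrative values)
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # better optimization at the cost of build time
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols from the final binary
```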

    The third thing I’ve been unsure about until now is async Rust. I never immediately liked the model used by Rust and Python of having a complex event loop hidden in the background, and a magic async keyword that completely changes how a function is executed and requires all other functions to be async, such that you effectively have two *different* languages: the async variant and the sync variant. When writing library code you might need to provide two completely different APIs to do the same thing, one async and one sync.

    That said, I don’t have a better idea for how to do async.

    Complicating matters in Rust are the error messages, which can be mystifying if you hit an edge case (see below for where this bit me). So until now I learned to just use thread::spawn for background tasks, with a std::sync::mpsc channel to pass messages back to the main thread, and use blocking IO everywhere. I see other projects doing the same.
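    A minimal std-only sketch of that pattern (identifiers are my own, for illustration):

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a worker thread that sends messages back over a channel,
// then drain the channel on the calling thread.
fn collect_events() -> Vec<String> {
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        for i in 0..3 {
            // Imagine blocking IO happening here.
            tx.send(format!("event {i}")).expect("receiver dropped");
        }
        // tx is dropped when this closure ends, which terminates
        // the receiving iterator below.
    });

    // into_iter() blocks on each message and stops once all
    // senders are gone.
    let events: Vec<String> = rx.into_iter().collect();
    worker.join().unwrap();
    events
}

fn main() {
    for event in collect_events() {
        println!("{event}");
    }
}
```

    The main thread stays in ordinary blocking code; only the channel crosses the thread boundary.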

    How it’s going

    My blissful ignorance came to an end due to changes in a dependency. I was using the websocket crate in ssam_openqa, which embeds its own async runtime so that callers can use a blocking interface in a thread. I guess this is seen as a failed experiment, as the library is now “sluggishly” maintained and the developers recommend tungstenite instead.

    Tungstenite seems unusable from sync code for anything more than toy examples; you need an async wrapper such as async-tungstenite. So, I thought, I would need to port my *whole codebase* to use an async runtime and an async main loop.

    I tried, and spent a few days lost in a forest of compile errors, but it’s never the correct approach to try to port code “in one shot” and without a plan. To make matters worse, websocket-rs embeds an *old* version of Rust’s futures library. Nobody told me, but there is “futures 0.1” and “futures 0.3”. Only the latter works with the await keyword; if you await a future from futures 0.1, you’ll get an error about not implementing the expected trait. The docs don’t give any clues about this; eventually I discovered the Compat01As03 wrapper, which lets you convert types from futures 0.1 to futures 0.3. Hopefully you never have to deal with this, as you’ll only see futures 0.1 on libraries with outdated dependencies, but now you know.

    Even better, I then realized I could keep the threads and blocking IO around, and just start an async runtime in the websocket processing thread. So I did that in its own MR, gaining an integration test and squashing a few bugs in the process.

    The key piece is here:

    use tokio::runtime;
    use std::thread;

    ...

        thread::spawn(move || {
            // Build a single-threaded tokio runtime owned by this thread.
            let runtime = runtime::Builder::new_current_thread()
                .enable_io()
                .build()
                .unwrap();

            runtime.block_on(async move {
                // Websocket event loop goes here
            });
        });

    This code uses the tokio new_current_thread() function to create an async main loop out of the current thread, which can then use block_on() to run an async block and wait for it to exit. It’s a nice way to bring async “piece by piece” into a codebase that otherwise uses blocking IO, without having to rewrite everything up front.

    I have some more work in progress to use async for the two main loops in ssam_openqa: these currently have manual polling loops that periodically check various message queues for events and then sleep for 250 ms, which works fine in practice for processing low-frequency control and status events, but is neither the slickest nor the most efficient way to write a main loop. The classy way to do it is using the tokio::select! macro.
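    For comparison, there is also an intermediate std-only improvement over a sleep-based polling loop (my sketch, not ssam_openqa’s actual code): block on the channel with a timeout, so events are handled as soon as they arrive while periodic work still runs when the queue is quiet.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// A polling loop that blocks on the channel with a timeout instead of
// sleeping for a fixed interval between polls.
fn run_loop(rx: mpsc::Receiver<String>) -> Vec<String> {
    let mut handled = Vec::new();
    loop {
        match rx.recv_timeout(Duration::from_millis(250)) {
            Ok(event) => handled.push(event),
            Err(mpsc::RecvTimeoutError::Timeout) => {
                // Periodic housekeeping would go here.
            }
            // All senders dropped: no more events will ever arrive.
            Err(mpsc::RecvTimeoutError::Disconnected) => break,
        }
    }
    handled
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        tx.send("status".to_string()).unwrap();
        tx.send("control".to_string()).unwrap();
        // Dropping tx disconnects the channel and ends the loop.
    });
    println!("{:?}", run_loop(rx));
}
```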

    When should you use async Rust?

    I was hoping for a simple answer to this question, so I asked my colleagues at Codethink where we have a number of Rust experts.

    The problem is, cooperative task scheduling is a very complicated topic. If I convert my main loop to async, but I use the std library blocking IO primitives to read from stdin rather than tokio’s async IO, can Rust detect that and tell me I did something wrong? Well no, it can’t – you’ll just find that event processing stops while you’re waiting for input. Which may or may not even matter.

    There’s no way to automatically detect “a syscall which might wait for user input” vs “a syscall which might take a lot of CPU time to do something” vs “user-space code which might not defer to the main loop for 10 minutes”; and each of these has the same effect of causing your event loop to freeze.

    The best advice I got was to use tokio console to monitor the event loop and see if any tasks are running longer than they should. This looks like a really helpful debugging tool and I’m definitely going to try it out.

    So I emerge from the month a bit wiser about async Rust, no longer afraid to use it in practice, and best of all, wise enough to know that it’s not an “all or nothing” switch – it’s perfectly valid to mix sync and async in different places, depending on what performance characteristics you’re looking for.


      Christian Hergert: Ptyxis on Flathub

      news.movim.eu / PlanetGnome · 4 days ago - 20:35

    You can get Ptyxis on Flathub now if you would like to run the stable version rather than Nightly. Unless you’re interested in helping QA Ptyxis or contributing, that is probably the Flatpak you want to have installed.

    Nightly builds of Ptyxis use the GNOME Nightly SDK, meaning GTK from main (or close to it). Living on “trunk” can be a bit painful when it goes through major transitions, as is happening now with the move to Vulkan-by-default.

    Enjoy!


      blogs.gnome.org/chergert/2024/05/15/ptyxis-on-flathub/


      Jussi Pakkanen: Generative non-AI

      news.movim.eu / PlanetGnome · 5 days ago - 17:47 · 2 minutes

    In last week's episode of the Game Scoop podcast an idea was floated that modern computer game names are uninspiring and that better ones could be made by picking random words from existing NES titles. This felt like a fun programming challenge so I went and implemented it. Code and examples can be found in this GH repo.
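    The idea can be sketched in a few lines (this is not the repo's actual code; the word list is truncated and, since Rust's std has no RNG, a toy linear congruential generator stands in for a real one):

```rust
// Split existing titles into words, then glue random words back together.
const TITLES: &[&str] = &[
    "Super Mario Bros.",
    "Metroid",
    "Duck Hunt",
    "Punch-Out!!",
    "Kid Icarus",
];

// Minimal linear congruential generator (illustrative only).
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0 >> 33
    }
}

// Pick `len` random words from the pool and join them into a title.
fn random_title(rng: &mut Lcg, words: &[&str], len: usize) -> String {
    (0..len)
        .map(|_| words[(rng.next() as usize) % words.len()])
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let words: Vec<&str> = TITLES
        .iter()
        .flat_map(|t| t.split_whitespace())
        .collect();
    let mut rng = Lcg(42);
    for _ in 0..3 {
        println!("{}", random_title(&mut rng, &words, 3));
    }
}
```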

    Most of the game names created in this way are word-salad gobbledygook or literally translated obscure anime titles (Prince Turtles Blaster Family). Running it a few times does give results that are actually quite interesting, though. They range from games that really should exist (Operation Metroid) to the surprisingly reasonable (Gumshoe Foreman's Marble Stadium), to ones that actually made me laugh out loud (Punch-Out! Kids). Here's a list of some of my favourites:

    • Ice Space Piano
    • Castelian Devil Rainbow Bros.
    • The Lost Dinosaur Icarus
    • Mighty Hoops, Mighty Rivals
    • Snake Hammerin'
    • MD Totally Heavy
    • Disney's Die! Connors
    • Monopoly Ransom Manta Caper!
    • Revenge Marble
    • Kung-Fu Hogan's F-15
    • Sinister P.O.W.
    • Duck Combat Baseball

    I emailed my findings back to the podcast host and they actually discussed it in this week's show (video here, starting at approximately 35 minutes). All in all this was an interesting exercise. However, pretty quickly after finishing the project I realized that doing things yourself is no longer what the cool kids are doing. Instead, this is the sort of thing that is seemingly tailor-made for AI. All you have to do is type in a prompt like "create 10 new titles for video games by only taking words from existing NES games" and post that to tiktokstagram.

    I tried that and the results were absolute garbage. Since the prompt has to have the words "video game" and "NES", and LLMs work solely on the basis of "what is the most common thing (i.e. popular)", the output consists almost entirely of the most well known NES titles with maybe some words swapped. I tried to guide it by telling it to use "more random" words. The end result was a list of ten games where eight were alliterative. So much for randomness.

    But more importantly every single one of the recommendations the LLM created was boring. Uninspired. Bland. Waste of electricity, basically.

    Thus we find that creating a list of game names with an LLM is easy, but the end result is worthless and unusable. Doing the same task by hand took a bit more effort, but the end result was miles better, because it found new and interesting combinations that a "popularity first" estimator seems unable to match. This matches the preconception I had about LLMs from prior tests and from seeing how other people have used them.


      Andy Wingo: partitioning pitfalls for generational collectors

      news.movim.eu / PlanetGnome · 6 days ago - 13:03 · 8 minutes

    You might have heard of long-pole problems: when you are packing a tent, its bag needs to be at least as big as the longest pole. (This is my preferred etymology; there are others.) In garbage collection, the long pole is the longest chain of object references; there is no way you can algorithmically speed up pointer-chasing of (say) a long linked list.

    As a GC author, some of your standard tools can mitigate the long-pole problem, and some don’t apply.

    Parallelism doesn’t help: a baby takes 9 months, no matter how many people you assign to the problem. You need to visit each node in the chain, one after the other, and having multiple workers available to process those nodes does not get us there any faster. Parallelism does help in general, of course, but doesn’t help for long poles.

    You can apply concurrency: instead of stopping the user’s program (the mutator) to enumerate live objects, you trace while the mutator is running. This can happen on mutator threads, interleaved with allocation, via what is known as incremental tracing. Or it can happen in dedicated tracing threads, which is what people usually mean when they refer to concurrent tracing. Though it does impose some coordination overhead on the mutator, the pause-time benefits are such that most production garbage collectors do trace concurrently with the mutator, if they have the development resources to manage the complexity.

    Then there is partitioning : instead of tracing the whole graph all the time, try to just trace part of it and just reclaim memory for that part. In most programs the odds of a long pole are proportional to the graph size, and so tracing a graph subset should reduce total pause time.

    The usual way to partition is generational, in which the main program allocates into a nursery, and objects that survive a few collections then get promoted into old space. But there may be other partitions, for example to put “large objects” (for example, bigger than a few virtual memory pages) in their own section, to be managed with their own algorithm.

    partitions, take one

    And that brings me to today’s article: generational partitioning has a failure mode which manifests itself as spurious out-of-memory. For example, in V8, running this small JavaScript program that repeatedly allocates a 1-megabyte buffer grows to consume all memory in the system and eventually panics:

    while (1) new Uint8Array(1e6);
    

    This is a funny result, to say the least. Let’s dig in a bit to see why.

    First off, note that allocating a 1-megabyte Uint8Array makes a large object, because it is bigger than half a V8 page, which is 256 kB on most systems. There is a backing store for the array that will get allocated into the large object space, and then the JS object wrapper that gets allocated in the young generation of the regular object space.

    Let’s imagine the heap consists of the nursery, the old space, and a large object space ( lospace , for short). (See the next section for a refinement of this model.) In the loop, we cause allocation in the nursery and the lospace. When the nursery starts to get full, we run a minor collection, to trace the newly allocated part of the object graph, possibly promoting some objects to the old generation.

    Promoting an object adds to the byte count in the old generation. You could say that promoting causes allocation in the old-gen, though it might happen just on an accounting level if an object is promoted in place. In V8, old-gen allocation is subject to a limit, and that limit is set by a gnarly series of heuristics. Exceeding the limit will cause a major GC, in which the whole heap is traced.

    So, minor collections cause major collections via promotion. But what if a minor collection never promotes? This can happen in request-oriented workflows, in which a request comes in, you handle it in JS, write out the result, and then handle the next. If the nursery is at least as large as the memory allocated in one request, no object will ever survive more than one collection. But in our new Uint8Array(1e6) example above, we are allocating to newspace and the lospace. If we never cause promotion, we will always collect newspace but never trigger the full GC that would also collect the large object space, and so we get this out-of-memory phenomenon.
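    The failure mode can be pictured with a toy accounting model (a sketch of my own, not V8’s actual heap code; all numbers and field names are illustrative): minor GCs keep reclaiming the nursery, but since nothing is promoted, the old-gen counter that would trigger a full GC never moves, and the large-object space grows without bound.

```rust
// Toy model of a generational heap with a large-object space that is
// only reclaimed by a major GC.
struct Heap {
    nursery: usize,
    old_gen: usize,
    lospace: usize, // only reclaimed by a major GC
    nursery_limit: usize,
    old_limit: usize,
    minor_gcs: usize,
    major_gcs: usize,
}

impl Heap {
    // Each allocation makes a small wrapper in the nursery and a large
    // backing store in the lospace, as in `new Uint8Array(1e6)`.
    fn alloc(&mut self, wrapper: usize, backing: usize) {
        self.nursery += wrapper;
        self.lospace += backing;
        if self.nursery >= self.nursery_limit {
            self.minor_gc();
        }
        if self.old_gen >= self.old_limit {
            self.major_gc();
        }
    }

    fn minor_gc(&mut self) {
        // Nothing survives in this workload: no promotion, so old_gen
        // never grows and the major-GC trigger never fires.
        self.nursery = 0;
        self.minor_gcs += 1;
    }

    fn major_gc(&mut self) {
        self.lospace = 0;
        self.major_gcs += 1;
    }
}

fn main() {
    let mut heap = Heap {
        nursery: 0, old_gen: 0, lospace: 0,
        nursery_limit: 1024, old_limit: 1 << 24,
        minor_gcs: 0, major_gcs: 0,
    };
    for _ in 0..100 {
        heap.alloc(64, 1_000_000);
    }
    // The lospace balloons while minor GCs run and no major GC ever does.
    println!(
        "lospace = {} bytes after {} minor and {} major GCs",
        heap.lospace, heap.minor_gcs, heap.major_gcs
    );
}
```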

    partitions, take two

    In general, the phenomenon we are looking for is nursery allocations that cause non-nursery allocations, where those non-nursery allocations will not themselves bring about a major GC.

    In our example above, it was a typed array object with an associated lospace backing store, assuming the lospace allocation wouldn’t eventually trigger GC. But V8’s heap is not exactly like our model, for one because it actually has separate nursery and old-generation lospaces, and for two because allocation to the old-generation lospace does count towards the old-gen allocation limit. And yet, that simple loop does still blow through all memory. So what is the deal?

    The answer is that V8’s heap now has around two dozen spaces, and allocations to some of those spaces escape the limits and allocation counters. In this case, V8’s sandbox makes the link between the JS typed array object and its backing store pass through a table of indirections. At each allocation, we make an object in the nursery, allocate a backing store in the nursery lospace, and then allocate a new entry in the external pointer table, storing the index of that entry in the JS object. When the object needs to get its backing store, it dereferences the corresponding entry in the external pointer table.

    Our tight loop above would therefore cause an allocation (in the external pointer table) that itself would not hasten a major collection.

    Seen this way, one solution immediately presents itself: maybe we should find a way to make external pointer table entry allocations trigger a GC based on some heuristic, perhaps the old-gen allocated bytes counter. But then you have to actually trigger GC, and this is annoying and not always possible, as EPT entries can be allocated in response to a pointer write. V8 hacker Michael Lippautz summed up the issue in a comment that can be summarized as “it’s gnarly”.

    In the end, it would be better if new-space allocations did not cause old-space allocations. We should separate the external pointer table (EPT) into old and new spaces. Because there is at most one “host” object associated with any EPT entry—if there are zero objects, the entry is dead—the heuristics that apply to host objects will do the right thing with regards to EPT entries.

    A couple weeks ago, I landed a patch to do this. It was much easier said than done; the patch went through upwards of 40 revisions, as it took me a while to understand the delicate interactions between the concurrent and parallel parts of the collector, which were themselves evolving as I was working on the patch.

    The challenge mostly came from the fact that V8 has two nursery implementations that operate in different ways .

    The semi-space nursery (the scavenger ) is a parallel stop-the-world collector, which is relatively straightforward... except that there can be a concurrent/incremental major trace in progress while the scavenger runs (a so-called interleaved minor GC). External pointer table entries have mark bits, which the scavenger collector will want to use to determine which entries are in use and which can be reclaimed, but the concurrent major marker will also want to use those mark bits. In the end we decided that if a major mark is in progress, an interleaved collection will neither mark nor sweep the external pointer table nursery; a major collection will finish soon, and reclaiming dead EPT entries will be its responsibility.

    The minor mark-sweep nursery does not run concurrently with a major concurrent/incremental trace, which is something of a relief, but it does run concurrently with the mutator. When we promote a page, we also need to promote all EPT entries for objects on that page. To keep track of which objects have external pointers, we had to add a new remembered set, built up during a minor trace, and cleared when the page is swept (also lazily / concurrently). Promoting a page iterates through that set, evacuating EPT entries to the old space. This is additional work during the stop-the-world pause, but hopefully it is not too bad.

    To be honest I don’t have the performance numbers for this one. It rides the train this week to Chromium 126 though, and hopefully it fixes this problem in a robust way.

    partitions, take three

    The V8 sandboxing effort has sprouted a number of indirect tables: external pointers, external buffers, trusted objects, and code objects. Recently, to better integrate with compressed pointers, a table for V8-to-managed-C++ (Oilpan) references was added as well. I expect the Oilpan reference table will soon grow a nursery along the same lines as the regular external pointer table.

    In general, if you have a generational partition of your main object space, it would seem that you almost always need a generational partition of every other space. Otherwise either you cause new allocations to occur in what is effectively an old space, perhaps needlessly hastening a major GC, or you forget to track allocations in that space, leading to a memory leak. If the generational hypothesis applies for a wrapper object, it probably also applies for any auxiliary allocations as well.

    fin

    I have a few cleanups remaining in this area but I am relieved to have landed this patch, and pleased to have spent time under the hood of V8’s collector. Many many thanks to Samuel Groß, Michael Lippautz, and Dominik Inführ for their review and patience. Until the next cycle, happy allocating!


      wingolog.org/archives/2024/05/13/partitioning-pitfalls-for-generational-collectors


      Marcus Lundblad: May Maps

      news.movim.eu / PlanetGnome · Saturday, 11 May - 13:23 · 2 minutes

    It's about time for the spring update of goings on in Maps!

    There have been some changes going on since the release of 46.


    Vector Map by Default

    The vector map is now being used by default, and with it Maps supports dark mode. The old raster tiles have also been retired, though there still exists the hidden feature of running with a local tile directory (which was never really intended for general use, but more as a way to experiment with offline map support; the plan is to eventually support proper offline maps, with a way to download areas in a more user-friendly and organized way than providing a raw path…).

    Dark Improvements

    Following the introduction of dark map support, the default rendering of public transit routes and lines has been improved in dark mode to give better contrast (something that was trickier before, when the map view was always light even when the rest of the UI, such as the sidebar itinerary, was shown in dark mode).



    More Transit Mode Icons

    Jakub Steiner and Sam Hewitt have been working on designing icons for some additional modes of transit, such as trolley buses, taxis, and monorails.

    trolley-bus.png
    Trolley bus routes

    This screenshot was something I “mocked” by temporarily changing the icon for regular buses to the newly designed trolley bus icon, as no transit route provider currently supported in Maps exposes trolley bus routes. I originally made this for an excursion with a vintage trolley bus I was going to attend, but it was cancelled at the last minute because of technical issues.

    taxi-station.png
    Showing a taxi station

    And above we have the new taxi icon (this could be used both for showing on-demand communal taxi transit and for taxi stations on the map).

    These icons have not yet been merged into Maps, as there's still some work going on finalizing their design. But I thought I still wanted to show them here…

    Brand Logos

    For a long time we have shown a title image from Wikidata or Wikipedia for places when available. Now we also show a logo image (using the Wikidata reference for the brand of a venue) when one is available and the place has no dedicated article.

    Explaining Place Types

    Sometimes it can be a bit hard to determine the exact type of a place from the icons shown on the map, especially for more generic types such as shops, where we have dedicated icons for some and a generic icon for the rest. We now show the type in the place bubble as well (using the translations extracted from the iD OSM editor).


    Places with a name show the icon and type description below the name, dimmed.


    For unnamed places we show the icon and type instead of the name, in the same bold style the name would normally use.

    Additional Things

    Another detail worth mentioning is that you can now clear the currently shown route from the context menu, so you won't have to open the sidebar again and manually erase the filled-in destinations.


    Another improvement is that if you have already entered a starting point with “Route from Here”, or entered an address in the sidebar, and then use the “Directions” button from a place bubble, that starting point will now be used instead of the current location.

    Besides this, also some old commented-out code was removed… but there's no screenshots of that, I'm afraid ☺

    • chevron_right

      Tanmay Patil: Acrostic Generator: Part one

      news.movim.eu / PlanetGnome · Saturday, 11 May - 05:15 · 3 minutes

    It’s been a while since my last blog post, which was about my Google Summer of Code project. Even though it has been months since I completed GSoC, I have continued working on the project, increasing acrostic support in Crosswords.

    We’ve added support for loading Acrostic Puzzles in Crosswords, but now it’s time to create some acrostics.

    Now that Crosswords has acrostic support, I can use screenshots to help explain what an acrostic is and how the puzzle works.

    Let’s load an Acrostic in Crosswords first.

    Acrostic Puzzle loaded in Crosswords

    The main grid here represents the QUOTE: “CARNEGIE VISITED PRINCETON…”, and if we read out the first letter of each clue answer (displayed on the right), it forms the SOURCE. For example, in the image above, the name of the author is “DAVID…”.
    Now, the interesting part is how the answers to the clues fit in: their letters fill the QUOTE, while their first letters spell out the SOURCE.

    Let’s consider another small example:
    QUOTE : “To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment.”
    AUTHOR : “ Ralph Waldo Emerson”

    If you look closely, the letters of the SOURCE here are all part of the QUOTE. One set of answers to the clues could be:

    Solutions generated using the Acrostic Generator. Read the first letter of each answer from top to bottom: it forms ‘Ralph Waldo Emerson’.

    Coding the Acrostic Generator

    As seen above, to create an acrostic, we need two things: the QUOTE and the SOURCE string. These will be our inputs from the user.

    Additionally, we need to set some constraints on the generated word size. By default, MIN_WORD_SIZE is set to 3 and MAX_WORD_SIZE to 20, but users are allowed to change these settings.

    Step 1: Check if we can create an acrostic from given input

    You must have already guessed it: we check whether the characters in the SOURCE are available in the QUOTE string. To do this, we use the IPuzCharset data structure.
    Without going into much detail, it simply stores characters and their frequencies.
    For example, for the string “MAX MIN”, its charset looks like [{‘M’: 2}, {‘A’: 1}, {‘X’: 1}, {‘I’: 1}, {‘N’: 1}].

    First, we build a charset of the source string and then iterate through it. For the source string to be valid, the count of every character in the source charset should be less than or equal to the count of that character in the quote charset.

    for (iter = ipuz_charset_iter_first (source_charset);
         iter;
         iter = ipuz_charset_iter_next (iter))
      {
        IPuzCharsetIterValue value;
        value = ipuz_charset_iter_get_value (iter);

        if (value.count > ipuz_charset_get_char_count (quote_charset, value.c))
          {
            // Source characters are missing in the provided quote
            return FALSE;
          }
      }

    return TRUE;

    Since we now have a word size constraint, we need to add one more check.
    Let's understand it through an example.

    QUOTE: LIFE IS TOO SHORT
    SOURCE: TOLSTOI
    MIN_WORD_SIZE: 3
    MAX_WORD_SIZE: 20

    Since MIN_WORD_SIZE is set to 3, each generated answer should have a minimum of three letters.

    Possible solutions considering every solution
    has a length equal to the minimum word size:
    T _ _
    O _ _
    L _ _
    S _ _
    T _ _
    O _ _
    O _ _

    If we take the sum of the number of letters in the above solutions, it's 21. That is greater than the number of letters in the QUOTE string (14). So, we can't create an acrostic from this input.

    if ((n_source_characters * min_word_size) > n_quote_characters)
      {
        // Quote text is too short to accommodate the specified minimum word size for each clue
        return FALSE;
      }

    While writing this blog post, I found out we called this error “SOURCE_TOO_SHORT”. It should be “QUOTE_TOO_SHORT” or “SOURCE_TOO_LARGE”.

    Stay tuned for further implementation in the next post!


      medium.com /@txnmxy3/acrostic-generator-part-one-6d119cb1e981

    • chevron_right

      Felix Häcker: #147 Secure Keys

      news.movim.eu / PlanetGnome · Friday, 10 May - 00:00 · 6 minutes

    Update on what happened across the GNOME project in the week from May 03 to May 10.

    Sovereign Tech Fund

    Sonny says

    As part of the GNOME STF (Sovereign Tech Fund) initiative, a number of community members are working on infrastructure related projects.

    Here are the highlights for the past week

    Andy is making progress on URL handling for apps. We are planning on advancing and using the freedesktop intent-apps proposal which Andy implemented in xdg-desktop-portal .

    Felix completed the work to add keyring collections support to Key Rack .

    Adrien worked on replacing deprecated and inaccessible GtkTreeView in Disk Usage Analyzer (Baobab)

    Adrien worked on replacing deprecated and inaccessible GtkEntryCompletion in Files (Nautilus)

    Dhanuka finalized the Keyring implementation in oo7 .

    Dhanuka landed rekeying support in oo7

    Hubert made good progress on the USB portal and the portal is now able to display a permission dialog.

    Julian added notifications spec v2 support to GLib GNotification

    Julian created a draft merge request for new notification specs against xdg-desktop-portal-gtk

    Antonio finished preparing nautilus components for reuse in a new FileChooser window. Ready for review

    GNOME Core Apps and Libraries

    GLib

    The low-level core library that forms the basis for projects such as GTK and GNOME.

    Philip Withnall announces

    A series of fixes for a GDBus security issue with processes accepting spoofed signal senders has landed. Big thanks to Simon McVittie for putting together the fix (and an impressive set of regression tests), to Alicia Boya García for reporting the issue, and Ray Strode for reviews. https://discourse.gnome.org/t/security-fixes-for-signal-handling-in-gdbus-in-glib/20882

    GNOME Circle Apps and Libraries

    Ear Tag

    Edit audio file tags.

    knuxify says

    Ear Tag 0.6.1 has been released, bringing a few minor quality-of-life improvements and a switch to the new AdwDialog widgets. You can get the latest release from Flathub .

    (Sidenote - I am looking for contributors who would be willing to help with Ear Tag’s testing, bug-fixing and further development, with the goal of potentially finding co-maintainers - if you’re interested, see issue #132 for more details.)

    Third Party Projects

    Alain announces

    Planify 4.7.2 is here!

    We’re excited to announce the release of Planify version 4.7.2, with exciting new features and improvements to help you manage your tasks and projects even more efficiently!

    1. Inbox as Independent Project: We’ve completely rebuilt the functionality of Inbox. Now, it’s an independent project with the ability to move your tasks between different synchronized services. The Inbox is the default place to add new tasks, allowing you to quickly get your ideas out of your head and then plan them when you’re ready.

    2. Enhanced Task Duplication: When you duplicate a task now, all subtasks and labels are automatically duplicated, saving you time and effort in managing your projects.

    3. Duplication of Sections and Projects: You can now easily duplicate entire sections and projects, making it easier to create new projects based on existing structures.

    4. Improvements in Quick Add: We’ve improved the usability of Quick Add. Now, the “Create More” option is directly displayed in the user interface, making it easier to visualize and configure your new tasks.

    5. Improvements on Small Screens: For those working on devices with small screens, we’ve enhanced the user experience to ensure smooth and efficient navigation.

    6. Project Expiry Date: Your project’s expiry date now clearly shows the remaining days, helping you keep track of your deadlines more effectively.

    7. Enhanced Tag Panel: The tag panel now shows the number of tasks associated with each tag, rather than just the number of tags, giving you a clearer view of your tagged tasks.

    8. Archiving of Projects and Sections: You can now archive entire projects and sections! This feature helps you keep your workspace organized and clutter-free.

    9. New Task Preferences View and Task Details Sidebar: Introducing a new task preferences view! You can now customize your task preferences with ease. Additionally, we’ve enabled the option to view task details using the new sidebar view, providing quick access to all the information you need.

    10. Translation Updates: We thank @Scrambled777 for the Hindi translation update and @twlvnn for the Bulgarian translation update.

    These are just some of the new features and improvements you’ll find in Planify version 4.7.2. We hope you enjoy using these new tools to make your task and project management even more efficient and productive.

    Download the update today and take your productivity to the next level with Planify!

    Thank you for being part of the Planify community!

    Giant Pink Robots! says

    Varia download and torrent manager got a pretty big update.

    • Powerful download scheduling feature that allows the user to specify what times in a week they would like to start or stop downloading, with an unlimited number of custom timespans.
    • The ability to import a cookies.txt file exported from a browser to support downloads from restricted areas like many cloud storage services.
    • Support for remote timestamps if the user wants the downloaded file to have the original timestamp metadata.
    • Two new filtering options on the sidebar for seeding torrents and failed downloads.
    • An option to automatically quit the application when all downloads are completed.
    • An option to start the app in background mode whenever it’s started.
    • Support for Spanish, Persian and Hindi languages.

    Mateus R. Costa announces

    bign-handheld-thumbnailer (a Nintendo DS and 3DS files thumbnailer) version 0.9.0 was intended to appear in the previous week's TWIG, but had to be postponed due to performance issues.

    Version 0.9.0 is notable because it finally introduced CXI and CCI support (which was deemed too hard to implement for the original 0.1.0 code), along with a few misc improvements. However, it was pointed out that the thumbnailer was loading full games into memory (official 3DS games can weigh up to almost 4 GB), and even though there were some suggestions on how to improve that, I initially failed to make it work. At that point a COPR repo also became available to help distribute the compiled RPM.

    Version 1.0.0 was intended to fix the performance issue once and for all; for more details I recommend reading the blog post about this release on my personal blog: https://www.mateusrodcosta.dev/blog/bign-handheld-thumbnailer-what-i-learned-linux-thumbnailer-rust/

    Letterpress

    Create beautiful ASCII art

    Gregor Niehl says

    A new minor release of Letterpress is out! No big UI changes in this 2.1 release, mostly small touches here and there:

    • Images can now be pasted from the clipboard
    • Zoom is now more consistent between different factors
    • The Drag-n-Drop overlay, originally borrowed from Loupe, was redesigned
    • The GNOME runtime was updated to version 46, which means the Tips Dialog now uses the new Adw.Dialog , and the About Dialog is now truly an About Dialog

    The app has also been translated to Simplified Chinese!

    I’m happy to announce that, in the meantime, @FineFindus has joined the project as maintainer, so it’s no longer maintained by a single person.

    Gir.Core

    Gir.Core is a project which aims to provide C# bindings for different GObject based libraries.

    badcel reports

    Gir.Core 0.5.0 was released. This is one of the biggest releases since the initial release:

    • A lot of new APIs are supported (especially records)
    • Bugs got squashed
    • The library versions were updated to GNOME SDK 46 and target .NET 8 in addition to .NET 6 and 7
    • New samples were added
    • The homepage got updated

    Anyone interested in bringing C# / F# back into the Linux ecosystem is welcome to come by and try out the new version.

    Miscellaneous

    Dan Yeaw says

    Gvsbuild, the GTK stack for Windows, version 2024.5.0 is out. Along with the latest GTK version 4.14.4, we also released for the first time a pre-built version of the binaries. To set up a development environment for a GTK app on Windows, you can unzip the package, set a couple of environment variables, and start coding.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!