

      Michael Meeks: 2025-08-07 Thursday

      news.movim.eu / PlanetGnome • 7 August

    • Packed, hearts set for home, rested by the beach, bus, plane, car - back home. Good to see Leanne; bed early - two hours ahead time-wise.

      Peter Hutterer: libinput and Lua plugins (Part 2)

      news.movim.eu / PlanetGnome • 7 August • 2 minutes

    Part 2 is, perhaps surprisingly, a follow-up to libinput and lua-plugins (Part 1).

    The moon has circled us a few times since that last post and some update is in order. First of all: all the internal work required for plugins was released as libinput 1.29 but that version does not have any user-configurable plugins yet. But cry you not my little jedi and/or sith lord in training, because support for plugins has now been merged and, barring any significant issues, will be in libinput 1.30, due somewhen around October or November. This year. 2025 that is.

    Which means now is the best time to jump in and figure out if your favourite bug can be solved with a plugin. And if so, let us know and if not, then definitely let us know so we can figure out if the API needs changes. The API Documentation for Lua plugins is now online too and will auto-update as changes to it get merged. There have been a few minor changes to the API since the last post so please refer to the documentation for details. Notably, the version negotiation was re-done so both libinput and plugins can support select versions of the plugin API. This will allow us to iterate the API over time while designating some APIs as effectively LTS versions, minimising plugin breakages. Or so we hope.

    What warrants a new post is that we merged a new feature for plugins, or rather, ahaha, a non-feature. Plugins now have access to an API that allows them to disable certain internal features that are not publicly exposed, e.g. palm detection. The reason why libinput doesn't have a lot of configuration options has been explained previously (though we actually have quite a few options) but let me recap for this particular use-case: libinput doesn't have a config option for e.g. palm detection because we have several different palm detection heuristics and they depend on device capabilities. Very few people want no palm detection at all[1] so disabling it means you get a broken touchpad and we now get to add configuration options for every palm detection mechanism. And keep those supported forever because, well, workflows.

    But plugins are different: they are designed to take over some functionality. So the Lua API has an EvdevDevice:disable_feature("touchpad-palm-detection") function that takes a string with the feature's name (easier to make backwards/forwards compatible this way). This example will disable all palm detection within libinput and the plugin can implement said palm detection itself. At the time of writing, the following self-explanatory features can be disabled: "button-debouncing", "touchpad-hysteresis", "touchpad-jump-detection", "touchpad-palm-detection", "wheel-debouncing". This list is mostly based on "probably good enough", so as above - if there's something else then we can expose that too.

    So hooray for fewer features and happy implementing!

    [1] Something easily figured out by disabling palm detection or using a laptop where palm detection doesn't work thanks to device issues


      Steven Deobald: 2025-08-01 Foundation Update

      news.movim.eu / PlanetGnome • 6 August

    This will perhaps be the least-fun Foundation Update of the year. July 27th was supposed to be the “Board Hack Day” (yay?… policy hackfest? everyone’s favourite?), but it ended up consumed with more immediately pressing issues. Somewhat unfortunate, really, as I think we were all looking forward to removing the executive tasks from the Board project wall and boiling their work down to more strategic and policy work. I suppose we’ll just have to do that throughout the year.

    Many people are on vacation right now, so the Foundation feels somewhat quiet in its post-GUADEC moment. Budget planning is happening. Doozers are doozering. The forest gnomes are probably taking a nap. There’s a lot of annoying paperwork being shuffled around.

    I hope for a more exciting Foundation Update in the next week or two. Can’t win ’em all.


      Jussi Pakkanen: Let's properly analyze an AI article for once

      news.movim.eu / PlanetGnome • 6 August • 8 minutes

    Recently the CEO of Github wrote a blog post called Developers reinvented. It was reposted with various clickbait headings like GitHub CEO Thomas Dohmke Warns Developers: "Either Embrace AI or Get Out of This Career" (that one feels like an LLM generated summary of the actual post, which would be ironic if it wasn't awful). To my great misfortune I read both of these. Even if we ignore whether AI is useful or not, the writings contain some of the absolute worst reasoning and stretched logical leaps I have seen in years, maybe decades. If you are ever in need of finding out how not to write a "scientific" text on any given subject, this is the disaster area for you.

    But before we begin, a detour to the east.

    Statistics and the Soviet Union

    One of the great wonders of statistical science of the previous century was without a doubt the Soviet Union. They managed to invent and perfect dozens of ways to turn data to your liking, no matter the reality. Almost every official statistic issued by the USSR was a lie. Most people know this. But even most of those do not grasp just how much the stats differed from reality. I sure didn't until I read this book. Let's look at some examples.

    Only ever report percentages

    The USSR's glorious statistics tended to be of the type "manufacturing of shoes grew over 600% this five year period". That certainly sounds a lot better than "In the last five years our factory made 700 pairs of shoes as opposed to 100" or even "7 shoes instead of 1". If you are really forward thinking, you can even cut down shoe production on those five year periods when you are not being measured. It makes the stats even more impressive, even though in reality many people have no shoes at all.
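    The sleight of hand is plain if you compute it: the percentage is identical whether the factory makes hundreds of pairs or almost none. A quick sanity check of the arithmetic:

    ```python
    def growth_pct(old, new):
        """Percentage growth from old to new."""
        return (new - old) / old * 100

    # 100 -> 700 pairs and 1 -> 7 pairs both report the same impressive figure.
    print(growth_pct(100, 700))  # 600.0
    print(growth_pct(1, 7))      # 600.0
    ```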

    The USSR classified the real numbers as state secrets because the truth would have made them look bad. If a corporation only gives you percentages, they may be doing the same thing. Apply skepticism as needed.

    Creative comparisons

    The previous section said the manufacturing of shoes has grown. Can you tell what it is not saying? That's right, growth over what? It is implied that the comparison is to the previous five year plan. But it is not. Apparently a common comparison in these cases was the production amounts of the year 1913. This "best practice" was not only used in the early part of the 1900s, it was used far into the 1980s.

    Some of you might wonder why 1913 and not 1916, the last year before the Bolsheviks took over. Simply because that was the century's worst year for Russia as a whole. So if you encounter a claim that "car manufacturing was up 3700%" some year in 1980s Soviet Union, now you know what that actually meant.

    "Better" measurements

    According to official propaganda, the USSR was the world's leading country in wheat production. In this case they even listed out the production in absolute tonnes. In reality it was all fake. The established way of measuring wheat yields is to measure the "dry weight", that is, the mass of final processed grains. When it became apparent that the USSR could not compete with imperial scum, they changed their measurements to "wet weight". This included the mass of everything that came out from the nozzle of a harvester, such as stalks, rats, mud, rain water, dissidents and so on.

    Some people outside the iron curtain even believed those numbers. Add your own analogy between those people and modern VC investors here.

    To business then

    The actual blog post starts with this thing that can be considered a picture.

    What message would this choice of image tell about the person who chose to use it in their blog post?

    1. Said person does not have sufficient technical understanding to grasp the fact that children's toy blocks should, in fact, be affected by gravity.
    2. Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".
    Are these the sort of qualities a person in charge of the largest software development platform on Earth should have? No, they are not.

    To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself. Cultural misappropriation is high on the list of core values at Github HQ it seems.

    With that let's move on to the actual content, which is this post from Twitter (to quote Matthew Garrett, I will respect their name change once Elon Musk starts respecting his child's).

    Oh, wow! A field study. That makes things clear. With evidence and all! How can we possibly argue against that?

    Easily. As with a child.

    Let's look at this "study" (and I'm using the word in its loosest possible sense here) and its details with an actual critical eye. The first thing is statistical representativeness. The sample size is 22. According to this sample size calculator I found, the required sample size for a population of just one thousand people would be 278, but, you know, one order of magnitude one way or another, who cares about those? Certainly not business big shot movers and shakers. Like Stockton Rush for example.
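    For what it's worth, the arithmetic behind such calculators is easy to reproduce. Most use something like Cochran's formula with a finite-population correction, which does give 278 for a population of 1000 at 95% confidence and a 5% margin of error (I'm assuming that's the calculator's method; the exact formula it uses isn't stated):

    ```python
    import math

    def required_sample_size(population, z=1.96, p=0.5, margin=0.05):
        """Cochran's formula with finite-population correction.

        Defaults: 95% confidence (z=1.96), maximum variance (p=0.5),
        5% margin of error.
        """
        n0 = z * z * p * (1 - p) / (margin * margin)
        n = n0 / (1 + (n0 - 1) / population)
        return math.ceil(n)

    print(required_sample_size(1000))  # 278
    ```

    Either way, 22 is nowhere close.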

    The math above requires an unbiased sampling. The post does not even attempt to answer whether that is the case. It would mean getting answers to questions like:

    • How were the 22 people chosen?
    • How many different companies, skill levels, nationalities, genders, age groups etc were represented?
    • Did they have any personal financial incentive on making their new AI tools look good?
    • Were they under any sort of duress to produce the "correct" answers?
    • Was the test run multiple times until it produced the desired result?
    The latter is an age-old trick where you run a test with random results over and over on small groups. Eventually you will get a run that points the way you want. Then you drop the earlier measurements and publish the last one. In "the circles" this is known as data set selection.

    Just to be sure, I'm not saying that is what they did. But if someone drove a dump truck full of money to my house and asked me to create a "study" that produced those results, that is exactly how I would do it. (I would not actually do it because I have a spine.)

    Moving on. The main headline grabber is "Either you embrace AI or get out of this career". If you actually read the post (I know), what you find is that this is actually a quote from one of the participants. It's a bit difficult to decipher from the phrasing but my reading is that this is not a grandstanding hurrah of all things AI, but more of a "I guess this is something I'll have to get used to" kind of submission.

    The post then goes on a buzzword-salad tour of statements that range from the incomprehensible to the puzzling. Perhaps the weirdest is this nugget on education:

    Teaching [programming] in a way that evaluates rote syntax or memorization of APIs is becoming obsolete.

    It is not "becoming obsolete". It has been considered the wrong thing to do for as long as computer science has existed. Learning the syntax of most programming languages takes a few lessons, the rest of the semester is spent on actually using the language to solve problems. Any curriculum not doing that is just plain bad. Even worse than CS education in Russia in 1913.

    You might also ponder that if the author is so out of touch with reality in this simple issue, how completely off the rest of his statements might be.

    A magician's shuffle

    As nonsensical as the Twitter post is, we have not yet even mentioned the biggest misdirection in it. You might not even have noticed it yet. I certainly did not until I read the actual post. Try if you can spot it.

    Ready? Let's go.

    The actual fruit of this "study" boils down to this snippet.

    Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition.

    Let that sink in. For the last several years the main supposed advantage of AI tools has been the fact that they save massive amounts of developer time. This has led to the "fire all your developers and replace them with AI bots" trend sweeping the nation. Now even this AI advertisement of a "study" can not find any such advantages and starts backpedaling into something completely different. Just like we have always been at war with Eastasia, AI has never been about "productivity". No. No. It is all about "increased ambition", whatever that is. The post then carries on with this even more baffling statement.

    When you move from thinking about reducing effort to expanding scope, only the most advanced agentic capabilities will do.

    Really? Only the most advanced agentics you say? That is a bold statement to make given that the leading reason for software project failure is scope creep. This is the one area where human beings have a decades-long track record of beating any artificial system. Even if machines were able to do it better, "Make your project failures more probable! Faster! Spectacularer!" is a tough rallying cry to sell.

    To conclude, the actual findings of this "study" seem to be that:

    1. AI does not improve developer productivity or skills
    2. AI does increase developer ambition
    This is strictly worse than the current state of affairs.

      Georges Basile Stavracas Neto: GUADEC 2025

      news.movim.eu / PlanetGnome • 5 August • 3 minutes

    I’m back from GUADEC 2025. I’m still super tired, but I wanted to write down my thoughts before they vanish into the eternal void.

    First let me start with a massive thank you to everyone who helped organize the event. It looked extremely well rounded, the kind of well rounded that can only be explained by a lot of work from the organizers. Thank you all!

    Preparations

    For this GUADEC I did something special: little calendars!

    3D printed calendars on the table

    These were 3D printed from a custom model I’ve made, based on the app icon of GNOME Calendar. I brought a small batch to GUADEC, and to my surprise – and joy – they vanished in just a couple of hours on the first day! It was very cool to see the calendars around after that:

    A hand holding a 3D printed calendar Someone using a smartphone, with a 3D printed calendar visible Someone walking with a backpack with a 3D printed calendar A backpack with a 3D printed calendar hanging out

    Talks

    This year I gave two talks:

    The first one was rather difficult. On the one hand, streaming GNOME development and interacting with people online for more than 6 years has given me many anecdotal insights about the social dynamics of free software. On the other hand, it was very difficult to materialize these insights and summarize them in form of a talk.

    I’ve received good feedback about this talk, but for some reason I still left it feeling like it missed something. I don’t know what, exactly. But if anyone felt energized to try some streaming, goal accomplished I guess? 🙂

    The second talk was just a regular update on the XDG Desktop Portal project. It was made with the sole intention of preparing territory for the Flatpak & Portals BoF that occurred later. Not much to say about it, it was a technical talk, with some good questions and discussions after that.

    As for the talks that I’ve watched, to me there is one big highlight for this GUADEC: Emmanuele’s “Getting Things Done in GNOME”.

    Emmanuele on stage with "How do things happen in GNOME?" on the projector

    Emmanuele published the contents of this talk in article form recently.

    Sometimes, when we’re in the inflection point towards something, the right person with the right sensitivities can say the right things. I think that’s what Emmanuele did here. I think myself and others have already been feeling that the “maintainer”, in the traditional sense of the word, wasn’t a fitting description of how things have been working lately. Emmanuele gifted us with new vocabulary for that: “special interest group”. By the end of GUADEC, we were comfortably using this new descriptor.

    Photography

    It’s not exactly a secret that I started dipping my toes into a long-admired hobby: photography. This GUADEC was the first time I traveled with a camera and a lens, and actually attempted to document the highlights and impressions.

    I’m not going to dump all of it here, but I found Brescia absolutely fascinating and fell in love with the colors and urban vision there. It’s not the most walkable or car-free city I’ve ever been to, but there are certain aspects of it that caught my attention!

    The buildings had a lovely color palette, sometimes bright, sometimes pastel:

    Some of the textures of the material of walls were intriguing:

    I fell in love with how Brescia lights itself:

    I was fascinated by how many Vespas (and similar) could be found around the city, and decided to photograph them all! Some of the prettiest ones:

    The prize for best modeling goes to Sri!

    Conclusion

    Massive thanks to everyone who helped organize GUADEC this year. It was a fantastic event. It was great to see old friends, and meet new people there. There were many newcomers attending this GUADEC!

    And here’s the GUADEC dinner group photo!

    Group photo during the GUADEC dinner party

      Thibault Martin: TIL that You can spot base64 encoded JSON

      news.movim.eu / PlanetGnome • 5 August • 1 minute

    I was working on my homelab and examined a file that was supposed to contain encrypted content that I could safely commit on a Github repository. The file looked like this

    {
      "serial": 13,
      "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67",
      "meta": {
        "key_provider.pbkdf2.password_key": "eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0="
      },
      "encrypted_data": "ONXZsJhz37eJA[...]",
      "encryption_version": "v0"
    }
    

    Hm, key provider? Password key? In an encrypted file? That doesn't sound right. The problem is that this file is generated by taking a password, deriving a key from it, and encrypting the content with that key. I don't know what the derived key could look like, but it could be that long indecipherable string.

    I asked a colleague to have a look and he said "Oh that? It looks like a base64 encoded JSON. Give it a go to see what's inside."

    I was incredulous but gave it a go, and it worked!!

    $ echo "eyJzYW[...]" | base64 -d
    {"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}
    
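    Incidentally, the decoded metadata is exactly the parameter set you'd feed to a key derivation function. A minimal sketch of that derivation with Python's hashlib, just to show what the fields mean (the password here is a made-up placeholder, not anything from the real file):

    ```python
    import base64
    import hashlib

    # Parameters straight from the decoded JSON above.
    salt = base64.b64decode("jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=")
    iterations = 600_000
    key_length = 32

    # "hunter2" is a placeholder; the real password never appears in the file.
    key = hashlib.pbkdf2_hmac("sha512", b"hunter2", salt, iterations, dklen=key_length)
    print(len(key))  # 32
    ```

    So the metadata stores how to re-derive the key, not the key itself, which is why it's safe to commit.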

    I couldn't believe my colleague had decoded the base64 string on the fly, so I asked. "What gave it away? Was it the trailing equal signs at the end for padding? But how did you know it was base64 encoded JSON and not just a base64 string?"

    He replied,

    Whenever you see ey, that's {" and then if it's followed by a letter, you'll get J followed by a letter.

    I did a few tests in my terminal, and he was right! You can spot base64-encoded JSON with your naked eye, and you don't need to decode it on the fly!

    $ echo "{" | base64
    ewo=
    $ echo "{\"" | base64
    eyIK
    $ echo "{\"s" | base64
    eyJzCg==
    $ echo "{\"a" | base64
    eyJhCg==
    $ echo "{\"word\"" | base64
    eyJ3b3JkIgo=
    

    Thanks Davide and Denis for showing me this simple but pretty useful trick!


      Matthew Garrett: Cordoomceps - replacing an Amiga's brain with Doom

      news.movim.eu / PlanetGnome • 5 August • 11 minutes

    There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

    These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

    We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

    So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.

    And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

    Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

    The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.
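    To put numbers on the memory pressure, a planar display needs one bit per pixel per plane, so the cost scales linearly with colour depth. A quick back-of-the-envelope calculation (the resolutions are typical Amiga display modes, used here purely as illustration):

    ```python
    def bitplane_bytes(width, height, depth):
        """RAM needed for a planar display: one bit per pixel per plane."""
        return width * height * depth // 8

    # A 320x200 display with 6 bitplanes (64 colours):
    print(bitplane_bytes(320, 200, 6))  # 48000 bytes
    # An interlaced 640x400 display at the same depth would need four times
    # that - most of an Amiga 1000's 256K:
    print(bitplane_bytes(640, 400, 6))  # 192000 bytes
    ```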

    But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitplane, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

    The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

    We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

    First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast RAM" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

    Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
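    A copper list is just data in chip RAM: pairs of 16-bit words, where a MOVE is (register offset, value) and the conventional terminator is a WAIT for an unreachable beam position. The sketch below builds such a list in Python as plain numbers; it is illustrative only (the BPL1PTH offset of $0E0 is per the Amiga Hardware Reference Manual, and this is not the author's actual code):

    ```python
    def copper_set_bitplanes(addresses):
        """Emit MOVE word pairs pointing BPLxPTH/BPLxPTL at each bitplane,
        terminated by a WAIT for an impossible position (end of list)."""
        words = []
        for i, addr in enumerate(addresses):
            ptr_high = 0x0E0 + i * 4      # BPLxPTH register offset
            ptr_low = ptr_high + 2        # BPLxPTL register offset
            words += [ptr_high, (addr >> 16) & 0xFFFF]  # MOVE high word of pointer
            words += [ptr_low, addr & 0xFFFF]           # MOVE low word of pointer
        words += [0xFFFF, 0xFFFE]         # WAIT forever: conventional terminator
        return words

    # Six bitplanes laid out back to back somewhere in chip RAM (made-up addresses).
    clist = copper_set_bitplanes([0x10000 + i * 8000 for i in range(6)])
    print(len(clist))  # 26 words: 6 planes * 2 MOVEs * 2 words, plus the WAIT
    ```

    Each frame the copper runs this list again, so the pointers are rewritten before the hardware starts incrementing them during scanout.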

    Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga Doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000-series CPUs and we're running on ARM, and (b) I have a quad-core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, so the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
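    The idea behind those nested loops can be sketched along these lines (a hypothetical Python rendering of the chunky-to-planar conversion, not the actual code):

    ```python
    def chunky_to_planar(pixels, depth):
        """Pack 8 chunky pixel values into one byte per bitplane.

        pixels: 8 colour indices; depth: number of bitplanes.
        Returns one byte per plane; the leftmost pixel maps to the MSB.
        """
        planes = []
        for plane in range(depth):
            byte = 0
            for i, pix in enumerate(pixels):
                if pix & (1 << plane):     # does this pixel set this plane's bit?
                    byte |= 0x80 >> i      # leftmost pixel lands in the MSB
            planes.append(byte)
        return planes

    # Eight pixels of colour 3 set every bit in planes 0 and 1, none elsewhere.
    print(chunky_to_planar([3] * 8, 6))  # [255, 255, 0, 0, 0, 0]
    ```

    Run once per 8-pixel group per row; this is the work the bitplane format forces on you for every frame.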

    And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there are still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

    Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

    Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

    So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available .

    [1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space, so you couldn't put RAM there, and you ended up with less than 4GB of usable RAM.


      Christian Hergert: Week 31 Status

      news.movim.eu / PlanetGnome • 4 August • 7 minutes

    Foundry

    • Added a new gutter renderer for diagnostics using the FoundryOnTypeDiagnostics described last week.

    • Wrote another new gutter renderer for “line changes”.

      I’m really happy with how I can use fibers w/ GWeakRef to do worker loops but not keep the “owner object” alive. As long as you have a nice way to break out of the fiber loop when the object disposes (e.g. trigger a DexCancellable / DexPromise /etc) then writing this sort of widget is cleaner/simpler than before w/ GAsyncReadyCallback .

      foundry-changes-gutter-renderer.c
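The fiber-plus-GWeakRef pattern is easiest to see outside of C. Below is a minimal Python sketch of the same idea, assuming nothing from the real code: the worker loop holds only a weak reference to its owner and exits on its own once the owner is disposed. The `Widget`/`worker_steps` names are invented for illustration — the actual implementation is C using libdex fibers, not Python generators.

```python
import gc
import weakref

class Widget:
    """Hypothetical stand-in for the 'owner object' (e.g. a gutter renderer)."""
    pass

def worker_steps(owner):
    """Stand-in for a fiber loop: each iteration upgrades a weak reference to
    the owner and stops cleanly once the owner has been disposed."""
    ref = weakref.ref(owner)
    del owner  # keep only the weak reference, so the loop never keeps the owner alive
    while True:
        strong = ref()
        if strong is None:
            return  # owner disposed: break out of the worker loop
        # ... do one unit of work against `strong` here ...
        del strong  # drop the temporary strong reference before suspending
        yield

owner = Widget()
loop = worker_steps(owner)
next(loop)              # the loop runs while the owner is alive
del owner
gc.collect()
remaining = list(loop)  # owner is gone, so the loop ends immediately
print(remaining)        # -> []
```

The same shape applies in C: the fiber wakes up, tries to upgrade the `GWeakRef`, and if that fails it resolves its cancellable/promise and returns, so disposing the widget tears the worker down without any explicit shutdown call.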

    • Added a :show-overview property to the line changes renderer which conveniently allows it to work as both a per-line change status and be placed in the right-side gutter as an overview of the whole document to see your place in it. Builder just recently got this feature implemented by Nokse and this is basically just a simplified version of that thanks to fibers.

    • Abstract TTY auth input into a new FoundryInput abstraction. This is currently used by the git subsystem to acquire credentials for SSH, krb, user, user/pass, etc depending on what the peer supports. However, it became pretty obvious to me that we can use it for more than just Git. It maps pretty well to at least two more features coming down the pipeline.

      Since the input mechanisms are used on a thread for TTY input (to avoid blocking main loops, fiber schedulers, etc), they needed to be thread-safe. Most things are immutable and a few well controlled places are mutable.

      The concept of a validator is implemented externally as a FoundryInputValidator which allows for re-use and separating the mechanism from policy. I quite like how it turned out, honestly.

      There are abstractions for text, switches, choices, files. You might notice they map fairly well to AdwPreferencesRow things and that is by design, since in the apps I manage, that would be their intended display mechanism.

    • Templates have finally landed in Foundry with the introduction of a FoundryTemplateManager , FoundryTemplateProvider , and FoundryTemplate . They use the new generalized FoundryInput abstractions that were discussed above.

      That allows for a foundry template list command to list templates and foundry template create to expand a certain template.

      The FoundryInput objects of the templates are queried via the PTY just like username/password auth works via FoundryInput . Questions are asked, input is received, and template expansion may continue.

      This will also allow for dynamic creation of the “Create Template” widgetry in Builder later on without sacrificing on design.

    • Meson templates from Builder have also been ported over which means that you can actually use those foundry template commands above to replace your use of Builder if that is all you used it for.

      All the normal ones are there (GTK, Adwaita, library, cli, etc).

    • A new license abstraction was created so that libraries and tooling can get access to licenses/snippets in a simple form w/o duplication. That generally gets used for template expansion and file headers.

    • The FoundryBuildPipeline gained a new vfunc for prepare_to_run() . We always had this in Builder but it never came over to Foundry until now.

      This is the core mechanism behind being able to run a command as if it were the target application (e.g. unit tests).

    • After doing the template work, I realized that we should probably just auto initialize the project so you don’t have to run foundry init afterwards. Extracted the mechanism for setting up the initial .foundry directory state and made templates use that.

    • One of the build pipeline mechanisms from Builder still missing in Foundry was the ability to sit in the middle of a PTY and extract build diagnostics. This is how errors from GCC are extracted during the build (as well as for other languages).

      So I brought over our “PTY intercept” which takes your consumer FD and creates a producer FD which is bridged to another consumer FD.

      Then the JIT’d error extraction regexes may be run over the middle and create diagnostics as necessary.

      To make this simple to consume in applications, a new FoundryPtyDiagnostics object is created. You set the PTY to use for that and attach its intercept PTY to the build/run managers' default PTY, and then all the GActions will wire up correctly. That object is also a GListModel making it easy to display in application UI.
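As a rough illustration of the extraction step — not Foundry's actual code, where the extractors are JIT-compiled regexes run in C over the PTY stream, and where the names below are hypothetical — a GCC-style diagnostic line flowing through the intercept can be matched and turned into a structured record like this:

```python
import re
from typing import NamedTuple

# One GCC-style diagnostic pattern; the real extractors are per-language,
# this single regex is just for illustration.
GCC_DIAGNOSTIC = re.compile(
    r"^(?P<path>[^:\n]+):(?P<line>\d+):(?P<col>\d+): "
    r"(?P<severity>error|warning|note): (?P<message>.*)$",
    re.MULTILINE,
)

class Diagnostic(NamedTuple):
    path: str
    line: int
    col: int
    severity: str
    message: str

def extract_diagnostics(pty_text: str) -> list[Diagnostic]:
    """Scan text that passed through the intercept PTY and collect diagnostics."""
    return [
        Diagnostic(m["path"], int(m["line"]), int(m["col"]),
                   m["severity"], m["message"])
        for m in GCC_DIAGNOSTIC.finditer(pty_text)
    ]

build_output = (
    "main.c:12:5: warning: unused variable 'x'\n"
    "main.c:30:1: error: expected ';' before '}' token\n"
)
diags = extract_diagnostics(build_output)
print(diags[1].severity, diags[1].line)  # -> error 30
```

The point of the producer/consumer FD bridge is that this scanning happens transparently in the middle: the terminal still sees the raw build output, while the diagnostics land in the GListModel for the UI.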

    • A FoundryService is managed by the FoundryContext . They are just subsystems that combine into useful things in Foundry. One way they can be interacted with is GAction , as the base class implements GActionGroup .

      I did some cleanup to make this work well and now you can just attach the FoundryContext ’s GActionGroup using foundry_context_dup_action_group() to a GtkWindow using gtk_widget_insert_action_group() . At that point your buttons are basically just "context.build-manager.build" for the action-name property.

      All sorts of services export actions now for operations like build, run, clean, invalidate, purge, update dependencies, etc.

      There is a test GTK app in testsuite/tools/ where you can play with all of this to get ideas and/or integrate into your own app. It also integrates the live diagnostics/PTY code to exemplify that.

    • Fixed the FoundryNoRun tool to connect to the proper PTY in the deployment/run phase.

    • The purge operation now writes information about what files are being deleted to the default build PTY.

    • The new FoundryTextSettings abstraction has landed which is roughly similar to IdeFileSettings in Builder. This time it is much cleaned up now that we have DexFuture to work with.

      I’ve ported the editorconfig support over to use this as well as a new implementation of modeline support which again, is a lot simpler now that we can use fibers/threadpools effectively.

      Plugins can set their text-settings priority in their .plugin file. That way settings can have a specific order such as user-overrides, modelines, editorconfig, gsettings overrides, language defaults, and what-not.

    • The FoundryVcs gained a new foundry_vcs_query_file_status() API which allows querying for the, shocking, file status. That will give you bitflags to know in both the stage or working tree if a file is new/modified/deleted.

      To make this even more useful, you can use the FoundryDirectoryListing class (which is a GListModel of FoundryDirectoryItem ) to include the vcs::status file-attribute, and your GFileInfo will be populated with the uint32 bitflags for a key under the same name.

      It’s also provided as a property on the FoundryDirectoryItem to make writing those git “status icons” dead simple in file panels.
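To make the bitflags idea concrete, here is a sketch of what such a layout could look like, in Python for brevity — the flag names and bit positions are hypothetical, not the actual FoundryVcs API:

```python
from enum import IntFlag

class FileStatus(IntFlag):
    """Hypothetical layout; the real FoundryVcs flag names/values may differ."""
    WORKTREE_NEW      = 1 << 0
    WORKTREE_MODIFIED = 1 << 1
    WORKTREE_DELETED  = 1 << 2
    STAGE_NEW         = 1 << 3
    STAGE_MODIFIED    = 1 << 4
    STAGE_DELETED     = 1 << 5

# A file added to the stage, then modified again in the working tree:
status = FileStatus.STAGE_NEW | FileStatus.WORKTREE_MODIFIED

is_dirty = bool(status & FileStatus.WORKTREE_MODIFIED)
is_gone = bool(status & FileStatus.WORKTREE_DELETED)
print(is_dirty, is_gone)  # -> True False
```

Packing both the stage and working-tree states into one uint32 is what lets a single file-attribute (and a single property on FoundryDirectoryItem ) drive those git status icons.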

    Boxes

    • Found an issue w/ trailing \x00 in paths when new Boxes is opening an ISO from disk on a system with older xdg portals. Sent a pointer on the issue tracker to what Text Editor had to do as well here.

    Libpeas

    • GJS gained support for pkgconfig variables and we use that now to determine which mozjs version to link against. That is required to be able to use the proper JS API we need to setup the context.

    Ptyxis

    • Merged some improvements to the custom link support in Ptyxis. This is used to allow you to highlight custom URL regexes, so you can turn things like “RHEL-1234” into a link to the RHEL issue tracker.

    • Tracked down an issue filed about tab/window titles not updating. It was just an issue with $PROMPT_COMMAND overwriting what they had just changed.

    Text Editor

    • A lot of the maintainership of this program is just directing people to the right place. Be that GtkSourceView, GTK, shared-mime-info, etc. Did more of that.

      As an aside, I really wish people spent more time understanding how things work rather than fire-and-forget. The FOSS community used to take pride in ensuring issue reports landed in the right place to avoid overburdening maintainers, and I’m sad that it’s been lost in the past decade or so. Probably just a sign of success.

    Builder

    • Did a quick and dirty fix for a hang that could slow down startup due to the Manuals code going to the worker process to get the default architecture. Builder doesn’t link against Flatpak in the UI process, which is why it did that. But it’s also super easy to put in a couple-line hard-coded #ifdef and avoid the whole RPC.

    Libdex

    • Released 0.11.1 for GNOME 49.beta. I’m strongly considering making the actual 49 release our 1.0. Things have really solidified over the past year with libdex and I’m happy enough to put my stamp of approval on that.

    Libspelling

    • Fix an issue with discovery of the no-spellcheck-tag which is used to avoid spellchecking things that are general syntax in language specifications. Helps a bunch when loading a large document and that can get out of sync/changed before the worker discovers it.

    • Fixed a LSAN discovered leak in the testsuite. Still one more to go. Fought LSAN and CI a bit because I can’t seem to reproduce what the CI systems get.

    Other

    • Told ChatGPT to spit out a throwaway script that parses my status reports and converts them into something generally usable by WordPress. Obviously there is a lot of dislike/scrutiny/distrust of LLMs and their creators/operators, but I really don’t see the metaphorical cat going back in the bag when you enable people to scratch an itch in a few seconds. I certainly hope we continue to scrutinize and control scope though.


      Peter Hutterer: unplug - a tool to test input devices via uinput

      news.movim.eu / PlanetGnome • 4 August • 1 minute

    Yet another day, yet another need for testing a device I don't have. That's fine and that's why many years ago I wrote libinput record and libinput replay (more powerful successors to evemu and evtest). Alas, this time I had a dependency on multiple devices to be present in the system, in a specific order, sending specific events. And juggling this many terminal windows with libinput replay open was annoying. So I decided it's worth the time fixing this once and for all (haha, lolz) and wrote unplug . The target market for this is niche, but if you're in the same situation, it'll be quite useful.

    Pictures cause a thousand words to finally shut up and be quiet so here's the screenshot after running pip install unplug [1]:

    This shows the currently pre-packaged set of recordings that you get for free when you install unplug. For your use-case you can run libinput record , save the output in a directory and then start unplug path/to/directory . The navigation is as expected: hitting enter on a device plugs it in, and hitting enter on the selected sequence sends that event sequence through the previously plugged device.

    Annotation of the recordings (which must end in .yml to be found) can be done by adding a YAML unplug: entry with a name and optionally a multiline description . If you have recordings that should be included in the default set, please file a merge request . Happy emulating!
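Based on that description, an annotated recording might start something like this — a guess at the shape, since only the unplug: key with a name and an optional multiline description is stated above; check the pre-packaged recordings for the canonical format:

```yaml
# Prepended to a libinput-record .yml file (hypothetical example values).
unplug:
  name: Two-finger scroll on example touchpad
  description: |
    Plug the touchpad first, then replay this sequence to emulate
    a two-finger vertical scroll.
```

The rest of the file is the unmodified libinput record output.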

    [1] And allowing access to /dev/uinput . Details, schmetails...