

      Steven Deobald: 2025-08-08 Foundation Update

      news.movim.eu / PlanetGnome • 9 August • 3 minutes

    ## Opaque Things

    Very unfortunately, most of my past two weeks have been spent on “opaque things.” Let’s just label that whole bundle “bureaucracy” for now.

    ## Apology

    I owe a massive apology to the GIMP team. These folks have been incredibly patient while the GNOME Foundation, as their fiscal host, sets up a bunch of new paperwork for them. It’s been a slow process.

    Due to the above bureaucracy, this process has been even slower than usual. In GIMP's big pearl anniversary year, the timing is awful and this is especially frustrating for them.

    As long as I am in this role, I accept responsibility for the Foundation’s struggles in supporting the GIMP project. I’m sorry, folks, and I hope we get past the current round of difficulties quickly so we can provide the fiscal hosting you deserve.

    ## Draft Budget

    Thanks to Deepa’s tremendous work teasing apart our financial reporting, she and I have a draft budget to present to the Board at their August 12th regular meeting.

    The Board has already seen a preliminary version of the budget as of their July 27th meeting and an early draft as of last week. Our schedule has an optional budget review on August 26th, a required budget review on September 9th and, if we need a final meeting to pass the budget, we can do that at the September 23rd board meeting. Last year, Richard passed the first on-time GNOME Foundation budget in many years. I’m optimistic we can pass a clear, uncomplicated budget a month early in 2025.

    (Our fiscal year is October – September.)

    I’m incredibly grateful to Deepa for all the manual work she’s put into our finances during an already busy summer. She has also started putting in place a tagging mechanism with our bookkeepers which should resolve this more automatically in the future. The tagging mechanism will work in conjunction with our chart of accounts, which doesn’t really represent the GNOME Foundation’s operating capital at all: a 501(c)(3)’s chart of accounts is geared toward the picture we show to the IRS, not the picture we show to the Board or use to pass a budget. It’s the same data, but two very different lenses on it.

    Understanding is the first step. After a month of wrestling with the data, we now understand it. Automation is the second step. The board should know exactly where the Foundation stands, financially, every year, every quarter, and every month… without the need for manual reports.

    ## 501(c)(3) Structural Improvements

    As I mentioned in my GUADEC keynote, the GNOME Foundation has a long road ahead of it to become an ideal 501(c)(3)… but we just need to make sure we’re focused on continuous improvement. Other non-profits struggle with their own structural challenges, since charities can take many different shapes and are often held together almost entirely by volunteers. Every organization is different, and every organization wants to succeed.

    I had the pleasure of some consultation conversations this week with Amy Parker of the OpenSSL Foundation and Halle Baksh, a 501(c) expert from California. Delightful folks. Super helpful.

    One of the greatest things I’m finding about working in the non-profit space is that we have a tremendous support network. Everyone wants us to succeed and every time we’re ready to take the next step, there will be someone there to help us level up. Every time we do, the Foundation will get more mature and, as a result, more effective.

    ## Travel Policy Freeze

    I created some confusion by mentioning the travel policy freeze in my AGM slides. A group of hackers asked me at dinner on the last night in Brescia, “does this mean that all travel for contributors is cancelled?”

    No, absolutely not. The travel policy freeze means that every travel request must be authorized by the Executive Director and the President; we have temporarily suspended the Travel Committee’s spending authority. We will of course still sponsor visas for interns and other program-related travel. The intention of the travel policy freeze is to reduce administrative travel costs and redirect that money to program travel, not the other way around.

    Sorry if I freaked anyone out. The goal (as long as I’m around) will always be to push Foundation resources toward programs like development, infrastructure, and events.

    ## Getting Back To Work

    Sorry again for the short update. I’m optimistic that we’ll get over this bureaucratic bump in the road soon enough. Thanks for your patience.


      Sebastian Wick: GNOME 49 Backlight Changes

      news.movim.eu / PlanetGnome • 8 August • 3 minutes

    One of the things I’m working on at Red Hat is HDR support. HDR is inherently linked to luminance (brightness, ignoring human perception), which makes it an important parameter that we would like to be in control of.

    One reason is rather stupid. Most external HDR displays refuse to let the user control the luminance in their on-screen display (OSD) if the display is in HDR mode. Why? Good question. Read my previous blog post.

    The other reason is that the amount of HDR headroom we have available is the ratio between the maximum luminance we can achieve and the luminance we assign to a sheet of white paper (the reference white level). For power consumption reasons, we want to be able to dynamically change the available headroom, depending on how much headroom the content can make use of. If there is no HDR content on the screen, there is no need to crank up the backlight to give us more headroom, because the headroom will go unused.
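
    To make the headroom arithmetic concrete, here is a rough sketch with made-up numbers (the 203 nit reference white is just a common choice for illustration, not a value from this post):

    # Illustrative numbers only -- not measurements of any real panel.
    max_luminance = 1000.0    # nits: the peak the display can produce
    reference_white = 203.0   # nits: the luminance assigned to "paper white"

    headroom = max_luminance / reference_white
    print(f"HDR headroom: {headroom:.1f}x")   # ~4.9x

    # Raising the backlight raises max_luminance and therefore the headroom;
    # with only SDR content on screen that headroom would sit unused, so a
    # lower backlight saves power at no visual cost.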

    To work around the first issue, mutter can change the signal it sends to the display so that white is not a signal value of 1.0 but somewhere else between 0 and 1. This essentially emulates a backlight in software. The drawback is that we no longer use a chunk of the bits in the signal, so issues like banding might become more noticeable, but since we only do this with 10- or 12-bit HDR signals, this isn’t an issue in practice.
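
    As a minimal sketch of the idea (not mutter’s actual code, and assuming a linear signal for simplicity; real pipelines apply a transfer function):

    BITS = 10
    MAX_CODE = (1 << BITS) - 1   # 1023 code values in a 10-bit signal

    def encode(pixel, emulated_backlight):
        """Quantize a [0, 1] pixel value, scaled by the emulated backlight."""
        pixel = min(max(pixel, 0.0), 1.0)
        return round(pixel * emulated_backlight * MAX_CODE)

    print(encode(1.0, 1.0))   # 1023: white uses the full signal range
    print(encode(1.0, 0.5))   # 512: white now sits halfway up the range; the
                              # codes above it go unused, which is why banding
                              # would be a worry at 8 bits but not at 10 or 12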

    This was already implemented in GNOME 48, as an API that mutter exposed and that Settings showed as “HDR Brightness”.

    "HDR Brightness" in the GNOME Settings Display panel

    The second issue requires us to be able to map the backlight value to luminance, and to change the backlight atomically with an update to the screen. We could work towards adding those things to the existing sysfs backlight API, but it turns out that there are a number of problems with it. Mapping a sysfs entry to a connected display is really hard (GNOME pretends that there is only one single internal display that can ever be controlled), and writing a value to the backlight requires root privileges or calling a logind D-Bus API. One internal panel can expose multiple backlights, and a value of 0 can mean either that the display turns off or that it is just really dim.
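
    For reference, this is roughly what reading the existing sysfs API looks like (a sketch using the standard /sys/class/backlight layout):

    from pathlib import Path

    # Each backlight is a directory under /sys/class/backlight, with no
    # reliable link back to the connector it belongs to.
    for bl in sorted(Path("/sys/class/backlight").iterdir()):
        current = int((bl / "brightness").read_text())
        maximum = int((bl / "max_brightness").read_text())
        print(f"{bl.name}: {current}/{maximum}")
        # Reading is easy; *writing* `brightness` needs root or the logind
        # D-Bus call, and a value of 0 is ambiguous: "off" on some panels,
        # "very dim" on others.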

    So a decision was made to create a new API that will be part of KMS, the API that we use to control the displays.

    Until now, the sysfs backlight was controlled by gnome-settings-daemon, and GNOME Shell called a D-Bus API when the screen brightness slider was moved.

    To recap:

    • There is a sysfs backlight API, for basically one internal panel, which requires logind or a setuid helper executable
    • gnome-settings-daemon controlled this single-screen sysfs backlight
    • Mutter has a “software backlight” feature
    • KMS will get a new backlight API that needs to be controlled by mutter

    Overall, this is quite messy, so I decided to clean this up.

    Over the last year, I moved the sysfs backlight handling from gnome-settings-daemon into mutter, added logic to decide which backlight to use (sysfs or the “software backlight”), and made it generic so that any screen can have a backlight. This means mutter is the single source of truth for the backlight itself.

    The backlight gets its value from a number of sources. The user can configure the screen brightness in the quick settings menu and via keyboard shortcuts. Power saving features can kick in and dim the screen. Lastly, an ambient light sensor (ALS) can take control over the screen brightness. To make things more interesting, a single “logical monitor” can have multiple hardware monitors, each of which can have a backlight.

    All of that logic is now neatly sitting in GNOME Shell, which takes signals from gnome-settings-daemon about the ALS and dimming. I also changed the Quick Settings UI to make it possible to control the brightness on multiple screens, and removed the old “HDR Brightness” from Settings.

    All of this means that we can now handle screen brightness on multiple monitors, and when the new KMS backlight API makes it upstream, we can just plug it in and start to dynamically create HDR headroom.


      Richard Hughes: LVFS Sustainability Plan

      news.movim.eu / PlanetGnome • 8 August • 2 minutes

    tl;dr: I’m asking the biggest users of the LVFS to sponsor the project.

    The Linux Foundation is kindly paying for all the hosting costs of the LVFS, and Red Hat pays for all my time — but as the LVFS grows and grows, that’s going to be less and less sustainable in the longer term. We’re trying to find funding to hire additional people as a “me replacement”, so that there is backup and additional attention on the LVFS (and so that I can go on holiday for two weeks without needing to take a laptop with me).

    This year a fair-use quota will be introduced, with different sponsorship levels having different quota allowances. Nothing currently happens if the quota is exceeded, although there will be additional warnings asking the vendor to contribute. The “associate” (free) quota is also generous, with 50,000 monthly downloads and 50 monthly uploads. This means that almost all of the 140 vendors on the LVFS should expect no changes.

    Vendors providing millions of firmware files to end users (and deriving tremendous value from the LVFS…) should really either provide a developer to help write shared code, design abstractions and review patches (like AMD does), or allocate some funding so that we can pay for resources to take action for them. So far no OEMs provide any financial help for the infrastructure itself, although two have recently offered — and we’re now in a position to “say yes” to those offers of help.

    I’ve written an LVFS Project Sustainability Plan that explains the problem and how OEMs should work with the Linux Foundation to help fund the LVFS.

    I’m aware funding open source software is a delicate matter and I certainly do not want to cause anyone worry. We need the LVFS to have strong foundations; it needs to grow, adapt, and be resilient – and it needs vendor support.

    Draft timeline, which is probably a little aggressive for the OEMs — so the dates might be moved back in the future:

    APR 2025: We started showing the historical percentage “fair use” download utilization graph on vendor pages. As time goes on, this will also be recorded into per-protocol sections.

    downloads over time

    JUL 2025: We started showing the historical percentage “fair use” upload utilization, also broken into per-protocol sections:

    uploads over time

    JUL 2025: We started restricting logos on the main index page to vendors joining at the startup level or above — note Red Hat isn’t sponsoring the LVFS with money (but they do pay my salary!) — I’ve just used the logo as a placeholder to show what it would look like.

    AUG 2025: I created this blogpost and sent an email to the lvfs-announce mailing list.

    AUG 2025: We allow vendors to join as startup or premier sponsors, with logos shown on the main page and a badge shown on the vendor list.

    DEC 2025: Start showing over-quota warnings on the per-firmware pages.

    DEC 2025: Turn off detailed per-firmware analytics for vendors below the startup sponsorship level.

    APR 2026: Turn off access to the custom LVFS API for vendors below the startup sponsorship level, for instance:

    • /lvfs/component/{}/modify/json
    • /lvfs/vendors/auth
    • /lvfs/firmware/auth

    APR 2026: Limit the number of authenticated automated robot uploads for vendors below the startup sponsorship level.

    Comments welcome!


      Michael Meeks: 2025-08-07 Thursday

      news.movim.eu / PlanetGnome • 7 August

    • Packed, hearts set for home, rested by the beach, bus, plane, car - back home. Good to see Leanne; bed early - two hours ahead time-wise.

      Peter Hutterer: libinput and Lua plugins (Part 2)

      news.movim.eu / PlanetGnome • 7 August • 2 minutes

    Part 2 is, perhaps surprisingly, a follow-up to libinput and lua-plugins (Part 1).

    The moon has circled us a few times since that last post and an update is in order. First of all: all the internal work required for plugins was released as libinput 1.29, but that version does not have any user-configurable plugins yet. But cry you not, my little jedi and/or sith lord in training, because support for plugins has now been merged and, barring any significant issues, will be in libinput 1.30, due somewhen around October or November. This year. 2025, that is.

    Which means now is the best time to jump in and figure out whether your favourite bug can be solved with a plugin. If so, let us know, and if not, then definitely let us know so we can figure out if the API needs changes. The API documentation for Lua plugins is now online too and will auto-update as changes to it get merged. There have been a few minor changes to the API since the last post, so please refer to the documentation for details. Notably, the version negotiation was re-done so that both libinput and plugins can support select versions of the plugin API. This will allow us to iterate on the API over time while designating some APIs as effectively LTS versions, minimising plugin breakage. Or so we hope.

    What warrants a new post is that we merged a new feature for plugins, or rather, ahaha, a non-feature. Plugins now have access to an API that allows them to disable certain internal features that are not publicly exposed, e.g. palm detection. The reasons why libinput doesn't have a lot of configuration options have been explained previously (though we actually have quite a few options), but let me recap for this particular use-case: libinput doesn't have a config option for e.g. palm detection because we have several different palm detection heuristics and they depend on device capabilities. Very few people want no palm detection at all[1], so disabling it outright means you get a broken touchpad, and the alternative is that we get to add configuration options for every palm detection mechanism. And keep those supported forever because, well, workflows.

    But plugins are different: they are designed to take over some functionality. So the Lua API has an EvdevDevice:disable_feature("touchpad-palm-detection") function that takes a string with the feature's name (easier to make backwards/forwards compatible this way). This example will disable all palm detection within libinput, and the plugin can implement said palm detection itself. At the time of writing, the following self-explanatory features can be disabled: "button-debouncing", "touchpad-hysteresis", "touchpad-jump-detection", "touchpad-palm-detection", "wheel-debouncing". This list is mostly based on "probably good enough", so as above - if there's something else then we can expose that too.

    So hooray for fewer features and happy implementing!

    [1] Something easily figured out by disabling palm detection or using a laptop where palm detection doesn't work thanks to device issues


      Steven Deobald: 2025-08-01 Foundation Update

      news.movim.eu / PlanetGnome • 6 August

    This will perhaps be the least-fun Foundation Update of the year. July 27th was supposed to be the “Board Hack Day” (yay? … policy hackfest? everyone’s favourite?), but it ended up being consumed by more immediately pressing issues. Somewhat unfortunate, really, as I think we were all looking forward to clearing the executive tasks off the Board project wall and boiling the Board’s work down to more strategic and policy work. I suppose we’ll just have to do that throughout the year.

    Many people are on vacation right now, so the Foundation feels somewhat quiet in its post-GUADEC moment. Budget planning is happening. Doozers are doozering. The forest gnomes are probably taking a nap. There’s a lot of annoying paperwork being shuffled around.

    I hope for a more exciting Foundation Update in the next week or two. Can’t win ’em all.


      Jussi Pakkanen: Let's properly analyze an AI article for once

      news.movim.eu / PlanetGnome • 6 August • 8 minutes

    Recently the CEO of GitHub wrote a blog post called Developers reinvented. It was reposted with various clickbait headlines like GitHub CEO Thomas Dohmke Warns Developers: "Either Embrace AI or Get Out of This Career" (that one feels like an LLM-generated summary of the actual post, which would be ironic if it weren't awful). To my great misfortune I read both of these. Even if we ignore whether AI is useful or not, the writings contain some of the absolute worst reasoning and most stretched logical leaps I have seen in years, maybe decades. If you are ever in need of finding out how not to write a "scientific" text on any given subject, this is the disaster area for you.

    But before we begin, a detour to the east.

    ## Statistics and the Soviet Union

    One of the great wonders of statistical science in the previous century was, without a doubt, the Soviet Union. They managed to invent and perfect dozens of ways to turn data to their liking, no matter the reality. Almost every official statistic issued by the USSR was a lie. Most people know this. But even most of those people do not grasp just how much the stats differed from reality. I sure didn't until I read this book. Let's look at some examples.

    ## Only ever report percentages

    The USSR's glorious statistics tended to be of the type "manufacturing of shoes grew over 600% this five-year period". That certainly sounds a lot better than "in the last five years our factory made 700 pairs of shoes as opposed to 100", or even "7 shoes instead of 1". If you are really forward-thinking, you can even cut down shoe production in the five-year periods when you are not being measured. It makes the stats even more impressive, even though in reality many people have no shoes at all.

    The USSR classified the real numbers as state secrets because the truth would have made them look bad. If a corporation only gives you percentages, they may be doing the same thing. Apply skepticism as needed.

    ## Creative comparisons

    The previous section said that the manufacturing of shoes had grown. Can you tell what it is not saying? That's right: growth over what? It is implied that the comparison is to the previous five-year plan. But it is not. Apparently a common comparison in these cases was the production amounts of the year 1913. This "best practice" was not only used in the early part of the 1900s, it was used far into the 1980s.

    Some of you might wonder: why 1913 and not 1916, which was the last year before the Bolsheviks took over? Simply because 1913 was the century's worst year for Russia as a whole. So if you encounter a claim that "car manufacturing was up 3700%" in some year in the 1980s Soviet Union, now you know what that actually meant.

    "Better" measurements

    According to official propaganda, the USSR was the world's leading country in wheat production. In this case they even listed the production in absolute tonnes. In reality it was all fake. The established way of measuring wheat yields is to measure the "dry weight", that is, the mass of the final processed grains. When it became apparent that the USSR could not compete with the imperial scum, they changed their measurements to "wet weight". This included the mass of everything that came out of the nozzle of a harvester: stalks, rats, mud, rain water, dissidents and so on.

    Some people outside the iron curtain even believed those numbers. Add your own analogy between those people and modern VC investors here.

    ## To business then

    The actual blog post starts with this thing that can be considered a picture.

    What does this choice of image tell us about the person who chose to use it in their blog post?

    1. Said person does not have sufficient technical understanding to grasp the fact that children's toy blocks should, in fact, be affected by gravity.
    2. Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

    Are these the sort of qualities a person in charge of the largest software development platform on Earth should have? No, they are not.

    To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself. Cultural misappropriation is high on the list of core values at GitHub HQ, it seems.

    With that let's move on to the actual content, which is this post from Twitter (to quote Matthew Garrett, I will respect their name change once Elon Musk starts respecting his child's).

    Oh, wow! A field study. That makes things clear. With evidence and all! How can we possibly argue against that?

    Easily. As with a child.

    Let's look at this "study" (and I'm using the word in its loosest possible sense here) and its details with an actual critical eye. The first thing is statistical representativeness. The sample size is 22. According to this sample size calculator I found, the required sample size for a population of just one thousand people would be 278. But, you know, one order of magnitude one way or another, who cares about those? Certainly not business big-shot movers and shakers. Like Stockton Rush, for example.
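
    The post doesn't name the calculator, but the standard formula behind such calculators (95% confidence, 5% margin of error, with a finite-population correction) reproduces the number:

    from math import ceil

    z, p, e, N = 1.96, 0.5, 0.05, 1000   # 95% confidence, 5% margin, N = 1000

    n0 = z**2 * p * (1 - p) / e**2       # ~384 for an unbounded population
    n = n0 / (1 + (n0 - 1) / N)          # finite-population correction
    print(ceil(n))                        # 278 -- versus the 22 people sampled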

    The math above requires unbiased sampling. The post does not even attempt to answer whether that is the case. That would mean getting answers to questions like:

    • How were the 22 people chosen?
    • How many different companies, skill levels, nationalities, genders, age groups etc were represented?
    • Did they have any personal financial incentive on making their new AI tools look good?
    • Were they under any sort of duress to produce the "correct" answers?
    • Was the test run multiple times until it produced the desired result?

    That last one is an age-old trick where you run a test with random results over and over on small groups. Eventually you will get a run that points the way you want. Then you drop the earlier measurements and publish the last one. In "the circles" this is known as data set selection.

    Just to be sure, I'm not saying that is what they did. But if someone drove a dump truck full of money to my house and asked me to create a "study" that produced those results, that is exactly how I would do it. (I would not actually do it because I have a spine.)

    Moving on. The main headline-grabber is "Either you embrace AI or get out of this career". If you actually read the post (I know), what you find is that this is actually a quote from one of the participants. It's a bit difficult to decipher from the phrasing, but my reading is that this is not a grandstanding hurrah for all things AI, but more of an "I guess this is something I'll have to get used to" kind of submission.

    The post then goes on a buzzword-salad tour of statements that range from the incomprehensible to the puzzling. Perhaps the weirdest is this nugget on education:

    Teaching [programming] in a way that evaluates rote syntax or memorization of APIs is becoming obsolete.

    It is not "becoming obsolete". It has been considered the wrong thing to do for as long as computer science has existed. Learning the syntax of most programming languages takes a few lessons, the rest of the semester is spent on actually using the language to solve problems. Any curriculum not doing that is just plain bad. Even worse than CS education in Russia in 1913.

    You might also ponder: if the author is so out of touch with reality on this simple issue, how completely off might the rest of his statements be?

    ## A magician's shuffle

    As nonsensical as the Twitter post is, we have not yet even mentioned the biggest misdirection in it. You might not even have noticed it. I certainly did not until I read the actual post. See if you can spot it.

    Ready? Let's go.

    The actual fruit of this "study" boils down to this snippet.

    Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition.

    Let that sink in. For the last several years the main supposed advantage of AI tools has been the fact that they save massive amounts of developer time. This has led to the "fire all your developers and replace them with AI bots" trend sweeping the nation. Now even this AI advertisement of a "study" cannot find any such advantage and starts backpedaling into something completely different. Just like we have always been at war with Eastasia, AI has never been about "productivity". No. No. It is all about "increased ambition", whatever that is. The post then carries on with this even more baffling statement.

    When you move from thinking about reducing effort to expanding scope, only the most advanced agentic capabilities will do.

    Really? Only the most advanced agentics, you say? That is a bold statement to make, given that the leading reason for software project failure is scope creep. This is the one area where human beings have a decades-long track record of beating any artificial system. Even if machines were able to do it better, "Make your project failures more probable! Faster! Spectacularer!" is a tough rallying cry to sell.

    To conclude, the actual findings of this "study" seem to be that:

    1. AI does not improve developer productivity or skills
    2. AI does increase developer ambition

    This is strictly worse than the current state of affairs.

      Georges Basile Stavracas Neto: GUADEC 2025

      news.movim.eu / PlanetGnome • 5 August • 3 minutes

    I’m back from GUADEC 2025. I’m still super tired, but I wanted to write down my thoughts before they vanish into the eternal void.

    First, let me start with a massive thank you to everyone who helped organize the event. It looked extremely well rounded, the kind of well rounded that can only be explained by a lot of work from the organizers. Thank you all!

    ## Preparations

    For this GUADEC I did something special: little calendars!

    3D printed calendars on the table

    These were 3D printed from a custom model I made, based on the app icon of GNOME Calendar. I brought a small batch to GUADEC and, to my surprise and joy, they vanished in just a couple of hours on the first day! It was very cool to see the calendars around after that:

    A hand holding a 3D printed calendar
    Someone using a smartphone, with a 3D printed calendar visible
    Someone walking with a backpack with a 3D printed calendar
    A backpack with a 3D printed calendar hanging out

    ## Talks

    This year I gave two talks.

    The first one was rather difficult. On the one hand, streaming GNOME development and interacting with people online for more than 6 years has given me many anecdotal insights about the social dynamics of free software. On the other hand, it was very difficult to materialize these insights and summarize them in the form of a talk.

    I’ve received good feedback about this talk, but for some reason I still left it feeling like it missed something. I don’t know what, exactly. But if anyone felt energized to try some streaming, goal accomplished, I guess? 🙂

    The second talk was just a regular update on the XDG Desktop Portal project. It was made with the sole intention of preparing the ground for the Flatpak & Portals BoF that occurred later. Not much to say about it; it was a technical talk, with some good questions and discussions after that.

    As for the talks that I watched, to me there was one big highlight this GUADEC: Emmanuele’s “Getting Things Done in GNOME”.

    Emmanuele on stage with "How do things happen in GNOME?" on the projector

    Emmanuele published the contents of this talk in article form recently.

    Sometimes, when we’re at an inflection point towards something, the right person with the right sensitivities can say the right things. I think that’s what Emmanuele did here. I think many of us had already been feeling that “maintainer”, in the traditional sense of the word, wasn’t a fitting description of how things have been working lately. Emmanuele gifted us new vocabulary for that: “special interest group”. By the end of GUADEC, we were comfortably using this new descriptor.

    ## Photography

    It’s not exactly a secret that I’ve started dipping my toes into a long-admired hobby: photography. This GUADEC was the first time I traveled with a camera and a lens and actually attempted to document the highlights and impressions.

    I’m not going to dump all of it here, but I found Brescia absolutely fascinating and fell in love with the colors and urban vision there. It’s not the most walkable or car-free city I’ve ever been to, but there are certain aspects of it that caught my attention!

    The buildings had a lovely color palette, sometimes bright, sometimes pastel:

    The textures of some of the wall materials were intriguing:

    I fell in love with how Brescia lights itself:

    I was fascinated by how many Vespas (and similar) could be found around the city, and decided to photograph them all! Some of the prettiest ones:

    The prize for best modeling goes to Sri!

    ## Conclusion

    Massive thanks to everyone who helped organize GUADEC this year. It was a fantastic event. It was great to see old friends and meet new people there. There were many newcomers attending this GUADEC!

    And here’s the GUADEC dinner group photo!

    Group photo during the GUADEC dinner party

      Thibault Martin: TIL that You can spot base64 encoded JSON

      news.movim.eu / PlanetGnome • 5 August • 1 minute

    I was working on my homelab and examined a file that was supposed to contain encrypted content that I could safely commit to a GitHub repository. The file looked like this:

    {
      "serial": 13,
      "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67",
      "meta": {
        "key_provider.pbkdf2.password_key": "eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0="
      },
      "encrypted_data": "ONXZsJhz37eJA[...]",
      "encryption_version": "v0"
    }
    

    Hm, key provider? Password key? In an encrypted file? That doesn't sound right. The problem is that this file is generated by taking a password, deriving a key from it, and encrypting the content with that key. I don't know what the derived key could look like, but it could be that long indecipherable string.

    I asked a colleague to have a look and he said, "Oh, that? It looks like base64-encoded JSON. Give it a go to see what's inside."

    I was incredulous but gave it a go, and it worked!!

    $ echo "eyJzYW[...]" | base64 -d
    {"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}
    

    I couldn't believe my colleague had decoded the base64 string on the fly, so I asked: "What gave it away? Was it the trailing equal signs at the end, for padding? But how did you know it was base64-encoded JSON and not just a base64 string?"

    He replied,

    Whenever you see ey, that's {" and then if it's followed by a letter, you'll get J followed by a letter.

    I did a few tests in my terminal, and he was right! You can spot base64-encoded JSON with the naked eye, and you don't need to decode it on the fly!

    $ echo "{" | base64
    ewo=
    $ echo "{\"" | base64
    eyIK
    $ echo "{\"s" | base64
    eyJzCg==
    $ echo "{\"a" | base64
    eyJhCg==
    $ echo "{\"word\"" | base64
    eyJ3b3JkIgo=
    
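    For the curious, here is why the trick works, bit by bit (a small Python addition to the terminal tests above):

    import base64

    # '{' is 0x7B = 01111011 and '"' is 0x22 = 00100010. Base64 emits one
    # character per 6 bits of input, so the first two output characters
    # depend only on these 12 bits: 011110 -> 'e' and 110010 -> 'y'.
    print(base64.b64encode(b'{"'))       # b'eyI='
    print(base64.b64encode(b'{"salt'))   # b'eyJzYWx0' -- the telltale 'J'
                                         # appears whenever a letter follows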

    Thanks Davide and Denis for showing me this simple but pretty useful trick!