      Christian Hergert: Week 32 Status

      news.movim.eu / PlanetGnome • 11 August • 4 minutes

    Foundry

    This week was largely about getting the new template engine landed so it can be part of the 1.0 ABI. Basically just racing to get everything landed in time to commit to the API/ABI contract.

    • FoundryTextBuffer gained some new type prerequisites to make it easier to write applications against it. Since Foundry is a command line tool as well as a library, we don’t just use GtkTextBuffer; the CLI doesn’t even link against GTK. But it is abstracted in such a way that a GTK application would implement the FoundryTextBuffer interface with a derived GtkSourceBuffer.

    • FoundryTextSettings has landed, which provides a layered approach to text editor settings similar to (but better than) what we currently have in GNOME Builder. There are new modeline, editorconfig, and GSettings-backed settings providers, which apply in that order (with per-file overrides allowed at the tip).

      Where the settings-backed implementation surpasses Builder is that it allows for layering there too. You can have user-overrides by project, project defaults, as well as Foundry defaults.

      I still need to port over the per-language default settings we already have (most of which are shared with Text Editor too) as reasonable defaults.
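      The layered resolution described above can be sketched as a simple priority lookup. This is just an illustration, not the actual Foundry API: the layer and key names are hypothetical, assuming the per-file override at the tip wins, then modelines, then editorconfig, then GSettings, then built-in defaults.

```python
def resolve_setting(key, layers):
    """Return the value from the highest-priority layer that defines key."""
    # layers is ordered lowest to highest priority; first hit from the top wins.
    for layer in reversed(layers):
        if key in layer:
            return layer[key]
    return None

# Hypothetical layers mirroring the order described above
foundry_defaults = {"tab-width": 8, "insert-spaces": False}
gsettings        = {"tab-width": 4}    # user/project settings
editorconfig     = {"insert-spaces": True}
modeline         = {}                  # nothing set in this file
file_override    = {"tab-width": 2}    # per-file override "at the tip"

layers = [foundry_defaults, gsettings, editorconfig, modeline, file_override]
```

      Here `resolve_setting("tab-width", layers)` yields 2 from the per-file override, while `"insert-spaces"` falls through to the editorconfig layer. Note the `in` test rather than `.get()`, so a layer can legitimately set a value to `False`.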

    • To allow changing the GSettings-based text settings above, the foundry settings set ... command gained support for specific paths using the same :/ suffix that the gsettings command uses.

    • Spent some time on the upcoming chat API for models so I can experiment with what is possible when you control the entire tools stack.

    • Dropped some features so they wouldn’t be part of the 1.0. We can implement them later on as time permits. Specifically I don’t want to commit to a MCP or DAP implementation yet since I’m not fond of either of them as an API.

    • The FoundryInput subsystem gained support for license and language inputs. This makes it much simpler to write templates in the new internal template format.

    • Allow running foundry template create ./FILE.template to create a set of files or a project from a template file. That lets you iterate on your own templates for your project without having to have them installed at the right location.

    • Wrote new project templates for empty projects, shared library projects, and gtk4 projects. Still need to finish the gtk4 project a bit to reach feature parity with the version from Builder.

      I am very happy with how the library project turned out because this time around it supports GIR, pkg-config, and VAPI generation, gi-doc, and more. I still need to add potfile support for libraries though.

      I also wrote new templates for creating GObjects and GTK widgets in C (but we can port them to other languages if necessary). This is a new type of “code template” as opposed to “project template”. It still allows for multiple files to be created in the target project.

      What is particularly useful about it though is that we can allow projects to expose templates specific to that project in the UI. In Foundry, that means you have template access to create new plugins, LSPs, and services quite easily.

    • Projects can specify their default license now to make more things just happen automatically for contributors when creating new files.

    • Templates can now include the default project license header simply by doing `{{include "license.c"}}`, where the suffix selects the properly commented license block.

    • The API for expanding templates has changed to return a GListModel of FoundryTemplateOutput. The primary motivator here is that I want to be able to have UI in Builder that lets you preview templates before actually saving them to disk.

    • A new API landed that we had in Builder for listing build targets. Currently, only the meson plugin implements the FoundryBuildTargetProvider . This is mostly plumbing for upcoming features.

    • The new template format is a bit of an amalgamation of a few formats, based on my experience trying to find a maintainable way to write these templates.

      It starts with a GKeyFile block that describes the template and inputs to the template.

      Then you have a series of what looks like markdown code blocks. You can have conditionals around them which allows for optionally including files based on input.

      The filename for the blocks can also be expanded based on template inputs. The expansions are just TmplExpr expressions from template-glib.

      An example can be found at:

      https://gitlab.gnome.org/GNOME/foundry/-/blob/main/plugins/meson-templates/library.project
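      Purely for illustration, a template file of that shape might look something like the following. The key names, conditional syntax, and block markers here are made up from the description above and are not the actual format; refer to the linked library.project for the real syntax.

````
[Template]
Name = Shared Library
Inputs = name;license

{{if license == "gpl-3.0"}}
```src/{{name}}.c
/* {{include "license.c"}} */
#include "{{name}}.h"
```
{{end}}
````

      The idea is simply a GKeyFile header describing the template and its inputs, followed by fenced file blocks whose filenames and contents are expanded with TmplExpr expressions.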

    Template-GLib

    • Found some oopsies in how TmplExpr evaluated branches, so I fixed those up. Last year I wrote most of a C compiler and taking a look at this code really makes me want to rewrite it all. The intermixing of Yacc and GObject Introspection is ripe for improvement.

    • Added support for == and != of GStrv expressions.

    Other

    • Played CI whack-a-mole for ICU changes in nightly SDKs

    • Propagated foundry changes to projects depending on it so that we have useful flatpak manifests with minimal feature flags enabled.

    • Took a look at some performance issues in GNOME OS and passed along some debugging techniques. Especially useful for when all you’ve got is an array of registers and need to know something.

    • Libpeas release for GNOME 49 beta

      Peter Hutterer: xkeyboard-config 2.45 has a new install location

      news.movim.eu / PlanetGnome • 11 August • 2 minutes

    This is a heads-up that if you install xkeyboard-config 2.45 (the package that provides the XKB data files), some manual interaction may be needed. Version 2.45 has changed the install location after over 20 years to be a) more correct and b) more flexible.

    When you select a keyboard layout like "fr" or "de" (or any other ones really), what typically happens in the background is that an XKB parser (xkbcomp if you're on X, libxkbcommon if you're on Wayland) goes off and parses the data files provided by xkeyboard-config to populate the layouts. For historical reasons these data files have resided in /usr/share/X11/xkb and that directory is hardcoded in more places than it should be (i.e. more than zero). As of xkeyboard-config 2.45 however, the data files are now installed in the much more sensible directory /usr/share/xkeyboard-config-2 with a matching xkeyboard-config-2.pc for anyone who relies on the data files. The old location is symlinked to the new location so everything keeps working, people are happy, no hatemail needs to be written, etc. Good times.

    The reason for this change is two-fold: moving it to a package-specific directory opens up the (admittedly mostly theoretical) use-case of some other package providing XKB data files. But even more so, it finally allows us to start versioning the data files and introduce new formats that may be backwards-incompatible for current parsers. This is not yet the case, however; the current format in the new location is guaranteed to be the same as the format we've always had. It's really just a location change in preparation for future changes.

    Now, from an upstream perspective this is not just hunky, it's also dory. Distributions however struggle a bit more with this change because of packaging format restrictions. RPM for example is quite unhappy with a directory being replaced by a symlink, which means that Fedora and OpenSuSE have to resort to the .rpmmoved hack. If you have ever used a custom layout and/or added other files to the XKB data files, you will need to manually move those files from /usr/share/X11/xkb.rpmmoved/ to the new equivalent location. If you have never done that, you can just delete /usr/share/X11/xkb.rpmmoved. Of course, if you're on Wayland you shouldn't need to modify system directories anyway since you can do it in your $HOME.

    There are corresponding issues on what to do on Arch and Gentoo. I'm not immediately aware of other distributions' issues, but if you search for them in your bugtracker you'll find them.

      Steven Deobald: 2025-08-08 Foundation Update

      news.movim.eu / PlanetGnome • 9 August • 3 minutes

    ## Opaque Things

    Very unfortunately, most of my past two weeks have been spent on “opaque things.” Let’s just label that whole bundle “bureaucracy” for now.

    ## Apology

    I owe a massive apology to the GIMP team. These folks have been incredibly patient while the GNOME Foundation, as their fiscal host, sets up a bunch of new paperwork for them. It’s been a slow process.

    Due to the above bureaucracy, this process has been even slower than usual. Particularly in GIMP’s big pearl anniversary year, this is especially frustrating for them and the timing is awful.

    As long as I am in this role, I accept responsibility for the Foundation’s struggles in supporting the GIMP project. I’m sorry, folks, and I hope we get past the current round of difficulties quickly so we can provide the fiscal hosting you deserve.

    ## Draft Budget

    Thanks to Deepa’s tremendous work teasing apart our financial reporting, she and I have a draft budget to present to the Board at their August 12th regular meeting.

    The board has already seen a preliminary version of the budget as of their July 27th meeting and an early draft as of last week. Our schedule has an optional budget review August 26th, another required budget review on September 9th and, if we need a final meeting to pass the budget, we can do that at the September 23rd board meeting. Last year, Richard passed the first on-time GNOME Foundation budget in many years. I’m optimistic we can pass a clear, uncomplicated budget a month early in 2025.

    (Our fiscal year is October – September.)

    I’m incredibly grateful to Deepa for all the manual work she’s put into our finances during an already busy summer. Deepa’s also started putting in place a tagging mechanism with our bookkeepers which will hopefully resolve this in a more automatic way in the future. The tagging mechanism will work in conjunction with our chart of accounts, which doesn’t really represent the GNOME Foundation’s operating capital at all: a 501(c)(3)’s chart of accounts is geared toward the picture we show to the IRS, not the picture we show to the Board or use to pass a budget. It’s the same data, but two very different lenses on it.

    Understanding is the first step. After a month of wrestling with the data, we now understand it. Automation is the second step. The board should know exactly where the Foundation stands, financially, every year, every quarter, and every month… without the need for manual reports.

    ## 501(c)(3) Structural Improvements

    As I mentioned in my GUADEC keynote, the GNOME Foundation has a long road ahead of it to become an ideal 501(c)(3), but we just need to make sure we’re focused on continuous improvement. Other non-profits also struggle with their own structural challenges, since charities can take many different shapes and are often held together almost entirely by volunteers. Every organization is different, and every organization wants to succeed.

    I had the pleasure of some consultation conversations this week with Amy Parker of the OpenSSL Foundation and Halle Baksh, a 501(c) expert from California. Delightful folks. Super helpful.

    One of the greatest things I’m finding about working in the non-profit space is that we have a tremendous support network. Everyone wants us to succeed and every time we’re ready to take the next step, there will be someone there to help us level up. Every time we do, the Foundation will get more mature and, as a result, more effective.

    ## Travel Policy Freeze

    I created some confusion by mentioning the travel policy freeze in my AGM slides. A group of hackers asked me at dinner on the last night in Brescia, “does this mean that all travel for contributors is cancelled?”

    No, absolutely not. The travel policy freeze means that every travel request must be authorized by the Executive Director and the President; we have temporarily suspended the Travel Committee’s spending authority. We will of course still sponsor visas for interns and other program-related travel. The intention of the travel policy freeze is to reduce administrative travel costs and add them to program travel, not the other way around.

    Sorry if I freaked anyone out. The goal (as long as I’m around) will always be to push Foundation resources toward programs like development, infrastructure, and events.

    ## Getting Back To Work

    Sorry again for the short update. I’m optimistic that we’ll get over this bureaucratic bump in the road soon enough. Thanks for your patience.

      Sebastian Wick: GNOME 49 Backlight Changes

      news.movim.eu / PlanetGnome • 8 August • 3 minutes

    One of the things I’m working on at Red Hat is HDR support. HDR is inherently linked to luminance (brightness, but ignoring human perception) which makes it an important parameter for us that we would like to be in control of.

    One reason is rather stupid. Most external HDR displays refuse to let the user control the luminance in their on-screen-display (OSD) if the display is in HDR mode. Why? Good question. Read my previous blog post .

    The other reason is that the amount of HDR headroom we have available is the ratio between the maximum luminance we can achieve and the luminance we assign to a sheet of white paper (the reference white level). For power consumption reasons, we want to be able to dynamically change the available headroom depending on how much headroom the content can make use of. If there is no HDR content on the screen, there is no need to crank up the backlight to give us more headroom, because the headroom would go unused.

    To work around the first issue, mutter can change the signal it sends to the display, so that white is not a signal value of 1.0 but somewhere else between 0 and 1. This essentially emulates a backlight in software. The drawback is that we’re not using a bunch of bits in the signal anymore and issues like banding might become more noticeable, but since we’re using this only with 10- or 12-bit HDR signals, this isn’t an issue in practice.
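    As a toy model of the relationship described above (working in linear light and ignoring the actual PQ/gamma transfer function a real implementation must go through), the emulated backlight simply maps reference white to a signal value below 1.0, and the range left above it is the HDR headroom:

```python
def software_backlight(display_max_nits, reference_white_nits):
    # Emulated backlight: reference white gets this signal value instead of 1.0
    # (linear light only; real signals are PQ- or gamma-encoded).
    white_signal = reference_white_nits / display_max_nits
    # Headroom: how much brighter than "paper white" HDR highlights can get.
    headroom = display_max_nits / reference_white_nits
    return white_signal, headroom

# A 1000-nit panel with a 200-nit reference white: white sits at 0.2 in
# linear light, leaving 5x headroom for HDR highlights.
```

    Dimming the reference white (lowering `white_signal`) grows the headroom without touching the panel's hardware backlight, which is exactly why the two need to be coordinated.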

    This has been implemented in GNOME 48 already, but it was an API that mutter exposed and Settings showed as “HDR Brightness”.

    "HDR Brightness" in GNOME Settings

    The second issue requires us to be able to map the backlight value to luminance, and to change the backlight atomically with an update to the screen. We could work toward adding those things to the existing sysfs backlight API, but it turns out that there are a number of problems with it: mapping a sysfs entry to a connected display is really hard (GNOME pretends that there is only one single internal display that can ever be controlled), writing a value to the backlight requires root privileges or calling a logind D-Bus API, one internal panel can expose multiple backlights, and a value of 0 can mean the display turns off or is just really dim.

    So a decision was made to create a new API that will be part of KMS, the API that we use to control the displays.

    Until now, the sysfs backlight was controlled by gnome-settings-daemon, and GNOME Shell called a D-Bus API when the screen brightness slider was moved.

    To recap:

    • There is a sysfs backlight API for basically one internal panel, which requires logind or a setuid helper executable
    • gnome-settings-daemon controlled this single screen sysfs backlight
    • Mutter has a “software backlight” feature
    • KMS will get a new backlight API that needs to be controlled by mutter

    Overall, this is quite messy, so I decided to clean this up.

    Over the last year, I moved the sysfs backlight handling from gnome-settings-daemon into mutter, added logic to decide which backlight to use (sysfs or the “software backlight”), and made it generic so that any screen can have a backlight. This means mutter is the single source of truth for the backlight itself.

    The backlight gets its value from a number of sources. The user can configure the screen brightness in the quick settings menu and via keyboard shortcuts. Power saving features can kick in and dim the screen. Lastly, an Ambient Light Sensor (ALS) can take control over the screen brightness. To make things more interesting, a single “logical monitor” can have multiple hardware monitors, each of which can have a backlight. All of that logic now sits neatly in GNOME Shell, which takes signals from gnome-settings-daemon about the ALS and dimming. I also changed the Quick Settings UI to make it possible to control the brightness on multiple screens, and removed the old “HDR Brightness” from Settings.

    All of this means that we can now handle screen brightness on multiple monitors and when the new KMS backlight API makes it upstream, we can just plug it in, and start to dynamically create HDR headroom.

      Richard Hughes: LVFS Sustainability Plan

      news.movim.eu / PlanetGnome • 8 August • 2 minutes

    tl;dr: I’m asking the biggest users of the LVFS to sponsor the project.

    The Linux Foundation is kindly paying for all the hosting costs of the LVFS, and Red Hat pays for all my time — but as the LVFS grows and grows that’s going to be less and less sustainable longer term. We’re trying to find funding to hire additional resources as a “me replacement” so that there is backup and additional attention on the LVFS (and so that I can go on holiday for two weeks without needing to take a laptop with me).

    This year there will be a fair-use quota introduced, with different sponsorship levels having a different quota allowance. Nothing currently happens if the quota is exceeded, although there will be additional warnings asking the vendor to contribute. The “associate” (free) quota is also generous, with 50,000 monthly downloads and 50 monthly uploads. This means that almost all the 140 vendors on the LVFS should expect no changes.

    Vendors providing millions of firmware files to end users (and deriving tremendous value from the LVFS…) should really either be providing a developer to help write shared code, design abstractions and review patches (like AMD does) or allocate some funding so that we can pay for resources to take action for them. So far no OEMs provide any financial help for the infrastructure itself, although two have recently offered — and we’re now in a position to “say yes” to the offers of help.

    I’ve written an LVFS Project Sustainability Plan that explains the problem and how OEMs should work with the Linux Foundation to help fund the LVFS.

    I’m aware funding open source software is a delicate matter and I certainly do not want to cause anyone worry. We need the LVFS to have strong foundations; it needs to grow, adapt, and be resilient – and it needs vendor support.

    Draft timeline, which is probably a little aggressive for the OEMs — so the dates might be moved back in the future:

    APR 2025 : We started showing the historical percentage “fair use” download utilization graph in vendor pages. As time goes on this will be recorded into per-protocol sections too.

    downloads over time

    JUL 2025 : We started showing the historical percentage “fair use” upload utilization, also broken into per-protocol sections:

    uploads over time

    JUL 2025 : We started restricting logos on the main index page to vendors joining as startup or above level — note Red Hat isn’t sponsoring the LVFS with money (but they do pay my salary!) — I’ve just used the logo as a placeholder to show what it would look like.

    AUG 2025 : I created this blogpost and sent an email to the lvfs-announce mailing list.

    AUG 2025 : We allow vendors to join as startup or premier sponsors, shown on the main page, and show the badge on the vendor list.

    DEC 2025 : Start showing over-quota warnings on the per-firmware pages

    DEC 2025 : Turn off detailed per-firmware analytics to vendors below startup sponsor level

    APR 2026 : Turn off access to custom LVFS API for vendors below Startup Sponsorship level, for instance:

    • /lvfs/component/{}/modify/json
    • /lvfs/vendors/auth
    • /lvfs/firmware/auth

    APR 2026 : Limit the number of authenticated automated robot uploads for less than Startup Sponsorship levels.

    Comments welcome!

      Michael Meeks: 2025-08-07 Thursday

      news.movim.eu / PlanetGnome • 7 August

    • Packed, hearts set for home, rested by the beach, bus, plane, car - back home. Good to see Leanne; bed early - two hours ahead time-wise.
      Peter Hutterer: libinput and Lua plugins (Part 2)

      news.movim.eu / PlanetGnome • 7 August • 2 minutes

    Part 2 is, perhaps surprisingly, a follow-up to libinput and lua-plugins (Part 1).

    The moon has circled us a few times since that last post and some update is in order. First of all: all the internal work required for plugins was released as libinput 1.29 but that version does not have any user-configurable plugins yet. But cry you not my little jedi and/or sith lord in training, because support for plugins has now been merged and, barring any significant issues, will be in libinput 1.30, due somewhen around October or November. This year. 2025 that is.

    Which means now is the best time to jump in and figure out if your favourite bug can be solved with a plugin. And if so, let us know and if not, then definitely let us know so we can figure out if the API needs changes. The API Documentation for Lua plugins is now online too and will auto-update as changes to it get merged. There have been a few minor changes to the API since the last post so please refer to the documentation for details. Notably, the version negotiation was re-done so both libinput and plugins can support select versions of the plugin API. This will allow us to iterate the API over time while designating some APIs as effectively LTS versions, minimising plugin breakages. Or so we hope.

    What warrants a new post is that we merged a new feature for plugins, or rather, ahaha, a non-feature. Plugins now have an API accessible that allows them to disable certain internal features that are not publicly exposed, e.g. palm detection. The reason why libinput doesn't have a lot of configuration options has been explained previously (though we actually have quite a few options), but let me recap for this particular use-case: libinput doesn't have a config option for e.g. palm detection because we have several different palm detection heuristics and they depend on device capabilities. Very few people want no palm detection at all[1], so a single option disabling it would give you a broken touchpad, and then we would get to add configuration options for every palm detection mechanism. And keep those supported forever because, well, workflows.

    But plugins are different, they are designed to take over some functionality. So the Lua API has an EvdevDevice:disable_feature("touchpad-palm-detection") function that takes a string with the feature's name (easier to make backwards/forwards compatible this way). This example will disable all palm detection within libinput and the plugin can implement said palm detection itself. At the time of writing, the following self-explanatory features can be disabled: "button-debouncing", "touchpad-hysteresis", "touchpad-jump-detection", "touchpad-palm-detection", "wheel-debouncing". This list is mostly based on "probably good enough" so, as above, if there's something else then we can expose that too.

    So hooray for fewer features and happy implementing!

    [1] Something easily figured out by disabling palm detection or using a laptop where palm detection doesn't work thanks to device issues

      Steven Deobald: 2025-08-01 Foundation Update

      news.movim.eu / PlanetGnome • 6 August

    This will perhaps be the least-fun Foundation Update of the year. July 27th was supposed to be the “Board Hack Day” (yay?… policy hackfest? everyone’s favourite?), but it ended up consumed with more immediately pressing issues. Somewhat unfortunate, really, as I think we were all looking forward to removing the executive tasks from the Board project wall and boiling their work down to more strategic and policy work. I suppose we’ll just have to do that throughout the year.

    Many people are on vacation right now, so the Foundation feels somewhat quiet in its post-GUADEC moment. Budget planning is happening. Doozers are doozering. The forest gnomes are probably taking a nap. There’s a lot of annoying paperwork being shuffled around.

    I hope for a more exciting Foundation Update in the next week or two. Can’t win ’em all.

      Jussi Pakkanen: Let's properly analyze an AI article for once

      news.movim.eu / PlanetGnome • 6 August • 8 minutes

    Recently the CEO of Github wrote a blog post called Developers reinvented. It was reposted with various clickbait headings like GitHub CEO Thomas Dohmke Warns Developers: "Either Embrace AI or Get Out of This Career" (that one feels like an LLM generated summary of the actual post, which would be ironic if it wasn't awful). To my great misfortune I read both of these. Even if we ignore whether AI is useful or not, the writings contain some of the absolute worst reasoning and stretched logical leaps I have seen in years, maybe decades. If you are ever in need of finding out how not to write a "scientific" text on any given subject, this is the disaster area for you.

    But before we begin, a detour to the east.

    Statistics and the Soviet Union

    One of the great wonders of statistical science of the previous century was without a doubt the Soviet Union. They managed to invent and perfect dozens of ways to turn data to their liking, no matter the reality. Almost every official statistic issued by the USSR was a lie. Most people know this. But even most of those do not grasp just how much the stats differed from reality. I sure didn't until I read this book. Let's look at some examples.

    Only ever report percentages

    The USSR's glorious statistics tended to be of the type "manufacturing of shoes grew over 600% this five year period". That certainly sounds a lot better than "In the last five years our factory made 700 pairs of shoes as opposed to 100" or even "7 shoes instead of 1". If you are really forward thinking, you can even cut down shoe production on those five year periods when you are not being measured. It makes the stats even more impressive, even though in reality many people have no shoes at all.
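    The shoe arithmetic is worth spelling out, because it is the whole trick: the same percentage falls out no matter how small the absolute numbers are.

```python
def growth_percent(old, new):
    # Percentage growth over a baseline; tiny absolutes still yield big numbers.
    return (new - old) / old * 100

# 100 -> 700 pairs of shoes is 600% growth; so is 1 -> 7 shoes.
```

    A headline of "600% growth" is satisfied equally well by a factory that matters and one that made seven shoes.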

    The USSR classified the real numbers as state secrets because the truth would have made them look bad. If a corporation only gives you percentages, they may be doing the same thing. Apply skepticism as needed.

    Creative comparisons

    The previous section said the manufacturing of shoes has grown. Can you tell what it is not saying? That's right, growth over what? It is implied that the comparison is to the previous five year plan. But it is not. Apparently a common comparison in these cases was the production amounts of the year 1913. This "best practice" was not only used in the early part of the 1900s, it was used far into the 1980s.

    Some of you might wonder why 1913 and not 1916, which was the last year before the bolsheviks took over? Simply because that was the century's worst year for Russia as a whole. So if you encounter a claim that "car manufacturing was up 3700%" some year in 1980s Soviet Union, now you know what that actually meant.

    "Better" measurements

    According to official propaganda, the USSR was the world's leading country in wheat production. In this case they even listed out the production in absolute tonnes. In reality it was all fake. The established way of measuring wheat yields is to measure the "dry weight", that is, the mass of final processed grains. When it became apparent that the USSR could not compete with imperial scum, they changed their measurements to "wet weight". This included the mass of everything that came out from the nozzle of a harvester, such as stalks, rats, mud, rain water, dissidents and so on.

    Some people outside the iron curtain even believed those numbers. Add your own analogy between those people and modern VC investors here.

    To business then

    The actual blog post starts with this thing that can be considered a picture.

    What message would this choice of image tell about the person who chose to use it in their blog post?

    1. Said person does not have sufficient technical understanding to grasp the fact that children's toy blocks should, in fact, be affected by gravity.
    2. Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

    Are these the sort of qualities a person in charge of the largest software development platform on Earth should have? No, they are not.

    To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself. Cultural misappropriation is high on the list of core values at Github HQ it seems.

    With that let's move on to the actual content, which is this post from Twitter (to quote Matthew Garrett, I will respect their name change once Elon Musk starts respecting his child's).

    Oh, wow! A field study. That makes things clear. With evidence and all! How can we possibly argue against that?

    Easily. As with a child.

    Let's look at this "study" (and I'm using the word in its loosest possible sense here) and its details with an actual critical eye. The first thing is statistical representativeness. The sample size is 22. According to this sample size calculator I found, the required sample size for a population of just one thousand people would be 278, but, you know, one order of magnitude one way or another, who cares about those? Certainly not business big shot movers and shakers. Like Stockton Rush for example.
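    The 278 figure matches the standard Cochran formula with a finite-population correction, assuming the calculator's usual defaults of 95% confidence, a 5% margin of error, and maximum variance (p = 0.5):

```python
import math

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    # Cochran's formula for an effectively infinite population...
    n0 = z**2 * p * (1 - p) / margin**2
    # ...then the finite-population correction.
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# For a population of 1000 this gives 278; a sample of 22 is nowhere close.
```

    Even for a huge population the requirement only climbs to roughly 385, so 22 is an order of magnitude short either way.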

    The math above requires an unbiased sampling. The post does not even attempt to answer whether that is the case. It would mean getting answers to questions like:

    • How were the 22 people chosen?
    • How many different companies, skill levels, nationalities, genders, age groups etc were represented?
    • Did they have any personal financial incentive on making their new AI tools look good?
    • Were they under any sort of duress to produce the "correct" answers?
    • Was the test run multiple times until it produced the desired result?

    The last one is an age-old trick where you run a test with random results over and over on small groups. Eventually you will get a run that points the way you want. Then you drop the earlier measurements and publish the last one. In "the circles" this is known as data set selection.

    Just to be sure, I'm not saying that is what they did. But if someone drove a dump truck full of money to my house and asked me to create a "study" that produced those results, that is exactly how I would do it. (I would not actually do it because I have a spine.)
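    The repeated-runs trick is easy to demonstrate with a simulation. Here each "study" is 22 coin flips with no real effect at all (every participant likes the tool with probability 0.5), and only the best-looking run gets published:

```python
import random

def best_of_many_runs(runs=200, n=22, seed=1):
    # No real effect: each "participant" favours the tool with probability 0.5.
    rng = random.Random(seed)
    results = [sum(rng.random() < 0.5 for _ in range(n)) / n
               for _ in range(runs)]
    honest = sum(results) / runs   # average across all runs stays near chance
    published = max(results)       # the one run that makes it into the press release
    return honest, published
```

    With a couple hundred tries, some run of 22 will look far better than chance while the honest average sits near 50%, which is exactly why the number of discarded runs matters as much as the published one.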

    Moving on. The main headline grabber is "Either you embrace AI or get out of this career". If you actually read the post (I know), what you find is that this is actually a quote from one of the participants. It's a bit difficult to decipher from the phrasing but my reading is that this is not a grandstanding hurrah of all things AI, but more of a "I guess this is something I'll have to get used to" kind of submission.

    The post then goes on a buzzword-salad tour of statements that range from the incomprehensible to the puzzling. Perhaps the weirdest is this nugget on education:

    Teaching [programming] in a way that evaluates rote syntax or memorization of APIs is becoming obsolete.

    It is not "becoming obsolete". It has been considered the wrong thing to do for as long as computer science has existed. Learning the syntax of most programming languages takes a few lessons, the rest of the semester is spent on actually using the language to solve problems. Any curriculum not doing that is just plain bad. Even worse than CS education in Russia in 1913.

    You might also ponder that if the author is so out of touch with reality in this simple issue, how completely off the rest of his statements might be.

    A magician's shuffle

    As nonsensical as the Twitter post is, we have not yet even mentioned the biggest misdirection in it. You might not even have noticed it yet. I certainly did not until I read the actual post. See if you can spot it.

    Ready? Let's go.

    The actual fruit of this "study" boils down to this snippet.

    Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition.

    Let that sink in. For the last several years the main supposed advantage of AI tools has been the fact that they save massive amounts of developer time. This has led to the "fire all your developers and replace them with AI bots" trend sweeping the nation. Now even this AI advertisement of a "study" can not find any such advantages and starts backpedaling into something completely different. Just like we have always been at war with Eastasia, AI has never been about "productivity". No. No. It is all about "increased ambition", whatever that is. The post then carries on with this even more baffling statement.

    When you move from thinking about reducing effort to expanding scope, only the most advanced agentic capabilities will do.

    Really? Only the most advanced agentics you say? That is a bold statement to make given that the leading reason for software project failure is scope creep. This is the one area where human beings have a decades-long track record of beating any artificial system. Even if machines were able to do it better, "Make your project failures more probable! Faster! Spectacularer!" is a tough rallying cry to sell.

    To conclude, the actual findings of this "study" seem to be that:

    1. AI does not improve developer productivity or skills
    2. AI does increase developer ambition

    This is strictly worse than the current state of affairs.