
      Daniel García Moreno: Log Detective: GSoC 2025 (part 2)

      news.movim.eu / PlanetGnome • 27 August • 4 minutes

    This week is the last week of Google Summer of Code for this year's edition, so I'll summarize what we have been doing and the future plans for the Log Detective integration in openSUSE.

    I wrote a blog post about this project when it started, so if you haven't read it yet, you can take a look.

    All the work and code was done in the logdetective-obs GitHub repository.

    Aazam, aka the intern, also wrote some blog posts about his work.


    Initial Research

    The idea of the project was to explore ways to integrate Log Detective into the openSUSE development workflow. So we started collecting build failures from the openSUSE Build Service and running them through Log Detective to check whether the output is smart or even relevant.

    The intern created a script to collect logs from build failures, and we stored the LLM answers.
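
    As a rough sketch of that kind of collection script (assuming Python with the requests library; the OBS public log URL format is the one the plugin prints later in this post, while the Log Detective endpoint and payload used here are placeholders, not the real API):

    import json
    import pathlib

    import requests

    OBS_PUBLIC_API = "https://api.opensuse.org/public/build"

    def fetch_build_log(project: str, repo: str, arch: str, package: str) -> str:
        """Download the plain-text build log for a package from OBS."""
        url = f"{OBS_PUBLIC_API}/{project}/{repo}/{arch}/{package}/_log"
        response = requests.get(url, timeout=60)
        response.raise_for_status()
        return response.text

    def explain_log(log_text: str) -> dict:
        """Ask Log Detective for an explanation (endpoint and payload are placeholders)."""
        response = requests.post(
            "https://log-detective.com/frontend/explain/",  # placeholder URL, not the real API
            json={"log": log_text},
            timeout=600,
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        failures = [("devel:languages:python", "openSUSE_Tumbleweed", "x86_64", "python-pygame")]
        for project, repo, arch, package in failures:
            log_text = fetch_build_log(project, repo, arch, package)
            answer = explain_log(log_text)
            # Store both the raw log and the LLM answer for later comparison.
            pathlib.Path(f"{package}.log").write_text(log_text)
            pathlib.Path(f"{package}.json").write_text(json.dumps(answer, indent=2))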

    Doing this, we found that the explain feature on log-detective.com is very slow and that the local tool is less accurate.

    The results we got weren't too good, but at this point of the project that is expected. The model is being trained and will improve with every new log that users send.

    Local vs Remote, model comparison

    The logdetective command line tool is nice, but LLMs require a lot of resources to run locally. This tool also uses the models published by Fedora on huggingface.co, so it's not as accurate as the remote instance, which has a better-trained and up-to-date model.

    In any case, the local logdetective tool is interesting and, at this moment, it's faster than the deployed log-detective.com.

    Using this tool, Aazam did some research, comparing the output with different models.

    logdetective apache-arrow_standard_x86_64.log\
      --model unsloth/Qwen2.5-Coder-7B-Instruct-128K-GGUF\
      -F Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf\
      --prompt ~/prompt.yaml
    

    So, using other specific models, the logdetective tool can return better results.

    The plugin

    openSUSE packagers use the osc tool to build packages on build.opensuse.org. This tool can be extended with plugins, and we created a new plugin to use Log Detective.
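
    The real plugin lives in the logdetective-obs repository linked above. Purely as an illustration of how osc plugins are usually wired up (a Python file dropped into ~/.osc-plugins/ that defines a do_<command> function with cmdln option decorators), here is a simplified, hypothetical sketch rather than the actual plugin code:

    # ~/.osc-plugins/logdetective.py -- simplified illustration, not the real plugin
    from osc import cmdln  # osc usually provides cmdln to plugin files; import kept for clarity


    @cmdln.option('-p', '--project', metavar='PROJECT', help='OBS project, e.g. devel:languages:python')
    @cmdln.option('--package', metavar='PACKAGE', help='package whose build log should be analyzed')
    @cmdln.option('--repo', metavar='REPO', help='repository, e.g. openSUSE_Tumbleweed')
    def do_ld(self, subcmd, opts, *args):
        """${cmd_name}: explain a failed build with Log Detective

        ${cmd_usage}
        ${cmd_option_list}
        """
        # Build the public OBS log URL (arch hardcoded here to keep the sketch short).
        log_url = (f"https://api.opensuse.org/public/build/{opts.project}/"
                   f"{opts.repo}/x86_64/{opts.package}/_log")
        print(f"Log url: {log_url}")
        # ...download the log and send it to Log Detective, as in the script above...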

    So a packager can get an AI explanation of a build failure with a simple command:

    $ osc ld -r -p devel:languages:python --package python-pygame --repo openSUSE_Tumbleweed
    🔍 Running logdetective for python-pygame (openSUSE_Tumbleweed/x86_64)...
    Log url: https://api.opensuse.org/public/build/devel:languages:python/openSUSE_Tumbleweed/x86_64/python-pygame/_log
    🌐 Sending log to LogDetective API...
     The RPM build process for the `python-pygame` package failed during the
    `%check` phase, as indicated by the "Bad exit status from /var/tmp/rpm-
    tmp.hiddzT (%check)" log snippet. This failure occurred due to seven test
    failures, one error, and six skipped steps in the test suite of the package.
    
    
    
    The primary issue causing the build failure seems to be the traceback error in
    the `surface_test.py` file, which can be seen in the following log snippets:
    
    
    
    [[  278s] Traceback (most recent call last):
    [  278s]   File "/home/abuild/rpmbuild/BUILD/python-
    pygame-2.6.1-build/BUILDROOT/usr/lib64/python3.11/site-
    packages/pygame/tests/surface_test.py", line 284, in test_copy_rle
    ]
    [[  278s] Traceback (most recent call last):
    [  278s]   File "/home/abuild/rpmbuild/BUILD/python-
    pygame-2.6.1-build/BUILDROOT/usr/lib64/python3.11/site-
    packages/pygame/tests/surface_test.py", line 274, in
    test_mustlock_surf_alpha_rle
    ]
    [[  278s] Traceback (most recent call last):
    [  278s]   File "/home/abuild/rpmbuild/BUILD/python-
    pygame-2.6.1-build/BUILDROOT/usr/lib64/python3.11/site-
    packages/pygame/tests/surface_test.py", line 342, in test_solarwolf_rle_usage
    ]
    
    
    
    These traceback errors suggest that there are issues with the test functions
    `test_copy_rle`, `test_mustlock_surf_alpha_rle`, and `test_solarwolf_rle_usage`
    in the `surface_test.py` file. To resolve this issue, you should:
    
    
    
    1. Identify the root cause of the traceback errors in each of the functions.
    2. Fix the issues found, either by modifying the test functions or correcting
    the underlying problem they are testing for.
    3. Re-build the RPM package with the updated `surface_test.py` file.
    
    
    
    It's also recommended to review the other test failures and skipped steps, as
    they might indicate other issues with the package or its dependencies that
    should be addressed.
    

    Or, if we are building locally, it's possible to run the analysis using the installed logdetective command line tool. But this method is less accurate, because the model used is not smart enough yet:

    $ osc build
    [...]
    $ osc ld --local-log --repo openSUSE_Tumbleweed
    Found local build log: /var/tmp/build-root/openSUSE_Tumbleweed-x86_64/.build.log
    🚀 Analyzing local build log: /tmp/tmpuu1sg1q0
    INFO:logdetective:Loading model from fedora-copr/Mistral-7B-Instruct-v0.3-GGUF
    ...
    

    Future plans

    1. Adding submit-log functionality to osc-ld-plugin. In progress. This will make it easier to collaborate with log-detective.com, allowing openSUSE packagers to send new data for model training.
    2. Gitea bot. The openSUSE development workflow is moving to Git, so we can create a bot that comments on pull requests for packages that fail to build, similar to what the Fedora people have for the CentOS GitLab.
    3. Add a new tab to the log-detective.com website to submit logs directly from OBS.

    This was an interesting Summer of Code project, and I learned a bit about LLMs. I think this project has a lot of potential: it can be integrated into different workflows, and an expert LLM can be a great tool to help packagers by summarizing big logs, tagging similar failures, etc.

    We have a lot of packages and a lot of data in OBS, so we should start feeding the LLM with this data; in combination with the Fedora project, Log Detective could become a real expert in open source RPM packaging.


      Jussi Pakkanen: Reimplementing argparse in Pystd

      news.movim.eu / PlanetGnome • 26 August • 1 minute

    One of the pieces of the Python standard library I tend to use the most is argparse. It is really convenient to use, so I chose to implement it in Pystd. The conversion was fairly simple, with one exception. As C++ is not duck typed, adapting the behaviour to be strictly typed while still feeling "the same" took some thinking. Parsing command line options is also quite complicated and filled with weird edge cases. For example, if you have short options -a and -b, then according to some command line parsers (but not others) an argument of -ab is valid (but only sometimes).
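
    For reference, Python's own argparse happens to be one of the parsers that accepts the combined form; a quick sketch of the edge case:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("-a", action="store_true")
    parser.add_argument("-b", action="store_true")

    # argparse interprets "-ab" as "-a -b" when both are flag-style options.
    print(parser.parse_args(["-ab"]))  # Namespace(a=True, b=True)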

    I chose to ignore all the hard bits and instead focus on the core parts that I use most of the time, meaning:

    • Short and long command line arguments
    • Arguments can be of type bool, int, string or array of strings
    • Full command line help
    • Same actions as in argparse
    The main missing feature is subcommands (and all of the edge cases, obviously). That being said, approximately 400 lines of code later, we have this:

    The only cheat is that I did the column alignment by hand due to laziness. Using the new implementation resembles regular argparse (full code is here). First you create the parser and options:

    Then you do the parse and use the results:

    Compiling this test application and all support code from scratch takes approximately 0.2 seconds.


      Christian Hergert: Status Week 34

      news.movim.eu / PlanetGnome • 26 August • 3 minutes

    Foundry

    • Spent a bit of time working out how we can enable selection of app patterns in Foundry. The goal here would be to have some very common libadwaita usage patterns available for selection in the new-project guide.

      Ultimately it will rely on FoundryInputCombo / FoundryInputChoice but we’ll have to beef it up to support iconography.

    • Finished up a couple of speed-runs so they can be uploaded to GitLab. chergert/assist and chergert/staged are there and can serve as an example of how to use the libfoundry API.

    • A big portion of this week has been figuring out how I want to expose tweaks from libfoundry into applications. There are a lot of caveats here which make it somewhat tricky.

      For example, not every application will need every toggle, so we need a way to filter them. GListModel is probably our easiest way out with this, but we’ll likely need a bit more control over provenance of tweaks here for filtering.

    • Tweak engine has a new “path” design which allows us to dynamically query available tweaks as you dive down deeper into the UI. This is primarily to help avoid some of the slower parts of the tweaks engine in GNOME Builder.

      Also, to help support this, we can define tweaks using static data which allows for registration/query much faster. For example, there is no need to do UI merging since that can happen automatically.

      There are also cases where you may need to register tweaks which are more complex than simple GSettings. We should be able to accommodate that just fine.

    • Added API for add/remove to Git stage. Much simpler than using the libgit2 index APIs directly, for the common things.

      We may want to add a “Stage” API at some point. But for now, the helpers do the job for the non-partial-stage use-case.

      Just foundry_git_vcs_stage_entry(vcs, entry, contents) where the entry comes from your status list. You can retrieve that with foundry_git_vcs_list_status(vcs). If you are brave enough to do your own diff/patch you can implement staging lines with this too.

      However, that is probably where we want an improved stage helper.

    • Added API for creating commits.

      Much easier to do foundry_git_vcs_commit(vcs, msg, name, email) than the alternatives.

    • Running LLM tools can now be done through the conversation object, which allows for better retention of state, specifically because the conversation can track a history object other than a simple wrapped FoundryLlmMessage.

      For example, some message subclasses may have extra state like a “directory listing” which UI could use to show something more interesting than some text.

    • Simplify creating UI around FoundryLlmConversation with a new busy property that you can use to cancel your in-flight futures.

    • Fixed an issue where running the LLM tool (via subprocess) would not proxy the request back to the parent UI process. Fixing that means that you can update the application UI when the conversation is running a tool. If you were, for example, to cancel the build then the tool would get canceled too.

    • Made the spellcheck integration work the same as we do in Builder and Text Editor which is to move the cursor first on right-click before showing corrections.

    Libspelling

    • New release with some small bugfixes.

    Levers

    • Another speed run application which is basically the preferences part of Builder but on top of Foundry. The idea here is that you can just drop into a project and tweak most aspects of it.

      Not intended to be a “real” application like the other speed-runs, but at least it helps ensure that the API is relatively useful.


      Pablo Correa Gomez: Retrospective of my term as a Director for the GNOME Foundation

      news.movim.eu / PlanetGnome • 25 August • 7 minutes

    In the spring of 2024 I presented my candidature, and eventually got elected to the Board of Directors of the GNOME Foundation. Directors are elected democratically by the GNOME Foundation members, and positions are usually up for election every 2 years. However, when I started my term there was one position open for a single year (due to some changes in the composition of the Board), and I volunteered for it, so my term expired more or less a month ago. When directors get elected, the information that voters have to decide on their vote is:

    • Their personal connection to the candidates
    • The candidatures themselves
    • And further questions the voters might ask about specific topics

    For the sake of transparency and accountability, I am going to discuss my own personal candidature against the work that I contributed to the Board over the last year. Although pretty basic and a self-assessment, I believe this is important for any organization. Without retrospectives, feedback, and public discussion it is very hard to learn lessons, make improvements, and ensure knowledge transfer. Not to mention that public scrutiny and discussion are crucial for any kind of functional democratic governance. Though there are reasons to sometimes not discuss things in public, most of my work for the last year can indeed be discussed in public.

    Candidature goals

    I joined the Board to contribute mostly from the economic perspective. I wanted to become part of the Finance Committee to help improve:

    • The communication about economic and funds-related topics. My goal was to ensure that funds were handled more transparently, and to build trust by making sure the information reported was detailed and correct, avoiding issues like those of which I spoke in a previous blog post.
    • The way funds were assigned, to make sure those were spent on fulfilling the goals we set to the Foundation.

    Unfortunately, when I joined the Board there were some considerable governance issues still unresolved that were handed over from the previous year. Given my status as a developer and recent community member based in Europe, I had not had time to build relationships with many people in the community. As such, I also set it as one of my goals to represent those people that I knew, who had been under-represented in the previous board.

    Goals into actions

    Let it be said that being part of a governance body for a non-profit in a foreign country with a different history and legal background can be challenging. A non-trivial amount of my time was dedicated to understanding and getting used to such a body: its internal rules, ways of working, and expectations. I wish the governance was closer to home (has anybody in the American continent worked in organizations with a General Assembly?), but that would be a discussion for another time. Now I will stick to discussing actions related to the goals above.

    Becoming part of the Finance Committee

    I presented my candidature to become the Treasurer. The Board did a vote, and instead decided to appoint me as Vice-treasurer. As such, I had no powers other than the right to attend the Finance Committee meetings, which was enough for me to work on my other self-stated goals. Unfortunately, despite multiple requests from different Board members, and assurances in different meetings, I was never called to a Finance Committee meeting. With the resignation of Michael Downey as a director around the turn of the year, I was then offered the Treasurer position. I declined the offer, since at that point I was not willing to volunteer more of my time for the Foundation.

    Economic-related communication and funds

    During the first months of my term I got heavily involved in this aspect. In addition to taking part in discussions and decision-making, I was one of the main authors of a general economic update and one focused specifically on the budget. It of course took input and effort from multiple directors, but I am confident that my contribution was considerable. Happily, those updates did have an impact on how people saw the Foundation and the project, with some hints that such work could help regain good will and trust from further people in the membership and the wider FOSS community. Those two were concentrated in the first half of my term, and although there is usually more economic news around budget planning, I did not contribute further work on this in the second half of my term.

    Regarding the usage of funds, I was quite involved in the drafting and approval of the budget for 2024/2025, even if Richard (then Interim Executive Director) took the lion’s share of the work. That budget managed to retain spending essential for fulfilling the goals (conferences, infra), while at the same time making sure available restricted funds were put to use (parental controls and well-being). Unfortunately, some of the fears I expressed in my candidature were right, and expenses had to be cut considerably, leading to us not being able to support all the previous staffers. In general, those executive decisions were not made by me, but by the Executive Director. However, I am happy my input was taken into consideration, and we managed to have a budget that brought us to 2025.

    Governance and community engagement

    Although the governance issues were carried over from a previous board, I tried to apply the best of my judgement in every situation. I did communicate with the community when the situation allowed, and sat in countless meetings to both mediate and express my opinion. At some point, when I was not sure I was doing a good job, I offered my resignation to a considerable number of people I knew had voted for me, and all of them considered it would be best for me to stay.

    Evaluation

    I put a lot of effort and time into moving forward the things I promised during the first half of my term. Unfortunately, I was unable to continue with such engagement through the second half. Part of that had to do with bad management of my time and energy, where I surely burned too much too early. Another part had to do with the realization that many of the problems I was hoping to tackle at the Foundation are symptoms of deeper structural issues at the project level. For example, no team or committee has official and effective communication channels. So why would the people running the project change their behavior just because they are part of the Board? I believe I do not have the leverage, skills, or experience to tackle those issues at their core, which strongly reduced my motivation. I am happy, though, that senior members in the community are really trying.

    I am especially sorry that I did not take the Treasurer position when I was offered it. This was less than I expected from myself. I offered my resignation to community members that I knew had voted for me, but nobody took it. I also presented my resignation to another Board member who took most of the burden for my decision of not taking the role. It was also politely declined. In hindsight I maybe should have resigned anyway.

    On the positive side, I learned a lot of lessons just by being on the Board, especially related to governance. I take most of those lessons with me to postmarketOS, and I hope they serve as a useful inter-FOSS lesson-sharing experiment. They have surely already helped!

    Overall, I am a bit disappointed with myself, as I would have expected to keep doing a lot of the things I did at the beginning for the whole of my term. At the same time, considering the complete year, I think my overall level of work and engagement was probably about average compared to the rest of the Board. In general, the whole experience was considerably draining, especially emotionally, as we were going through uncommon circumstances, e.g. governance issues, staff reduction, etc.

    Moving forward

    I do not plan to come back to do governance for the GNOME project for a while. I am very happy with what we are building in postmarketOS: the community and the full-stack and upstreaming commitment we bring with us. I hope the GNOME project and Foundation can transition to more open and structured models of governance, where tracking decisions, communicating regularly, and taking responsibility for good and bad decisions becomes the norm. I hope this post motivates other directors to write their own retrospectives and hold themselves accountable in public in the future.

    It is also encouraging to see that we have a new Executive Director, Steven, who rapidly understood many of the issues and needs of the project and organization in a similar way to how I do. I wish him the best of successes, and thank him for taking over such an endeavor.

    Thanks also to every Board member: Cassidy, Erik, Federico, Julian, Karen, Michael, Philip, Robert, as well as the staff I interacted with: Rosanna, Kristi, Bart, and Richard. I’ve seen the passion and effort, and it is truly remarkable. With this, I also want to give a special mention to Allan Day. He’s worked non-stop for months to decisively contribute to the Foundation staying afloat. Thank you very much for your work, it is unmatched.

    If you have any feedback you want to share, thoughts about my term, improvements to suggest, or just some thanks, feel free to comment or email me. I am always happy to hear other people’s thoughts, and there’s no improvement without feedback!


      Steven Deobald: 2025-08-22 Foundation Update

      news.movim.eu / PlanetGnome • 23 August

    Bureaucracy

    You may have noted that there wasn’t a Foundation Update last week. This is because almost everything that has been happening at the Foundation lately falls under the banner of “bureaucracy” and/or “administration” and there isn’t much to say about either of those topics publicly.

    Also, last Friday was GNOME’s 28th birthday and no one wants to hear about paper-shuffling on their birthday. 🙂

    I’m sorry to say there isn’t much to add this week, either. But I hope you did all take a moment last week to reflect on the nearly-three-decades-of-work that’s gone into GNOME.

    Happy Belated Birthday, Computer. 🎂


      This Week in GNOME: #213 Fixed Rules

      news.movim.eu / PlanetGnome • 22 August • 3 minutes

    Update on what happened across the GNOME project in the week from August 15 to August 22.

    GNOME Core Apps and Libraries

    Glycin

    Sandboxed and extendable image loading and editing.

    Sophie (she/her) announces

    Glycin 2.0.beta.3 has been released. Among the important changes are fixes for thumbnailers not working in certain configurations, dramatically improved loading speed for JPEG XL, fixed sandbox rules that broke image loading on some systems, and fixed editing for some JPEG images saved in progressive mode.

    GNOME Circle Apps and Libraries

    Déjà Dup Backups

    A simple backup tool.

    Michael Terry announces

    Déjà Dup 49.beta was released! It just fixes a few small bugs and uses the new libadwaita shortcuts dialog.

    But if you haven’t tried the 49.x branch yet, it has a big UI refactor and adds file-manager-based restore for Restic backups.

    Read more details and install instructions in the previous 49.alpha announcement. Thanks for any testing you can do before this reaches the masses!

    Third Party Projects

    Mir Sobhan announces

    We forked the TWIG website and forged it into a “good first issue” tracker. It catches all GNOME-related projects on GitHub and GNOME GitLab to show issues labeled “good first issue” or “Newcomers”. This can help newcomers, including myself, find places to contribute.

    Website: https://ggfi.mirsobhan.ir
    Repo: https://gitlab.gnome.org/misano/goodfirstissue

    Džeremi says

    Chronograph gets a BIG new 4.0 update!

    What is Chronograph?

    Chronograph is an app for syncing lyrics, making them display like karaoke in supported players. It comes with a beautiful GTK4 + LibAdwaita interface and includes a built-in metadata editor, so you can manage your music library along with syncing lyrics. Default LRC files can be published to the large lyrics database LRClib.net, which is widely used by many open-source players to fetch lyrics. Until now, Chronograph supported only line-by-line lyrics, which was enough for most cases since standard LRC is the most common format. But times change…

    Word-by-Word support!

    Starting August 24th, Chronograph will gain support for Word-by-Word syncing. This feature uses the eLRC format (also known as LRC A2 or Enhanced LRC). In eLRC, each word has its own timestamp, allowing players that support it to animate lyrics word-by-word, giving you a true karaoke experience. And this is just the beginning: future updates will also bring support for TTML (Timed Text Markup Language).
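
    As a rough illustration (timestamps invented for this example): a standard LRC line carries a single timestamp for the whole line, while an eLRC line additionally prefixes each word with its own <mm:ss.xx> timestamp:

    [00:12.00]A standard LRC line is timed as a whole
    [00:15.00]<00:15.00>An <00:15.30>eLRC <00:15.80>line <00:16.20>times <00:16.70>every <00:17.10>word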

    Final notes

    I hope you’ll enjoy using the latest version of Chronograph, and together we can spread awareness of eLRC to the wider community. Sync lyrics of your loved songs! ♥️

    (Screenshots: the Chronograph library view, word-by-word editing, and word-by-word syncing.)

    Nathan Perlman announces

    Rewaita — Give Adwaita some flavour

    Hi there, a few weeks ago I released Rewaita, a spiritual successor to Gradience. With it, you can recolour GTK4/Adwaita apps to popular colour schemes. That’s where the name comes from ~ Re(colour Ad)waita.

    As of v1.0.4, released this week, you can create your own custom colour palettes if the ones we provide don’t suit you, and you can also change the window controls to be either coloured or MacOS-styled.

    You can find it on Flathub, but also in the AUR and NIXPKGS (the Nix package is still under review).

    Rewaita is also going through rapid development, so any help would be appreciated, or just leave us a star :). In particular, GTK3 and Cinnamon support are next up on the chopping block.

    Miscellaneous

    JumpLink says

    ts-for-gir - TypeScript bindings for GObject Introspection

    This week we’ve released a major improvement for GObject interface implementation: Virtual Interface Generation.

    Instead of having to implement all methods of a GObject interface, developers can now implement only the virtual methods (vfunc_*). This matches the actual GObject-Introspection pattern and makes interface implementation much cleaner.

    Before (implement all methods):

    class CustomPaintable implements Gdk.Paintable {
      // Implement all methods manually
      get_current_image(): Gdk.Paintable { ... }
      get_flags(): Gdk.PaintableFlags { ... }
      get_intrinsic_width(): number { ... }
      // ... and many more
    }

    After (only virtual methods):

    class CustomPaintable implements Gdk.Paintable.Interface {
      // Declare for TypeScript compatibility
      declare get_current_image: Gdk.Paintable["get_current_image"];
      declare get_flags: Gdk.Paintable["get_flags"];
      
      // Only implement virtual methods
      vfunc_get_current_image(): Gdk.Paintable { ... }
      vfunc_get_flags(): Gdk.PaintableFlags { ... }
    }

    We’ve created a comprehensive example: https://github.com/gjsify/ts-for-gir/tree/main/examples/virtual-interface-test

    This shows both Gio.ListModel and Gdk.Paintable implementations using the new pattern.

    Release: v4.0.0-beta.35 and v4.0.0-beta.36

    Note: Last week we also released v4.0.0-beta.34 which introduced Advanced Variant Types by default, completing the gi.ts integration with enhanced TypeScript support for GLib.Variant.deepUnpack() and better type inference for GObject patterns.

    That’s all for this week!

    See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!


      Sebastian Wick: Testing with Portals

      news.movim.eu / PlanetGnome • 21 August • 1 minute

    At the Linux App Summit (LAS) in Albania three months ago, I gave a talk about testing in the xdg-desktop-portal project. There is a recording of the presentation, and the slides are available as well.

    To give a quick summary of the work I did:

    • Revamped the CI
    • Reworked and improved the pytest based integration test harness
    • Added integration tests for new portals
    • Ported over all the existing GLib/C based integration tests
    • Added ASAN support for detecting memory leaks in the tests
    • Made tests pretend to be either a host, Flatpak or Snap app

    The hope I had was that this would result in:

    • Fewer regressions
    • Tests for new features and bug fixes
    • More confidence in refactoring
    • More activity in the project

    While it’s hard to get definitive data on those points, at least some of it seems to have become reality. I have seen an increase in activity (there are other factors to this for sure), and a lot of PRs already come with tests without me even having to ask for it. Canonical is involved again, taking care of the Snap side of things. So far it seems like we didn’t introduce any new regressions, but this usually only shows up after a new release. The experience of refactoring portals also became a lot better because there is a baseline level of confidence when the tests pass, as well as the possibility to easily bisect issues. Overall I’m already quite happy with the results.

    Two weeks ago, Georges merged the last piece of what I talked about in the LAS presentation, so we’re finally testing the code paths that are specific to host, Flatpak and Snap applications! I also continued a bit with improving the tests, and now they can be run with Valgrind, which is super slow (which is why we’re not doing it in the CI) but tends to find memory leaks that ASAN does not. With the existing tests, it found 9 small memory leaks.

    If you want to improve the Flatpak story, come and contribute to xdg-desktop-portal. It’s now easier than ever!


      Michael Meeks: 2025-08-20 Wednesday

      news.movim.eu / PlanetGnome • 20 August

    • Up extremely early, worked for a few hours, out for a run with J.; painful left hip.
    • Merger/finance call, sync with Dave, Gokay & Szymon, Lunch.
    • Published the next strip: on Fixed Price projects
    • Productivity All Hands meeting.

      Peter Hutterer: Why is my device a touchpad and a mouse and a keyboard?

      news.movim.eu / PlanetGnome • 20 August • 3 minutes

    If you have spent any time around HID devices under Linux (for example if you are an avid mouse, touchpad or keyboard user) then you may have noticed that your single physical device actually shows up as multiple device nodes (for free! and nothing happens for free these days!). If you haven't noticed this, run libinput record and you may be part of the lucky roughly 50% who get free extra event nodes.

    The pattern is always the same. Assuming you have a device named FooBar ExceptionalDog 2000 AI [1], what you will see are multiple devices:

    /dev/input/event0: FooBar ExceptionalDog 2000 AI Mouse
    /dev/input/event1: FooBar ExceptionalDog 2000 AI Keyboard
    /dev/input/event2: FooBar ExceptionalDog 2000 AI Consumer Control 
    
    The Mouse/Keyboard/Consumer Control/... suffixes are a quirk of the kernel's HID implementation which splits out a device based on the Application Collection. [2]

    A HID report descriptor may use collections to group things together. A "Physical Collection" indicates "these things are (on) the same physical thingy". A "Logical Collection" indicates "these things belong together". And you can of course nest these things near-indefinitely so e.g. a logical collection inside a physical collection is a common thing.

    An "Application Collection" is a high-level abstractions to group something together so it can be detected by software. The "something" is defined by the HID usage for this collection. For example, you'll never guess what this device might be based on the hid-recorder output:

    # 0x05, 0x01,                    // Usage Page (Generic Desktop)              0
    # 0x09, 0x06,                    // Usage (Keyboard)                          2
    # 0xa1, 0x01,                    // Collection (Application)                  4
    ...
    # 0xc0,                          // End Collection                            74
    
    Yep, it's a keyboard. Pop the champagne[3] and hooray, you deserve it.

    The kernel, ever eager to help, takes top-level application collections (i.e. those not inside another collection) and applies a usage-specific suffix to the device. For the above Generic Desktop/Keyboard usage you get "Keyboard"; the other ones currently supported are "Keypad" and "Mouse", as well as the slightly more niche "System Control", "Consumer Control", "Wireless Radio Control" and "System Multi Axis". In the Digitizer usage page we have "Stylus", "Pen", "Touchscreen" and "Touchpad". Any other Application Collection is currently unsuffixed (though see [2] again, e.g. the hid-uclogic driver uses "Touch Strip" and other suffixes).

    This suffix is necessary because the kernel also splits out the data sent within each collection as separate evdev event node. Since HID is (mostly) hidden from userspace this makes it much easier for userspace to identify different devices because you can look at a event node and say "well, it has buttons and x/y, so must be a mouse" (this is exactly what udev does when applying the various ID_INPUT properties, with varying levels of success).

    The side effect of this however is that your device may show up as multiple devices and most of those extra devices will never send events . Sometimes that is due to the device supporting multiple modes (e.g. a touchpad may by default emulate a mouse for backwards compatibility but once the kernel toggles it to touchpad mode the mouse feature is mute). Sometimes it's just laziness when vendors re-use the same firmware and leave unused bits in place.

    It's largely a cosmetic problem only, e.g. libinput treats every event node as individual device and if there is a device that never sends events it won't affect the other event nodes. It can cause user confusion though: "why does my laptop say there's a mouse?" and in some cases it can cause functional degradation - the two I can immediately recall are udev detecting the mouse node of a touchpad as pointing stick (because i2c mice aren't a thing), hence the pointing stick configuration may show up in unexpected places. And fake mouse devices prevent features like "disable touchpad if a mouse is plugged in" from working correctly. At the moment we don't have a good solution for detecting these fake devices - short of shipping giant databases with product-specific entries we cannot easily detect which device is fake. After all, a Keyboard node on a gaming mouse may only send events if the user configured the firmware to send keyboard events, and the same is true for a Mouse node on a gaming keyboard.

    So for now, the only solution to those is a per-user udev rule to ignore a device. If we ever figure out a better fix, expect to find a gloating blog post in this very space.
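
    Such a rule can, for example, match the offending event node by device name and set libinput's LIBINPUT_IGNORE_DEVICE property; a minimal sketch, with the file path and device name as placeholders:

    # /etc/udev/rules.d/99-ignore-fake-mouse.rules (example path and device name)
    ACTION=="add|change", KERNEL=="event*", ATTRS{name}=="FooBar ExceptionalDog 2000 AI Mouse", ENV{LIBINPUT_IGNORE_DEVICE}="1"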

    [1] input device naming is typically bonkers, so I'm just sticking with precedence here
    [2] if there's a custom kernel driver this may not apply and there are quirks to change this so this isn't true for all devices
    [3] or sparkling wine, let's not be regionist here