
      Alberto Ruiz: Booting with Rust: Chapter 2

      news.movim.eu / PlanetGnome · Yesterday - 15:06 · 5 minutes

    In a previous post I gave the context for my pet project ieee1275-rs, a framework to build bootable ELF payloads on Open Firmware (IEEE 1275). OF is a standard developed by Sun for SPARC, aimed at providing a standardized firmware interface that was rich and nice to work with; it was later adopted by IBM and Apple for POWER, and even by the OLPC XO.

    The crate is intended to provide a set of facilities similar to uefi-rs, that is, an abstraction over the entry point and the interfaces. I started the ieee1275-rs crate specifically for IBM’s POWER platforms, although if people want to add support for SPARC, Apple G3/G4/G5 machines or the OLPC XO, I would welcome contributions.

    There are several ways the firmware can take a payload to boot. In Fedora we use a PReP partition: a ~4 MB partition labeled with type 41h in the MBR, or with the GUID 9E1A2D38-C612-4316-AA26-8B49521E5A8B in the GPT table. The ELF is written as raw data into the partition.

    Another alternative is a so-called CHRP script in “ppc/bootinfo.txt”; this script can load an ELF located in the same filesystem, and it is what the bootable CD/DVD installer uses. I have yet to test whether this is something that can be used across Open Firmware implementations.

    To avoid compatibility issues, the ELF payload has to be compiled as a 32-bit big-endian binary, as the firmware interface will often assume that endianness and address size.

    The entry point

    When I approached this problem I already had some experience writing UEFI binaries; the entry point in UEFI looks like this:

    #![no_main]
    #![no_std]
    use uefi::prelude::*;
    
    #[entry]
    fn main(_image_handle: Handle, mut system_table: SystemTable<Boot>) -> Status {
      uefi::helpers::init(&mut system_table).unwrap();
      system_table.boot_services().stall(10_000_000);
      Status::SUCCESS
    }

    Basically you get a pointer to a table of functions, and that’s how you ask the firmware to perform system functions for you. I thought that maybe Open Firmware did something similar, so I had a look at how GRUB does this: it uses a PPC assembler snippet that jumps to grub_ieee1275_entry_fn(), and yaboot does a similar thing. I was already grumbling about having to figure out how to embed an asm binary into my Rust project. But it turns out this snippet conforms to the PPC function calling convention, and that it mostly takes care of zeroing the BSS segment, which the ELF Rust outputs does not seem to generate (although I am not sure this means there isn’t one at runtime, I need to investigate this further). So I decided to just create a small ppc32be ELF binary with the start function at the top of the .text section, at address 0x10000.
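    If the final ELF did end up with a BSS segment, zeroing it from Rust itself would be straightforward. Here is a minimal, hedged sketch assuming the linker script exports __bss_start and __bss_end symbols (those names are my assumption, not something defined in the post’s repository):

    // Hedged sketch: clear the BSS region delimited by assumed linker symbols
    // before touching any static data.
    unsafe fn zero_bss() {
        extern "C" {
            static mut __bss_start: u8; // assumed linker-script symbol
            static mut __bss_end: u8;   // assumed linker-script symbol
        }
        let start = core::ptr::addr_of_mut!(__bss_start);
        let end = core::ptr::addr_of_mut!(__bss_end);
        let len = end.offset_from(start) as usize;
        core::ptr::write_bytes(start, 0, len);
    }

    Calling something like this at the very top of _start would cover the case where a BSS section does appear in the final binary.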

    I have created a repository with the most basic setup that you can run. With some cargo configuration to get the right linking options, and a script to create the disk image with the ELF payload on the PReP partition and run qemu, we can get the following source code running under Open Firmware:

    #![no_std]
    #![no_main]
    
    use core::{panic::PanicInfo, ffi::c_void};
    
    #[panic_handler]
    fn _handler (_info: &PanicInfo) -> ! {
        loop {}
    }
    
    #[no_mangle]
    #[link_section = ".text"]
    extern "C" fn _start(_r3: usize, _r4: usize, _entry: extern "C" fn(*mut c_void) -> usize) -> isize {
        loop {}
    }

    Provided we have already created the disk image (check the run_qemu.sh script for more details), we can run our code by executing the following commands:

    $ cargo +nightly build --release --target powerpc-unknown-linux-gnu
    $ dd if=target/powerpc-unknown-linux-gnu/release/openfirmware-basic-entry of=disk.img bs=512 seek=2048 conv=notrunc
    $ qemu-system-ppc64 -M pseries -m 512 --drive file=disk.img
    [...]
      Welcome to Open Firmware
    
      Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
      This program and the accompanying materials are made available
      under the terms of the BSD License available at
      http://www.opensource.org/licenses/bsd-license.php
    
    
    Trying to load:  from: /vdevice/v-scsi@71000003/disk@8000000000000000 ...   Successfully loaded

    Ta da! The wonders of getting your firmware to run an infinite loop. Here’s where the fun begins.

    Doing something actually useful

    Now, to complete the hello world, we need to do something useful. Remember our _entry argument in the _start() function? That’s our gateway to the firmware functionality. Let’s look at how the IEEE 1275 spec tells us to work with it.

    This function is a universal entry point: it takes as its argument a structure that tells the firmware which service to run, and depending on the service it expects some extra arguments attached. Let’s look at how we can at least print “Hello World!” on the firmware console.

    The basic structure looks like this:

    #[repr(C)]
    pub struct Args {
      pub service: *const u8, // null terminated ascii string representing the name of the service call
      pub nargs: usize,       // number of arguments
      pub nret: usize,        // number of return values
    }

    This is just the header of every possible call; nargs and nret determine the size of the entire argument payload in memory. Let’s look at an example that just exits the program:

    #[no_mangle]
    #[link_section = ".text"]
    extern "C" fn _start(_r3: usize, _r4: usize, entry: extern "C" fn(*mut Args) -> usize) -> isize {
        let mut args = Args {
            service: "exit\0".as_ptr(),
            nargs: 0,
            nret: 0
        };
    
        entry (&mut args as *mut Args);
        0 // The program will exit in the line before, we return 0 to satisfy the compiler
    }

    When we run it in qemu we get the following output:

    Trying to load:  from: /vdevice/v-scsi@71000003/disk@8000000000000000 ...   Successfully loaded
    W3411: Client application returned.

    Aha! We successfully called firmware code!
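    For services that take arguments and return values, the cells simply follow the header. As an illustration, here is a hedged sketch of my own (not necessarily how the ieee1275-rs crate will model it) of the spec’s finddevice service, which takes one argument and returns one value:

    use core::ffi::c_void;

    // Hedged sketch: argument and return cells laid out right after the header
    // for the "finddevice" client interface service (1 argument, 1 return value).
    #[repr(C)]
    struct FindDeviceArgs {
        service: *const u8,          // "finddevice\0"
        nargs: usize,                // 1
        nret: usize,                 // 1
        device_specifier: *const u8, // argument cell: the device path to look up
        phandle: usize,              // return cell, filled in by the firmware
    }

    fn find_chosen(entry: extern "C" fn(*mut c_void) -> usize) -> usize {
        let mut args = FindDeviceArgs {
            service: "finddevice\0".as_ptr(),
            nargs: 1,
            nret: 1,
            device_specifier: "/chosen\0".as_ptr(),
            phandle: 0,
        };
        entry(&mut args as *mut FindDeviceArgs as *mut c_void);
        args.phandle // handle of the /chosen node, or -1 (as usize) if not found
    }

    Getting a handle on /chosen is also the usual first step towards console output in IEEE 1275, since its stdout property tells you where a write call should go.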

    To be continued…

    To summarize, we’ve learned that we don’t really need assembly code to produce an entry point for our OF bootloader (though we do need to zero our BSS segment if we have one), we’ve learned how to build a valid OF ELF for the PPC architecture, and we’ve learned how to call a basic firmware service.

    In a follow-up post I intend to show hello-world text output and how the ieee1275-rs crate helps to abstract away most of the grunt work of accessing common firmware services. Stay tuned!


      Alberto Ruiz: Booting with Rust: Chapter 1

      news.movim.eu / PlanetGnome · 2 days ago - 14:10 · 3 minutes

    I have been doing random coding experiments in my spare time that I never got around to publicizing much outside of my inner circles. I thought I would dust off my blog a bit to talk about what I did, in case it is useful for others.

    For some background, I used to manage the bootloader team at Red Hat a few years ago, alongside Peter Jones and Javier Martinez. I learned a great deal from them, fell in love with this particular problem space, and have come to enjoy tinkering with experiments in it.

    There are many open challenges in this space that we could tackle to get a more robust boot path across Linux distros, from boot attestation for the initramfs and cmdline, to A/B rollbacks, to TPM LUKS decryption (à la BitLocker)…

    One that particularly interests me is unifying the firmware-kernel boot interface across implementations in the hypothetical absence of GRUB.

    Context: the issue with GRUB

    The priority of the team was to support the RHEL boot path on all the architectures we supported, namely x86_64 (legacy BIOS & UEFI), aarch64 (UEFI), s390x and ppc64le (Open Power and PowerVM).

    These are extremely heterogeneous firmware interfaces, some are on their way to extinction (legacy PC BIOS) and some will remain weird for a while.

    GRUB (GRand Unified Bootloader), as its name states, intends to be a unified bootloader for all platforms. GRUB has to support a superset of firmware interfaces, and some of those, like legacy BIOS, do not offer much beyond rudimentary disk or network access and basic graphics handling.

    To load a kernel and its initramfs, this means that GRUB has to implement basic drivers for storage, networking, TCP/IP, filesystems, volume management… every time there is a new device storage technology, we need to implement a driver twice: once in the kernel and once in GRUB itself. GRUB is, for all intents and purposes, an entire operating system that has to be maintained.

    The maintenance burden is actually quite big, and recently GRUB has been a target for the InfoSec community after the Boot Hole vulnerability. It is implemented in C, it is an extremely complex code base, and it is not as well staffed as it should be. It implements its own scripting language (parser et al.) and it is clear there are quite a few CVEs lurking in there.

    So we are basically maintaining code we already have to write, test and maintain in the Linux kernel, duplicated in a separate OS whose main job (in the context of RHEL, CentOS and Fedora) is to boot a Linux kernel.

    This realization led to the initiative that is these days taking shape in the discussions around nmbl (no more boot loader). You can read more about that in that blog post; I am not actively participating in that effort, but I encourage you to read about it. I want to focus on something else and very specific, which is what you do before you load the nmbl kernel.

    Booting from disk

    I want to focus on the code that goes from the firmware interface to loading the kernel (nmbl or otherwise) from disk. We want some sort of A/B boot protocol that is somewhat normalized across the platforms we support, and we need to pick the kernel from the disk.

    The systemd community has led some of the boot modernization initiatives, vocally supporting the adoption of UKI and signed pre-built initramfs images, developing the Boot Loader Spec, and other efforts.

    At some point I heard Lennart making the point that we should standardize on using the EFI System Partition as /boot to place the kernel, as most firmware implementations know how to talk to a FAT partition.

    This proposal caught my attention, and I have been pondering whether we could have a relatively small codebase written in a safe language (you know which) that could support a well-defined protocol for A/B booting a kernel on legacy BIOS, S390 and Open Firmware (UEFI and Open Power already support BLS snippets, so we are covered there).

    My modest inroad into testing this hypothesis so far has been the development of ieee1275-rs, a Rust module to write programs for the Open Firmware interface. So far I have not been able to load a kernel by myself, but I still think the lessons learned and some of the code could be useful to others. Please note this is a personal experiment and not something Red Hat is officially working on.

    I will be writing more about the technical details of this crate in a follow-up blog post, where I will get into some of the details of writing Rust code for a firmware interface; this post is long enough already. Stay tuned.


      Andy Wingo: whippet progress update: funding, features, future

      news.movim.eu / PlanetGnome · 3 days ago - 09:19 · 4 minutes

    Greets greets! Today, an update on recent progress in Whippet, including sponsorship, a new collector, and a new feature.

    the lob, the pitch

    But first, a reminder of what the haps: Whippet is a garbage collector library. The target audience is language run-time authors, particularly “small” run-times: wasm2c, Guile, OCaml, and so on; to a first approximation, the kinds of projects that currently use the Boehm-Demers-Weiser collector.

    The pitch is that if you use Whippet, you get a low-fuss small dependency to vendor into your source tree that offers you access to a choice of advanced garbage collectors: not just the conservative mark-sweep collector from BDW-GC, but also copying collectors, an Immix-derived collector, generational collectors, and so on. You can choose the GC that fits your problem domain, like Java people have done for many years. The Whippet API is designed to be a no-overhead abstraction that decouples your language run-time from the specific choice of GC.

    I co-maintain Guile and will be working on integrating Whippet in the next months, and have high hopes for success.

    bridgeroos!

    I’m delighted to share that Whippet was granted financial support from the European Union via the NGI zero core fund, administered by the Dutch non-profit, NLnet foundation. See the NLnet project page for the overview.

    This funding allows me to devote time to Whippet to bring it from proof-of-concept to production. I’ll finish the missing features, spend some time adding tracing support, measuring performance, and sanding off any rough edges, then work on integrating Whippet into Guile.

    This bloggery is a first update of the progress of the funded NLnet project.

    a new collector!

    I landed a new collector a couple weeks ago, a parallel copying collector (PCC). It’s like a semi-space collector, in that it always evacuates objects (except large objects, which are managed in their own space). However, instead of having a single global bump-pointer allocation region, it breaks the heap into 64-kB blocks. In this way it supports multiple mutator threads: mutators do local bump-pointer allocation into their own block, and when their block is full, they fetch another from the global store.

    The block size is 64 kB, but really it’s 128 kB, because each block has two halves: the active region and the copy reserve. It’s a copying collector, after all. Dividing each block in two allows me to easily grow and shrink the heap while ensuring there is always enough reserve space.
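    As a rough illustration of the scheme just described, here is a hedged sketch in Rust (Whippet itself is written in C, and its real block layout and metadata certainly differ):

    // Illustrative sketch only: each block pairs an active half, used for
    // bump-pointer allocation, with a copy reserve half.
    const HALF_SIZE: usize = 64 * 1024;

    struct Block {
        halves: [[u8; HALF_SIZE]; 2],
        active: usize, // index (0 or 1) of the half currently used for allocation
        alloc: usize,  // bump pointer, as an offset into the active half
    }

    impl Block {
        // Bump-pointer allocation into the active half. `None` means the block
        // is full and the mutator should fetch a fresh block from the global store.
        fn allocate(&mut self, bytes: usize) -> Option<*mut u8> {
            let offset = (self.alloc + 7) & !7; // assume 8-byte alignment
            if offset + bytes > HALF_SIZE {
                return None;
            }
            self.alloc = offset + bytes;
            Some(self.halves[self.active][offset..].as_mut_ptr())
        }

        // At collection time the halves swap roles: survivors are copied into
        // the former reserve, which becomes the new active half.
        fn flip(&mut self) {
            self.active ^= 1;
            self.alloc = 0;
        }
    }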

    Blocks are allocated in 64-MB aligned slabs, so you get 512 blocks in a slab. The first block in a slab is used by the collector itself, to keep metadata for the rest of the blocks, for example a chain pointer allowing blocks to be collected in lists, a saved allocation pointer for partially-filled blocks, whether the block is paged in or out, and so on.

    The PCC not only supports parallel mutators, it can also trace in parallel. This mechanism works somewhat like allocation, in which multiple trace workers compete to evacuate objects into their local allocation buffers; when an allocation buffer is full, the trace worker grabs another, just like mutators do.

    However, unlike the simple semi-space collector which uses a Cheney grey worklist, the PCC uses the fine-grained work-stealing parallel tracer originally developed for Whippet’s Immix-like collector. Each trace worker maintains a local queue of objects that need tracing, which currently has 1024 entries. If the local queue becomes full, the worker will publish 3/4 of those entries to the worker’s shared worklist. When a worker runs out of local work, it will first try to remove work from its own shared worklist, then will try to steal from other workers.
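    To make the publish-and-steal discipline concrete, here is a simplified, hedged sketch (in Rust and lock-based purely for illustration; Whippet’s actual tracer is written in C and does not work exactly like this):

    use std::collections::VecDeque;
    use std::sync::Mutex;

    const LOCAL_CAPACITY: usize = 1024; // size of the private queue, per the text

    struct TraceWorker<'a> {
        local: Vec<usize>,                  // private grey-object queue (object addresses)
        shared: &'a Mutex<VecDeque<usize>>, // this worker's shareable worklist
    }

    impl<'a> TraceWorker<'a> {
        fn push(&mut self, obj: usize) {
            if self.local.len() == LOCAL_CAPACITY {
                // Local queue full: publish 3/4 of it so other workers can steal.
                let keep = LOCAL_CAPACITY / 4;
                let mut shared = self.shared.lock().unwrap();
                shared.extend(self.local.drain(keep..));
            }
            self.local.push(obj);
        }

        fn pop(&mut self, others: &[&Mutex<VecDeque<usize>>]) -> Option<usize> {
            if let Some(obj) = self.local.pop() {
                return Some(obj);
            }
            // Out of local work: first reclaim from our own shared worklist...
            if let Some(obj) = self.shared.lock().unwrap().pop_back() {
                return Some(obj);
            }
            // ...then try to steal from the other workers.
            others.iter().find_map(|other| other.lock().unwrap().pop_front())
        }
    }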

    Of course, because threads compete to evacuate objects, we have to use atomic compare-and-swap instead of simple forwarding pointer updates; if you only have one mutator thread and are OK with just one tracing thread, you can avoid the ~30% performance penalty that atomic operations impose. The PCC generally starts to win over a semi-space collector when it can trace with 2 threads, and gets better with each thread you add.
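    Here is a hedged sketch of that compare-and-swap claim on an object’s header word (illustrative only; the tag bit and memory orderings are assumptions of mine, not Whippet’s actual C code):

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Assumed convention for this sketch: a set low bit in the header word
    // means the rest of the word is a forwarding address.
    const FORWARDED_TAG: usize = 0b1;

    enum Forward {
        Won,         // we installed our address; we are responsible for the copy
        Lost(usize), // another worker got there first; use its copy's address
    }

    fn claim_for_evacuation(header: &AtomicUsize, new_addr: usize) -> Forward {
        let old = header.load(Ordering::Acquire);
        if old & FORWARDED_TAG != 0 {
            return Forward::Lost(old & !FORWARDED_TAG);
        }
        match header.compare_exchange(
            old,
            new_addr | FORWARDED_TAG,
            Ordering::AcqRel,
            Ordering::Acquire,
        ) {
            Ok(_) => Forward::Won,
            Err(seen) => Forward::Lost(seen & !FORWARDED_TAG),
        }
    }

    With a single mutator and a single tracing thread, the same claim can be a plain load and store, which is where the roughly 30% saving mentioned above comes from.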

    I sometimes wonder whether the worklist should contain grey edges or grey objects. MMTk seems to do the former, and bundles edges into work packets, which are the unit of work sharing. I don’t know yet what is best and look forward to experimenting once I have better benchmarks.

    Anyway, maintaining an external worklist is cheating in a way: unlike the Cheney worklist, this memory is not accounted for as part of the heap size. If you are targeting a microcontroller or something, you probably need to choose a different kind of collector. Fortunately, Whippet enables this kind of choice, as it contains a number of collector implementations.

    What about benchmarks? Well, I’ll be doing those soon in a more rigorous way. For now I will just say that it seems to behave as expected and I am satisfied; it is useful to have a performance oracle against which to compare other collectors.

    finalizers!

    This week I landed support for finalizers!

    Finalizers work in all the collectors: semi, pcc, whippet, and the BDW collector that is a shim to use BDW-GC behind the Whippet API. They have a well-defined relationship with ephemerons and are multi-priority, allowing embedders to build guardians or phantom references on top.

    In the future I should do some more work to make finalizers support generations, if the collector is generational, allowing a minor collection to avoid visiting finalizers for old objects. But this is a straightforward extension that will happen at some point.

    future!

    And that’s the end of this update. Next up, I am finally going to tackle heap resizing, using the membalancer approach. Then basic Windows and Mac support, then I move on to the tracing and performance measurement phase. Onwards and upwards!

      wingolog.org/archives/2024/07/24/whippet-progress-update-funding-features-future


      Pedro Sader Azevedo: Accessibility Hackathon

      news.movim.eu / PlanetGnome · 4 days ago - 14:00 · 6 minutes

    When you stop and think about it, user interfaces are almost entirely designed around sight: they display graphical elements that are mostly interacted with by pointing and clicking (or touching). However, as we know, not everyone has “perfect” vision: some are color blind, some are short sighted, some are long sighted, etc. In many cases, these people can still interact with the same user interfaces as everyone else, but those with more severe visual impairments need a different method to use their computers.

    That’s when the Screen Reader comes in! The Screen Reader is a piece of software that adapts computers to the needs of users with low vision or blindness, by reading descriptions of user interfaces out loud and facilitating keyboard navigation.

    Today, we will explore the web with Orca, the Screen Reader of the Linux Desktop! After that, we will contribute towards improving the experience of Orca users.

    Mind you that Screen Readers have tons of advanced features to empower users with as much efficiency as possible. Because of that, it can be challenging to use this kind of software if you are not used to it. I invite you to embrace this challenge as an opportunity to experience the desktop from a different perspective, and to appreciate the way other people use their computers.

    Without further ado, let’s get going!

    Part I - Exploring the Screen Reader

    Enabling and Disabling Orca

    You can enable and disable Orca by pressing Super + Alt + s . This can also be configured via GNOME Settings, under “Accessibility” > “Seeing” > “Screen Reader”.

    “Seeing” panel within GNOME Settings

    Turn up the volume and make sure you hear a robotic voice saying “Screen Reader on” and “Screen Reader off”. Then, open the Firefox web browser and check if Orca describes the currently open tab. If it is quiet, try closing and re-opening Firefox or disabling and re-enabling Orca.

    Controlling Orca

    Orca is controlled entirely from the keyboard and has dozens of shortcuts. These keyboard shortcuts are slightly different depending on whether you have a Number Pad (NumPad) or not. I have laid out the most important ones below, but an exhaustive list can be found in the quite excellent Orca documentation.

    Orca Modifier

    Key              Without a NumPad (Laptop)   With a NumPad (Desktop)
    Orca Modifier    Caps Lock                   NumPad Ins

    Important shortcuts

    Action                          Without a NumPad         With a NumPad
    Toggle screen reader            Super + Alt + s          Super + Alt + s
    Interrupt screen reading        Orca Modifier (press)    Orca Modifier (press)
    Toggle caret mode in Firefox    F7                       F7
    Where am I                      Orca Modifier + Enter    NumPad Enter
    Display page structure          Alt + Shift + h          Alt + Shift + h
    Display page buttons            Alt + Shift + b          Alt + Shift + b
    Read current line               Orca Modifier + i        NumPad 8
    Read current word               Orca Modifier + k        NumPad 5
    Read onward                     Orca Modifier + ;        NumPad +
    Next link                       k                        k
    Previous link                   Shift + k                Shift + k
    Next paragraph                  p                        p
    Previous paragraph              Shift + p                Shift + p
    Next button                     b                        b
    Previous button                 Shift + b                Shift + b

    Activity: Exploring the web with Orca

    Open the Firefox web browser and head to https://flathub.org . Find an app that looks interesting to you and explore its page using Orca. Try to navigate the entire page!

    Feel free to use the questions below as a guide to your exploration:

    1. Can you read the app description? Try to do it twice.
    2. Can you click the “Install” button? What happens when you do?
    3. Can you figure out if the app is verified or not? How so?
    4. Can you guess what the screenshots of the app are showing?
    5. Can you find the link to the source code of the app?

    Part II - Captioning Screenshots in Flathub

    Flatpak and Flathub

    Flatpak is a modern packaging technology, which aims to work on all Linux distributions. In turn, Flathub is an app store for software that was packaged using Flatpak. Flathub has been embraced as the main channel for publishing applications by many projects, most notably GNOME!

    App listings

    Each app published on Flathub has a page of its own, called an “app listing”. You will find lots of important information about an app on its listing, such as:

    • Name
    • Description
    • Author
    • License
    • Permissions
    • Screenshots

    All this information is sourced from a special file called a “MetaInfo File”, which is typically hosted in the app’s repository. Its filename usually ends with .metainfo.xml.in.in. You can click here to see an example of a MetaInfo File, and you can read more about this file format in Flathub’s MetaInfo Guidelines.

    Screenshot captions

    To promote high-quality listings, Flathub publishes its own Quality Guidelines. These guidelines encourage maintainers to add captions to their images, which, as we learned in the first activity, helps people who use Screen Readers understand the content of an app’s screenshots. To quote the Quality Guidelines:

    Every screenshot should have a caption briefly describing it. Captions should only be one sentence and not end with a full stop. Don’t start them with a number.

    To be more precise, good captions should clearly convey the functionality demonstrated in the screenshot, as well as give a general idea of the user interface on display. I have cataloged dozens of Flathub listings on this Google Sheet:

    GUADEC 2024 - Accessibility Hackathon

    There you will find listings with their Caption status set to “Exemplary”, which are great models of how captions should be written.

    Activity: Contributing with captions

    Now, we will improve the accessibility of the Linux Desktop (albeit slightly), by contributing to Flathub with screenshot captioning. To do that, follow the instructions laid out below:

    1. Open the Google Sheet on this link
    2. Choose an app with Caption status set “Absent”
    3. Assign “In-progress” to its Caption status
    4. Put your name(s) on its Group column
    5. Head to the Flathub listing link of the app
    6. Find the repository of the app (usually under “Browse the source code” in “Links”)

      Flathub listing links tab

    7. Log in or create an account on the platform that hosts the repository of the app (GitHub, GitLab, etc)
    8. Go back to the repository of the app, and find its MetaInfo File. It is typically placed in a data/ directory, at the root of the repository.
    9. On the MetaInfo File, look for the screenshot tags and find the files referenced in it
    10. Inside each screenshot element, there is an image element. For each of those, create a new line with caption tags. The caption goes in between them, like so:
    <screenshot>
     <image>https://raw.githubusercontent.com/SeaDve/Kooha/main/data/screenshots/screenshot2.png</image>
    + <caption>In-progress recording duration and button to stop recording</caption>
    </screenshot>
    
    11. When you are done writing your captions, proofread them and run them through a spellchecker. I haven’t found any great ones online, but Language Tool’s is passable.
    12. Commit your changes, following the commit message conventions of the project you are contributing to. If this convention is not documented anywhere, try to infer it by looking at the commit history.
    13. Open a Pull Request (PR) / Merge Request (MR) with your contribution, explaining what you did and why it is important. You can use my explanation as a model.
    14. Put a link to your PR/MR on the sheet.
    15. Wait for the maintainer’s feedback

    Part III: Farewell

    Thank you for attending this workshop. I hope you had a good time!

    Until the next one!

      pesader.dev/posts/accessibility-hackathon/


      Michael Hill: Renaming multiple files

      news.movim.eu / PlanetGnome · 5 days ago - 16:42 · 1 minute

    After World War II, Jack Kirby and his partner Joe Simon began their first foray into the genre of crime comics. (Kirby would return briefly in 1954 and 1971.) Beginning in 1947 and tailing into 1951, the stories appeared largely in Headline Comics and Justice Traps the Guilty for Crestwood’s Prize Publications. In 2011, Titan Books published a selection of these stories in hardcover, but sixty percent of the stories from this time period aren’t included in the book, and at least 20 stories have never been reprinted. Unlike Simon & Kirby’s much more prolific romance offerings, all of the comics in question are in the public domain and available on Digital Comic Museum and Comic Book Plus sites, thanks to multiple volunteers. I set about creating my own collection of scanned pages.

    When the downloaded .cbz files are extracted into a folder, the resulting image files have names like scan00.jpg, scan01.jpg, etc. In GNOME Files, selecting all the files for a given issue and pressing F2 brings up the batch rename dialogue.

    Selecting the Find and replace text option, I replace “scan” with the book title and issue number, with “p” as a separator for the page number.

    When all the stories have been added, the pages will be sorted by title and issue number. To enable sorting chronologically, a four- or six-digit prefix can be used to specify the cover date, in this case “4706” for June 1947. To add this I press F2 on the same selected files and use the Rename using a template option.

    Using the Jack Kirby Checklist as a guideline, and discarding (very few) stories for insufficient Kirby content, my project yielded a folder containing 633 pages including covers.


    Jack Kirby (1917-1994) was a pioneer in American comic books. He grew up on the Lower East Side of New York City (where he experienced real and aspiring gangsters first hand) and fought in northeastern France in 1944 as an infantryman in Patton’s Third Army. As a partner in Simon and Kirby and the manager of the S&K studio, Kirby defined the word cartoonist—he generally wrote, penciled, inked, and occasionally coloured his own stories.

    Jack Kirby photo property of the Kirby Estate. Used with permission.

      blogs.gnome.org/mdhill/2024/07/22/renaming-multiple-files/


      Philip Withnall: GUADEC 2024

      news.movim.eu / PlanetGnome · 5 days ago - 15:48 · 1 minute

    Goodness, it’s been a long time since I blogged. I’ve got a lot of updates to give, but perhaps let’s keep this post short, and dedicated to publishing the details of the two talks I gave at GUADEC this year, for posterity. I plan to do some more blog posts in the near future with more updates from the past year and more details of the features I’ve been working on.

    An update on parental controls and digital wellbeing for GNOME 47

    This was my first talk at GUADEC this year, serving as a little teaser of the work I’ve been doing recently (sponsored by Endless Network via the GNOME Foundation) to add features to parental controls and digital wellbeing.

    Thank you to Allan Day for fitting in work on the design for digital wellbeing this cycle, to Florian Müllner and Felipe Borges for reviewing all the code I’ve thrown at them, and to Dylan McCall for feedback on earlier versions of break reminders.

    Here’s the recording, the slides, slide notes and source.

    Somewhat merging gobject-introspection into GLib

    This was the second talk, and a companion to Emmanuele’s talk about changes in introspection in GNOME 46. It gives an overview of how we’ve merged half of gobject-introspection into GLib recently, and what this means for app authors (basically nothing), binding developers (something, on a timeline of your choosing) and distributions (some packaging rework, for the GLib 2.78 and 2.80 releases).

    Thank you to Emmanuele for spearheading this work in GLib and doing the gobject-introspection side of it, and the many GLib and gobject-introspection contributors for helping us stabilise this (in particular with build system improvements) after it landed.

    Here’s the recording, the slides, slide notes and source.

      tecnocode.co.uk/2024/07/22/guadec-2024/


      Andy Wingo: finalizers, guardians, phantom references, et cetera

      news.movim.eu / PlanetGnome · 5 days ago - 09:27 · 1 minute

    Consider guardians. Compared to finalizers, in which the cleanup procedures are run concurrently with the mutator, by the garbage collector, guardians allow the mutator to control concurrency. See Guile’s documentation for more notes. Java’s PhantomReference / ReferenceQueue seems to be similar in spirit, though the details differ.

    questions

    If we want guardians, how should we implement them in Whippet? How do they relate to ephemerons and finalizers?

    It would be a shame if guardians were primitive, as they are a relatively niche language feature. Guile has them, yes, but really what Guile has is bugs: because Guile implements guardians on top of BDW-GC’s finalizers (without topological ordering), all the other things that finalizers might do in Guile (e.g. closing file ports) might run at the same time as the objects protected by guardians. For the specific object being guarded, this isn’t so much of a problem, because when you put an object in the guardian, it arranges to prepend the guardian finalizer before any existing finalizer. But when a whole clique of objects becomes unreachable, objects referred to by the guarded object may be finalized. So the object you get back from the guardian might refer to, say, already-closed file ports.

    The guardians-are-primitive solution is to add a special guardian pass to the collector that will identify unreachable guarded objects. In this way, the transitive closure of all guarded objects will be already visited by the time finalizables are computed, protecting them from finalization. This would be sufficient, but is it necessary?

    answers?

    Thinking more abstractly, perhaps we can solve this issue and others with a set of finalizer priorities: a collector could have, say, 10 finalizer priorities, and run the finalizer fixpoint once per priority, in order. If no finalizer is registered at a given priority, there is no overhead. A given embedder (Guile, say) could use priority 0 for guardians, priority 1 for “normal” finalizers, and ignore the other priorities. I think such a facility could also support other constructs, including Java’s phantom references, weak references, and reference queues, especially when combined with ephemerons.
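    As a sketch of how that could look (illustrative Rust only; Whippet is written in C, its real finalizer machinery differs, and the Heap methods below are hypothetical stand-ins):

    // Hypothetical interface to whatever the collector does internally.
    trait Heap {
        // Find objects with priority-`p` finalizers that are no longer reachable,
        // queue their cleanup procedures, and return how many were found.
        fn resolve_finalizable(&mut self, priority: u8) -> usize;
        // Trace from the newly finalizable objects so everything they reference
        // stays alive until their finalizers have run.
        fn trace_from_finalizable(&mut self, priority: u8);
    }

    // Run the finalization fixpoint once per priority, in ascending order, so
    // that e.g. priority-0 guardian-style finalizers are resolved before the
    // ordinary priority-1 finalizers run. A priority with nothing registered
    // costs a single empty check.
    fn run_finalizer_passes<H: Heap>(heap: &mut H, num_priorities: u8) {
        for priority in 0..num_priorities {
            while heap.resolve_finalizable(priority) > 0 {
                heap.trace_from_finalizable(priority);
            }
        }
    }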

    Anyway, all this is a note for posterity. Are there other interesting mutator-exposed GC constructs that can’t be implemented with a combination of ephemerons and multi-priority finalizers? Do let me know!

      wingolog.org/archives/2024/07/22/finalizers-guardians-phantom-references-et-cetera


      Jiri Eischmann: Installing Nvidia Driver Will Be Easy Again in Fedora Workstation

      news.movim.eu / PlanetGnome · Friday, 19 July - 12:18 · 3 minutes

    The feature my team worked on, Nvidia Driver Installation with Secure Boot Support, was approved by FESCo earlier this week, and its upstream implementation was also approved several days ago, so it’s on its way to Fedora 41. I decided to write a blog post with more context and our motivations behind it.

    Installing the Nvidia drivers in Fedora Linux was not easy in the past. You had to add 3rd-party repos and then install specific packages. Not very intuitive for beginners. That’s why we teamed up with the RPMFusion community, which created a separate repository with the Nvidia driver that was enabled in Fedora Workstation if you agreed to enable third-party software sources. It also shipped AppStream metadata to integrate with app catalogs like GNOME Software. So all the user had to do was open GNOME Software, look up “nvidia”, and click to install it. Simple enough.

    It only had one problem: it didn’t work with Secure Boot. If Secure Boot was enabled, the next boot would simply fail, and the reason was not obvious to many users. It was not that significant when we came up with the solution, but it grew in significance as more and more machines had Secure Boot enabled.

    The Fedora Workstation Working Group decided earlier this year that it would be better to remove the driver from GNOME Software, given that the current solution doesn’t work with Secure Boot. The repository remained among the approved third-party sources, but the user experience of installing the Nvidia driver was significantly degraded.

    It’s really not something Fedora Workstation can afford, because the Nvidia driver is more popular than ever in the AI craze. So we started thinking about a solution that would meet the criteria and work with Secure Boot. The most seamless solution would be to sign the module with the Fedora key, but that’s pretty much out of the question: Fedora wouldn’t sign a piece of closed-source software from a third-party repo.

    So basically the only solution left is self-signing. It’s not ideal from the UX perspective. The user has to create a password for the machine owner key (MOK). The next time they boot, they have to go through several screens in the terminal user interface of mokutil and enter the password. At such an early stage of the boot process the charset is pretty much limited to ASCII, so you can’t let the user use any other characters when creating the password in GNOME Software. But I think Milan Crha (devel) and Jakub Steiner (UX design), who worked on it, handled the problems pretty well.

    The password is generated for the user.

    When I was submitting the change, I was not expecting a lot of resistance, and if any, then questions about why we were making proprietary software easily installable. But the biggest resistance was related to security: by enrolling a MOK, you allow any module signed by it in the future to be loaded as well.

    I understand the security implications, but you are already trusting any software from the package repositories you have enabled, which runs with root privileges anyway, and the only other alternative is to disable Secure Boot completely, which removes that security measure entirely. In addition, the solution with disabled Secure Boot has other problems: it is done differently on different computers, so there is no single set of step-by-step instructions we could give to all users, and they may not be able to disable Secure Boot at all.

    On the other hand, we didn’t do a good job of informing users about the security implications in the original implementation, and feedback from the community helped us come up with a better implementation with a reworked dialog. We’ve also added information about the security implications, and instructions on how to remove the MOK when it’s no longer needed, to the docs.

    The approved version of the dialog.

    So in Fedora Workstation 41, installing the Nvidia driver will be as easy as it can be within the constraints of Fedora policies. We still see this as a temporary solution for older Nvidia cards and for the time until Nvidia rolls out its open source kernel module. Then, hopefully, this perennial pain for Linux users will finally be over.

      enblog.eischmann.cz/2024/07/19/installing-nvidia-driver-will-be-easy-again-in-fedora-workstation/


      Jussi Pakkanen: Why refactoring is harder than you think, a pictorial representation

      news.movim.eu / PlanetGnome · Wednesday, 17 July - 14:47

    Suppose you are working on a legacy code base. It looks like this:

    Within it you find a piece of functionality that does a single thing, but is implemented in a hideously complicated way. Like so.

    You look at that and think: "That's awful. I know how to do it better. Faster. Easier. More maintainable." Then you set out to do just that. And you succeed in producing this thing.

    Wow. Look at that slick and smooth niceness. Time to excise the bad implementation from your code base.

    The new one is put in:

    And now you are don ... oh, my!



      nibblestew.blogspot.com/2024/07/why-refactoring-is-harder-than-you.html