
      Dorothy Kabarozi: Conversations in Open Source: Insights from Informal chats with Open Source contributors.

      news.movim.eu / PlanetGnome · 05:06 · 2 minutes

    Introduction

    Open source embodies collaboration, innovation, and accessibility within the technological realm. Seeking personal insights behind the collaborative efforts, I engaged in conversations with individuals integral to the open source community, revealing the diversity, challenges, and impacts of their work.

    Conversations Summary

    Venturing beyond my comfort zone, I connected with seasoned open source contributors, each offering unique perspectives and experiences. Their roles varied from project maintainers to mentors, working on everything from essential libraries to innovative technologies.

• Das: Shared valuable insights on securing roles in open source, including resources for applications and tips for academic writing and conference speaking. The best part was that she also reviewed my resume, suggested many ways I could make it stand out, and shared templates to use for it. We had a really great chat.
• Samuel: A seasoned C/C++ programmer working mainly on open-source user-space driver development. He was kind enough to share his 20-year journey of how he started working with open source and how much he loves working with low-level hardware. He also commended Outreachy as a great opportunity, as well as my contributions to GNOME in QA testing. He encouraged me to apply for roles at the company he works for, and highlighted, “Even if they say NO now, next time they will say YES”. Samuel also encouraged me to find my passion, which will guide me to learn faster and build my personal brand, and he encouraged me to submit some conference talks.
• Dustin: Shared his 20-year journey; we mostly talked about programming and software engineering in general. He highlighted the significance of networking and of adapting to learn quickly in open source. He told a story about how he “printed out code on one of his first jobs and learnt a skill of figuring out early what you don’t need to understand when faced with a big code base”. This is a skill I needed at the start, instead of drowning in documentation trying to understand the project and where to begin.
• Stefan: Discussed his transition from GSoC participant to mentor, shared open source job links, and commended Outreachy as a big plus. He highlighted not setting yourself up with the mental block that you can’t do anything, because you can. He encouraged me to submit talks at conferences, to network, and to publish my work.

These interactions showcased the wide-ranging backgrounds and motivations within the open source community and have deepened my respect for it and its contributors. I have some homework to do on my resume and on the links to opportunities that were shared with me. Open source welcomes contributors at all levels, offering a platform for innovation and collective achievement.

Feel free to apply to be an Outreachy intern in an upcoming cohort to start your journey.

    Best of luck.

dorothykabarozi.wordpress.com/2024/02/21/conversations-in-open-source-insights-from-informal-chats-with-open-source-contributors/


      Carlos Garcia Campos: A Clarification About WebKit Switching to Skia

      news.movim.eu / PlanetGnome · Yesterday - 18:11

In the previous post I talked about the plans of the WebKit ports currently using Cairo to switch to Skia for 2D rendering. Apple ports don’t use Cairo, so they won’t be switching to Skia. I understand the post title was confusing; I’m sorry about that. The original post has been updated for clarity.

blogs.igalia.com/carlosgc/2024/02/20/a-clarification-about-webkit-switching-to-skia/


      Matthew Garrett: Debugging an odd inability to stream video

      news.movim.eu / PlanetGnome · 2 days ago - 22:30 · 5 minutes

    We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

    This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

    This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.
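
As a rough sketch of where that 1420 figure comes from (this assumes the worst case of an IPv6 outer packet; the exact overhead depends on the configuration):

    1500 bytes   underlying link MTU
    -  40 bytes  outer IPv6 header (20 for IPv4)
    -   8 bytes  UDP header
    -  32 bytes  Wireguard message header and authentication tag
    = 1420 bytes effective MTU for the Wireguard interface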

And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link whose MTU it exceeds, the router there will send back an ICMP packet telling the sender that the packet is too big and what the actual MTU is, and the sender will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything.
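
(If you want to check by hand whether path MTU discovery works over a given route, Linux ping can do it for you - something along these lines, where the host is just a placeholder and 1450 bytes of payload plus 28 bytes of ICMP and IP headers is deliberately bigger than a 1420 byte path MTU:

ping -M do -s 1450 host.example.com

With -M do the Don't Fragment bit is set, so if the ICMP "too big" responses are making it back to you, ping should fail with an error mentioning the discovered MTU rather than just timing out.)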

    What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

    All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

    (Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)
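
For reference, the usual iptables mechanism for this is the TCPMSS target applied to SYN packets; the post doesn't say exactly which rule was used, so this is only a minimal sketch, with 1380 being the 1420 byte Wireguard MTU minus 20 bytes of IP and 20 bytes of TCP headers:

iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1380

Using --clamp-mss-to-pmtu instead of --set-mss derives the value from the outgoing interface's MTU automatically.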

    I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?


      Carlos Garcia Campos: WebKit Switching to Skia for 2D Graphics Rendering

      news.movim.eu / PlanetGnome · 2 days ago - 13:27 · 3 minutes

    In recent years we have had an ongoing effort to improve graphics performance of the WebKit GTK and WPE ports. As a result of this we shipped features like threaded rendering, the DMA-BUF renderer, or proper vertical retrace synchronization (VSync). While these improvements have helped keep WebKit competitive, and even perform better than other engines in some scenarios, it has been clear for a while that we were reaching the limits of what can be achieved with a CPU based 2D renderer.

    There was an attempt at making Cairo support GPU rendering, which did not work particularly well due to the library being designed around stateful operation based upon the PostScript model—resulting in a convenient and familiar API, great output quality, but hard to retarget and with some particularly slow corner cases. Meanwhile, other web engines have moved more work to the GPU, including 2D rendering, where many operations are considerably faster.

    We checked all the available 2D rendering libraries we could find, but none of them met all our requirements, so we decided to try writing our own library. At the beginning it worked really well, with impressive results in performance even compared to other GPU based alternatives. However, it proved challenging to find the right balance between performance and rendering quality, so we decided to try other alternatives before continuing with its development. Our next option had always been Skia. The main reason why we didn’t choose Skia from the beginning was that it didn’t provide a public library with API stability that distros can package and we can use like most of our dependencies. It still wasn’t what we wanted, but now we have more experience in WebKit maintaining third party dependencies inside the source tree like ANGLE and libwebrtc, so it was no longer a blocker either.

In December 2023 we decided to give Skia a try internally and see whether it would be worth the effort of maintaining the project as a third-party module inside WebKit. In just one month we had implemented enough features to be able to run all MotionMark tests. The results on the desktop were quite impressive, doubling the global MotionMark score. We still had to do more tests on embedded devices, which are the actual target of WPE, but it was clear that, at least on the desktop, we got much better results even with this very initial implementation that was not yet optimized (we kept our current architecture, which is optimized for CPU rendering). We decided that Skia was the option, so we continued working on it and doing more tests on embedded devices. On the boards we tried we also got better results than CPU rendering, but the difference was not as big, meaning that with less powerful GPUs, and with our current architecture designed for CPU rendering, GPU rendering was not that far ahead of CPU rendering. That’s the reason we have managed to keep WPE competitive on embedded devices, but Skia will not only bring performance improvements: it will also simplify the code and allow us to implement new features. So we had enough data to make the final decision of going with Skia.

In February 2024 we reached a point where our internal Skia branch was in an “upstreamable” state, so there was no reason to continue working privately. We met with several teams from Google, Sony, Apple and Red Hat to discuss our intention to switch from Cairo to Skia, upstreaming what we had as soon as possible. We got really positive feedback from all of them, so we sent an email to the WebKit developers mailing list to make it public. Again we only got positive feedback, so we started to prepare the patches to import Skia into WebKit, add the CMake integration and land the initial Skia implementation for the WPE port, which is already in main.

We will continue working on the Skia implementation in upstream WebKit, and we also plan to change our architecture to better support GPU rendering. We don’t have a deadline; it will be ready when we have implemented everything currently supported by Cairo, since we don’t plan to switch with regressions. We are focused on the WPE port for now, but at some point we will start working on GTK too, and other ports using Cairo will eventually start getting Skia support as well.

blogs.igalia.com/carlosgc/2024/02/19/webkit-switching-to-skia-for-2d-graphics-rendering/


      Juan Pablo Ugarte: Cambalache Gtk4 port goes beta!

      news.movim.eu / PlanetGnome · 5 days ago - 14:32

    Hi, I am happy to announce Cambalache’s Gtk4 port has a beta release!

Version 0.17.2 features minor improvements and a brand new UI ported to Gtk 4!

    Editing Cambalache UI in Cambalache

The port was easier than expected, but it still involved lots of changes, as you can see here…

    64 files changed, 2615 insertions(+), 2769 deletions(-)

I especially like the new GtkDialog API and the removal of gtk_dialog_run().
With so many changes I expect some new bugs, so if you find any, please file them here.

    Where to get it?

    You can get it from Flathub Beta

flatpak remote-add --if-not-exists flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo
    
    flatpak install flathub-beta ar.xjuan.Cambalache

or check out the main branch from GitLab:

    git clone https://gitlab.gnome.org/jpu/cambalache.git

    Matrix channel

Have any questions? Come chat with us at #cambalache:gnome.org

    Mastodon

Follow me on Mastodon at @xjuan to get news related to Cambalache development.

    Happy coding!

blogs.gnome.org/xjuan/2024/02/16/cambalache-gtk4-port-goes-beta/


      Sam Thursfield: Status update, 16/02/2024

      news.movim.eu / PlanetGnome · 5 days ago - 14:16 · 4 minutes

    Some months you work very hard and there is little to show for any of it… so far this is one of those very draining months. I’m looking forward to spring, and spending less time online and at work.

    Rather than ranting I want to share a couple of things from elsewhere in the software world to show what we should be aiming towards in the world of GNOME and free operating systems.

Firstly, from “Is something bugging you?”:

    Before we even started writing the database, we first wrote a fully-deterministic event-based network simulation that our database could plug into. … if one particular simulation run found a bug in our application logic, we could run it over and over again with the same random seed, and the exact same series of events would happen in the exact same order. That meant that even for the weirdest and rarest bugs, we got infinity “tries” at figuring it out, and could add logging, or do whatever else we needed to do to track it down.

The text is hyperbolic, and for some reason they think it’s ok to work with Palantir, but it’s an inspiring read.

    Secondly, a 15 minute video which you should watch in its entirety, and then consider how the game development world got so far ahead of everyone else.

    This is the world that’s possible for operating systems, if we focus on developer experience and integration testing across the whole OS. And GNOME OS + openQA is where I see some of the most promising work happening right now.

Of course we’re a looooong way from this promised land, despite the progress we’ve made in the last 10+ years. Automated testing of the OS is great, but we don’t always have logs (bug), the image randomly fails to start (bug), the image takes hours to build, we can’t test merge requests (bug), testing takes 15+ minutes to run, etc. Some of these issues seem intractable when occasional volunteer effort is all we have.

Imagine a world where you can develop and live-deploy changes to your phone and your laptop OS, exhaustively test them in CI, step backwards with a debugger when problems arise – this is what we should be building in the tech industry. A few teams are chipping away at this vision – in the Linux world, GNOME Builder and the Fedora Atomic project spring to mind, and I’m sure there are more.

    Anyway, what happened last month?

    Outreachy

This is the final month of the Outreachy internship that I’m running around end-to-end testing of GNOME. We have already had some wins: there are now 5 separate testsuites running against GNOME OS, though they are unfortunately rather useless at present due to random startup failures.

I spent a *lot* of time working with Tanju on a way to usefully test GNOME on Mobile. I haven’t been able to follow this effort closely beyond seeing a few demos and old blogs, so this week was something of a crash course on what there is. Along the way I got pretty confused about scaling in GNOME Shell – it turns out there’s currently a hardcoded minimum screen size, and upstream Mutter will refuse to scale the display below a certain size. In fact upstream GNOME Shell doesn’t have any of the necessary adaptations for use in a mobile form factor. We really need a “GNOME OS Mobile” VM image – here’s an open issue – but it’s unlikely to be done within the last 2 weeks of the current internship. The best we can do for now is test the apps on a regular desktop screen, but with the window resized to 360×720 pixels.

On the positive side, hopefully this has been a useful journey for Tanju and Dorothy into the inner workings of GNOME. On a separate note, we submitted a workshop on openQA testing to GUADEC in Denver, and if all goes well with travel sponsorship and US visa applications, we hope to actually meet in person there in July.

    FOSDEM

I went to FOSDEM 2024 and had a great time; it was one of my favourite FOSDEM trips. I managed to avoid the ‘flu – I think wearing a mask on the plane is the secret. From Codethink we were 41 people this year – probably a new record.

I went a day early to join in the end of the GTK hackfest, and did a little work on the Tiny SPARQL database, formerly known as Tracker SPARQL. Together with Carlos we fixed breakage in the CLI, improved HTTP support and prototyped a potential internship project to add a web based query editor.

My main goal at FOSDEM was to make contact with other openQA users and developers, and we had some success there. Since then I’ve hashed out a wishlist for openQA for GNOME’s use cases, and we’re aiming to set up an open, monthly call where different QA teams can get together and collaborate on a realistic roadmap.

I saw some great talks too; the “Outreachy 1000 Interns” talk and the Fairphone “Sustainable and Longlasting Phones” talk were particular highlights. I went to the Bytenight event for the first time and found an incredible 1970s Wurlitzer transistor organ in the “smoking area” of the HSBXL hackspace, and also beat Javi, Bart and Adam at SuperTuxKart several times.


      Andy Wingo: guix on the framework 13 amd

      news.movim.eu / PlanetGnome · 5 days ago - 10:23 · 6 minutes

    I got a new laptop! It’s a Framework 13 AMD : 8 cores, 2 threads per core, 64 GB RAM, 3:2 2256×1504 matte screen. It kicks my 5-year-old Dell XPS 13 in the pants, and I am so relieved to be back to a matte screen. I just got it up and running with Guix , which though easier than past installation experiences was not without some wrinkles, so here I wanted to share a recipe for what worked for me.

    (I swear this isn’t going to become a product review blog, but when I went to post something like this on the Framework forum I got an error saying that new users could only post 2 links. I understand how we got here but hoo, that is a garbage experience!)

    The basic deal

    Upstream Guix works on the Framework 13 AMD, but only with software rendering and no wifi, and I wasn’t able to install from upstream media. This is mainly because Guix uses a modified kernel and doesn’t include necessary firmware. There is a third-party nonguix repository that defines packages for the vanilla Linux kernel and the linux-firmware collection; we have to use that repo if we want all functionality.

    Of course having the firmware be user-hackable would be better, and it would be better if the framework laptop used parts with free firmware. Something for a next revision, hopefully.

    On firmware

As an aside, I think the official Free Software Foundation position on firmware is bad praxis. To recall, the idea is that if a device has embedded software (firmware) that can be updated, but that software is in a form that users can’t modify, then the system as a whole is not free software. This is technically correct but doesn’t logically imply that the right strategy for advancing free software is to forbid firmware blobs; you have a number of potential policy choices and you have to look at their expected results to evaluate which one is most in line with your goals.

Bright lines are useful, of course; I just think that with respect to free software, drawing that line around firmware is not interesting. To illustrate this point, I believe the current FSF position is that if you can run e.g. a USB ethernet adapter without installing non-free firmware, then it is kosher, otherwise it is haram. However many of these devices have firmware; it’s just that you aren’t updating it. So for example the USB Ethernet adapter I got with my Dell system many years ago has firmware, therefore it has bugs, but I have never updated that firmware because that’s not how we roll. Or, on my old laptop, I never updated the CPU microcode, despite Spectre and Meltdown and all the rest.

    “Firmware, but never updated” reminds me of the wires around some New York neighborhoods that allow orthodox people to leave the house on Sabbath; useful if you are of a given community and enjoy the feeling of belonging, but I think even the faithful would see it as a hack. It is like how Richard Stallman wouldn’t use travel booking web sites because they had non-free JavaScript, but would happily call someone on the telephone to perform the booking for him, using those same sites. In that case, the net effect on the world of this particular bright line is negative: it does not advance free software in the least and only adds overhead. Privileging principle over praxis is generally a losing strategy.

    Installation

Firstly I had to turn off secure boot in the BIOS settings; it’s in “security”.

I wasn’t expecting wifi to work out of the box, but for some reason the upstream Guix install media was not able to configure the network via the Ethernet expansion card or via an external USB-C ethernet adapter that I had; it got stuck at the DHCP phase. So my initial installation attempt failed.

Then I realized that the nonguix repository has installation media, which is the same as upstream but with the vanilla kernel and linux-firmware. So on another machine where I had Guix installed, I added the nonguix channel and built the installation media, via guix system image -t iso9660 nongnu/system/install.scm. That gave me a file that I could write to a USB stick.
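
Adding the channel on that other machine looks roughly like this in ~/.config/guix/channels.scm (a minimal sketch along the lines of the nonguix README; check the README for the current channel introduction and substitute-server details), followed by guix pull:

(cons* (channel
        (name 'nonguix)
        (url "https://gitlab.com/nonguix/nonguix"))
       %default-channels)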

    Using that installation media, installing was a breeze.

However upon reboot, I found that I had no wifi and I was using software rendering; clearly, installation produced an OS config with the Guix kernel instead of upstream Linux. Happily, at this point the ethernet expansion card was able to work, so connect to wired ethernet, open /etc/config.scm, add the needed lines as described in the operating-system part of the nonguix README, reconfigure, and reboot. Building Linux takes a little less than an hour on this machine.
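
For reference, the needed lines are roughly these (a sketch following the nonguix README’s operating-system example; everything else in the generated configuration stays as it is):

(use-modules (gnu)
             (nongnu packages linux)
             (nongnu system linux-initrd))

(operating-system
  (kernel linux)                   ; vanilla Linux instead of linux-libre
  (initrd microcode-initrd)        ; CPU microcode updates
  (firmware (list linux-firmware)) ; wifi and GPU firmware
  ;; ...the rest of the generated configuration...
  )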

    Fractional scaling

    At that point you have wifi and graphics drivers. I use GNOME, and things seem to work. However the screen defaults to 200% resolution, which makes everything really big. Crisp, pretty, but big. Really you would like something in between? Or that the Framework ships a higher-resolution screen so that 200% would be a good scaling factor; this was the case with my old Dell XPS 13, and it worked well. Anyway with the Framework laptop, I wanted 150% scaling, and it seems these days that the way you have to do this is to use Wayland, which Guix does not yet enable by default.

    So you go into config.scm again, and change where it says %desktop-services to be:

    (modify-services %desktop-services
      (gdm-service-type config =>
        (gdm-configuration (inherit config) (wayland? #t))))
    

Then when you reboot you are in Wayland. Works fine, it seems. But then you have to go and enable an experimental mutter setting; install dconf-editor, run it, search for keys with “mutter” in the name, find the “experimental settings” key, tell it to not use the default setting, then click the box for “scale-monitor-framebuffer”.
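
If you would rather not click through dconf-editor, I believe the equivalent one-liner is:

gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

(note that this overwrites whatever experimental features were previously enabled, so check the current value with gsettings get first).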

    Then! You can go into GNOME settings and get 125%, 150%, and so on. Great.

HOWEVER, and I hope this is a transient situation, there is a problem: in GNOME, applications that aren’t native Wayland apps don’t scale nicely. It’s like the app gets rendered to a texture at the original resolution, which then gets scaled up in a blurry way. There aren’t so many of these apps these days as most things have been ported to be Wayland-capable, Firefox included, but Emacs is one of them :( However however! If you install the emacs-pgtk package instead of emacs, it looks better. Not perfect, but good enough. So that’s where I am.
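
Installing it should just be:

guix install emacs-pgtk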

    Bugs

The laptop hangs on reboot due to this bug, but that seems a minor issue at this point. There is an ongoing tracker discussion on the community forum; like other problems in that thread, I hope that this one resolves itself upstream in Linux over time.

    Other things?

    I didn’t mention the funniest thing about this laptop: it comes in pieces that you have to put together :) I am not so great with hardware, but I had no problem. The build quality seems pretty good; not a MacBook Air, but then it’s also user-repairable, which is a big strong point. It has these funny extension cards that slot into the chassis, which I have found to be quite amusing.

    I haven’t had the machine for long enough but it seems to work fine up to now: suspend, good battery use, not noisy (unless it’s compiling on all 16 threads), graphics, wifi, ethernet, good compilation speed. (I should give compiling LLVM a go; that’s a useful workload.) I don’t have bluetooth or the fingerprint reader working yet; I give it 25% odds that I get around to this during the lifetime of this laptop :)

    Until next time, happy hacking!

wingolog.org/archives/2024/02/16/guix-on-the-framework-13-amd


      Jiri Eischmann: Bisecting Regressions in Fedora Silverblue

      news.movim.eu / PlanetGnome · 6 days ago - 10:00 · 3 minutes

    I know that some blog posts on this topic have already been published, but nevertheless I have decided to share my real-life experience of bisecting regressions in Fedora Silverblue.

    My work laptop, Dell XPS 13 Plus, has an Intel IPU6 webcam. It still lacks upstream drivers, so I have to use RPMFusion packages containing Intel’s software stack to ensure the webcam’s functionality.

    However, on Monday, I discovered that the webcam was not working. Just last week, it was functioning, and I had no idea what could have broken it. I don’t pay much attention to updates; they’re staged automatically and applied with the next boot. Fortunately, I use Fedora Silverblue, where problems can be identified using the bisection method.

    Silverblue utilizes OSTree , essentially described as Git for the operating system. Each Fedora version is a branch, and individual snapshots are commits. You update the system by moving to newer commits and upgrade by switching to the branch with the new version. Silverblue creates daily snapshots, and since OSTree allows you to revert to any commit in the repository, you can roll back the system to any previous date.

    I decided to revert to previous states and determine when the regression occurred. I downloaded commit metadata for the last 7 days:

    sudo ostree pull --commit-metadata-only --depth 7 fedora fedora/39/x86_64/silverblue

    Then, I listed the commits to get their labels and hashes:

    sudo ostree log fedora:fedora/39/x86_64/silverblue

    Subsequently, I returned the system to the beginning of the previous week:

    rpm-ostree deploy 39.20240205.0

    After rebooting, I found that the webcam was working, indicating that something had broken it last week. I decided to halve the interval and return to Wednesday:

    rpm-ostree deploy 39.20240207.0

    In the Wednesday snapshot, the webcam was no longer functioning. Now, I only needed to determine whether it was broken by Tuesday’s or Wednesday’s update. I deployed the Tuesday snapshot:

    rpm-ostree deploy 39.20240206.0

I found out that the webcam was still working in this snapshot. It was broken by Wednesday’s update, so I needed to identify which packages had changed. I used the hashes from one of the outputs above:

    rpm-ostree db diff ec2ea04c87913e2a69e71afbbf091ca774bd085530bd4103296e4621a98fc835 fc6cf46319451122df856b59cab82ea4650e9d32ea4bd2fc5d1028107c7ab912
    
    ostree diff commit from: ec2ea04c87913e2a69e71afbbf091ca774bd085530bd4103296e4621a98fc835
    ostree diff commit to: fc6cf46319451122df856b59cab82ea4650e9d32ea4bd2fc5d1028107c7ab912
    Upgraded:
    kernel 6.6.14-200.fc39 -> 6.7.3-200.fc39
    kernel-core 6.6.14-200.fc39 -> 6.7.3-200.fc39
    kernel-modules 6.6.14-200.fc39 -> 6.7.3-200.fc39
    kernel-modules-core 6.6.14-200.fc39 -> 6.7.3-200.fc39
    kernel-modules-extra 6.6.14-200.fc39 -> 6.7.3-200.fc39
    llvm-libs 17.0.6-2.fc39 -> 17.0.6-3.fc39
    mesa-dri-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
    mesa-filesystem 23.3.4-1.fc39 -> 23.3.5-1.fc39
    mesa-libEGL 23.3.4-1.fc39 -> 23.3.5-1.fc39
    mesa-libGL 23.3.4-1.fc39 -> 23.3.5-1.fc39
    mesa-libgbm 23.3.4-1.fc39 -> 23.3.5-1.fc39
    mesa-libglapi 23.3.4-1.fc39 -> 23.3.5-1.fc39
    mesa-libxatracker 23.3.4-1.fc39 -> 23.3.5-1.fc39
    mesa-va-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
    mesa-vulkan-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
    qadwaitadecorations-qt5 0.1.3-5.fc39 -> 0.1.4-1.fc39
    uresourced 0.5.3-5.fc39 -> 0.5.4-1.fc39

The kernel upgrade from 6.6 to 6.7 was the logical suspect. I informed a colleague, who maintains the RPMFusion packages with IPU6 webcam support, that the 6.7 kernel broke the webcam support for me. He asked for the model and informed me that it had an Intel VSC chip, which has a newly added driver in the 6.7 kernel. However, it conflicts with Intel’s software stack in the RPMFusion packages. So he asked the Fedora kernel maintainer to temporarily disable the Intel VSC driver. This change will come with the next kernel update.

    Until the update arrives, I decided to stay on the last functional snapshot from the previous Tuesday. To achieve this, I pinned it in OSTree. Because the system was currently running on that snapshot, all I had to do was:

    sudo ostree admin pin 0

    And that’s it. Just a small real-life example of how to identify when and what caused a regression in Silverblue and how to easily revert to a version before the regression and stay on it until a fix arrives.

enblog.eischmann.cz/2024/02/15/bisecting-regressions-in-fedora-silverblue/


      Luis Villa: Boox Page: A Review

      news.movim.eu / PlanetGnome · 6 days ago - 02:26 · 7 minutes

    I love words, and have as far back as I can remember. Mom says I started reading the newspaper religiously in kindergarten, I proudly show off a small percentage of my books in my videoconferencing background, and of course the web is (still) junk straight into my veins.

    So I’ve long wanted an e-ink device that could do multi-vendor ebooks, pdfs, and a feed reader. (I still consume a lot of RSS, mostly via the excellent feedbin .) But my 8yo Kindle kept not dying, and my last shot at this (a Sony) was expensive and frustrating.

Printery, by Dennis Jarvis; used under CC BY 2.0.

    Late last year, though, I found out about the Boox Page . It is… pretty great? Some notes and tips:

    • My use case: I don’t write or take notes on this, so I can’t advise on whether it is a good Remarkable (or similar) competitor. Also don’t use the music player or audiobooks.
• Price: Pretty reasonable for something I use this much. Open question how long they do Android support, unfortunately—Reddit users report five years for older models, which isn’t great but (unlike a phone) not as hugely active a security risk.
    • Screen and body: screen, lighting, size, and weight are easily on par with my old Kindle, which is to say “comfortable on eyes and hands for a lot of long-term reading”.
    • Battery life: This is not an OG Kindle, but I have gone multiple plane flights without recharging, and it is USB-C, so (unlike my micro-USB Kindle) easy to charge while traveling.
    • Other hardware: Meh.
      • The hardware volume buttons are… fine? But small. I would not miss them if I had to rely on the screen.
      • The cover is a fine cover, but it is not an effective stand – something I miss from my Kindle.
      • I suspect repairability is terrible. Sorry, iFixit :(
    • Core operating system
      • UX: Android modded to use e-ink is… surprisingly fine? I use the Android layer essentially to (1) install apps and (2) launch them. And those work well, though the default launcher screen has some clutter that can’t be removed. Some of the alternate Android launchers apparently work , but I haven’t tried any of them yet.
      • performance: Boot time is very long, and for unknown reasons it seems to default to a full shutdown when you close the cover, which really destroys the “open up the book and immediately start reading” experience. Strongly recommend Power → Power-off Timeout → “never” to avoid this problem.
• Security: There is no thumbprint or face scan here, and the e-ink keyboard is as good as possible but still slow to enter a high-strength password. So I’ve chosen to put as little private information on this as possible. This has motivated me to update some passwords to “xkcd-style” four-word, lower-case strings, to make them easier to read from my phone and type in on the Boox.
• Openness: it looks like Onyx violates the GPL. Folks have successfully rooted older Boox devices, but I have not seen any evidence of anyone doing that with the Page yet.
    • UX “improvements” : There are so many; most aren’t great but the worst ones can all be turned off.
• “Naviball”: a weird dot-as-UI thing that I turned off nearly immediately.
• Gestures: To save screen real-estate, standard Android back and home buttons are by default replaced with gestures (documentation). I found the gestures a bit unreliable and so turned the bottom toolbar back on. Not great but fine.
      • E-ink settings: There are a lot of different settings for e-ink refresh strategies (eg do you want it faster but lossy, slower but clearer, etc.), which are adjustable both overall and on a per-app basis. The default settings seem to work pretty well for me, but I suspect at least some apps would benefit from tweaking. Docs here for more details.
    • App stores:
• Google Play: pre-installed, and can be used to install Kindle, Libby, and Hoopla. It works fine, with two caveats: I don’t love giving this device my Google password; and it is clear that the best iOS apps have noticeably better UX than the best Android apps.
• F-Droid: Works great! Used it to install the Wikipedia app, KOReader, and Tusky, among others.
    • Getting books:
      • Kindle app works great. You will not lose access to your existing e-book library by moving away from a Kindle.
• Libby and Hoopla: Both of the apps used by my library (San Francisco Public) to make e-books available install fine from the Google Play App Store and work great. Have happily read long books in both of them.
      • Standard Ebooks: Their website and “advanced epub” work great on the Boox, so I’ve been plowing through a bunch of old public domain stuff. This has been maybe the single most satisfying part of the device, to be honest—it’s been great to refocus my reading that way instead of being guided by Amazon’s recommendations.
• Onyx: the folks behind Boox have a book store, which can be turned off but not removed from the home-screen UX. I have not used it.
    • Reading books:
      • All of the apps above have worked fine, with the minor detail that in some of them (notably Kindle) you have to find a setting to enable “use volume control as page up/down” in order to use the page turning buttons.
• KOReader, from F-Droid, was recommended to me. Its default text presentations are, IMHO, not ideal. But once I spent some time experimenting to get proper margins and line spacing, it became very pleasant. I use it to read the “advanced” formatted epubs from Standard Ebooks, and it handles those very well.
    • Managing books:
      • KO Reader and the built-in book reader mostly default to “just expose the file system” as a book management view, which is… not great.
• KO Reader does not sync reading positions, so if you’re used to going back and forth between your e-ink device and your phone, you may need to do more work, either to just remember your positions, or to set up something like calibre-web.
      • I’m using Story Graph as my main “what did I read lately” repository and it works great as a progressive web app.
    • The web
      • The basics: the Page has a reasonably modern Chrome (called “Neobrowser”). Mostly just works, none of the “experimental” web browser stuff from Kindle.
      • Scrolling: can be painful; e-ink and scrolling don’t mix too well. Much depends on the nature of the web page (fine) or web app (grimace) you’re using – some are fine, others are not so much.
• Progressive Web Apps: The browser often prompts you to install web sites as app icons on the main launcher page, and they generally work great. Really nice alternative to app stores, just like we always hoped they’d be. Maybe just a few years too late :(
    • Other sources of text
• Feeds: Feedbin.com in the browser is… fine. Refresh is sort of wonky, though. I’ve tried both Android apps listed on the feedbin site, but don’t love either. (I like the experience enough that I have resurrected my dream of a more newspaper-like app to read on this in the morning.)
      • Fediverse: Tusky is the most-recommended open Android Mastodon app, and it is fine, but I have found that I prefer Phanpy in-browser (despite some performance challenges) to the better performance but rough UX edges of Tusky. There are quite a few other options; I have not explored them deeply.
• PDFs: The couple of PDFs I’ve tried to read have been fine, but I have not pushed it, and I suspect they won’t be great given the smaller screen; either a larger Boox or a Remarkable seems more the right thing if that is your primary use case. I really need to set up some sort of syncing and prioritizing system; many of them are reputed to work (again, basically stock-ish Android).
      • Wikipedia: App works great. Haven’t tried editing yet though ;)
    • Things I’d like to explore more:
• Security: can I move entirely to F-Droid-sourced apps so I don’t need this to have my Google password? Can I replace most Onyx-provided apps next?
      • “Find my”: Is there a “find this device” (and/or “remote wipe this”) service that would help me track it down in case of loss, but that doesn’t require trusting the device very much (see Google password above)? No idea but given that I’ve already nearly lost it once, I need to look into it.
      • Sharing out: I like sharing out links, quotes, snippets, etc. but typing on this is a bit of a mess. I’d like to figure out some sort of central output flow, where I can use the Android share feature to dump into a stream that I can then surf and resurface from other devices. Not sure what the right service is to host that, though.
    • Bottom line: I am really enjoying this, and it is enough under my control that I suspect I can make it even better. It certainly isn’t a dream device from a freedom and privacy perspective, but for my limited use case that’s fine.

lu.is/2024/02/boox-page-a-review/