

      Alley Chaggar: YAML Research

      news.movim.eu / PlanetGnome • 18 July • 3 minutes

    Intro

    Hi everyone, sorry for the late post. Midterms are this week for GSoC, which means I’m halfway through GSoC. It’s been an incredible experience so far, and I know it’s going to continue to be great.

    API vs. ABI

    What is the difference between an application programming interface (API) and an application binary interface (ABI)? At first, this question confused me, because I wasn’t familiar with ABIs. Understanding what an ABI is has helped me decide which libraries I should consider using in the codegen phase. Vala, notably, is designed around a C ABI. First, let’s understand what each is separately, and then compare them.

    API

    Personally, I think APIs are the better-known of the two. At a high level, an API is usually defined as a set of definitions and protocols that two software components or computers use to communicate with each other. I always thought this definition was pretty vague and expansive. When dealing with code-level APIs, I like to understand them this way: APIs are entities existing in your user code (source code), such as functions, constants, and structures. You can think of it as: when you write code, you access libraries through an API. For example, when you write print('hello world') in Python, print() is part of the API of Python’s standard library.

    ABI

    An ABI, on the other hand, is very similar, but instead of coming into play at compile time, it governs what happens at runtime. Runtime is when your program is done compiling (after going through lexical, syntax, and semantic analysis, etc.) and the machine is actually running your executable. An ABI's concerns are very low-level: it specifies how compiled code should interact, particularly in the context of operating systems and libraries. It sets the protocols and standards for how the OS handles your program, such as storage, memory, and hardware, and for how your compiled binary works with other compiled components.
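    To make the distinction concrete, here is a small C sketch (the struct names are invented for illustration). The API is the set of names your source code uses; the ABI is the concrete layout that already-compiled callers have baked in. Adding a field keeps the API intact but breaks the ABI:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical library type, for illustration only.
 * API view: "a Point has fields x and y" -- source code using p.x and p.y
 * compiles unchanged against either version below.
 * ABI view: the concrete size and field offsets that already-compiled
 * callers rely on -- these differ between the versions, so shipping v2 in
 * place of v1 is an ABI break even though the API did not change. */
struct point_v1 { int x; int y; };
struct point_v2 { int tag; int x; int y; };  /* field added in a "new release" */
```

    This is why the codegen can consume pure Vala libraries: the Vala compiler emits C, so a Vala library exposes exactly this kind of C-level layout and calling convention.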

    YAML Libraries

    I’ve started to look into YAML and XML (mainly YAML). I’ve looked into many different YAML libraries, such as pluie-yaml, libyaml-glib, glib-yaml, and libyaml. From my research, there are no well-maintained YAML libraries that integrate with GObject or GLib. The goal is to find a well-maintained library that I can use in the codegen.

    Pluie yaml

    I mentioned pluie-yaml, but this library isn’t a C library like json-glib; it’s a shared Vala library. The good part is that the codegen can use pure Vala libraries, because Vala libraries have a C ABI. The bad part is that this library is not well-maintained: the last activity was 7 years ago.

    Libyaml glib

    Libyaml-glib is a GLib binding of libyaml, plus a GObject builder that understands YAML. Just like pluie-yaml, it’s not a C library; it’s written in Vala. And just like pluie-yaml, it’s not well-maintained, with the last activity stretching back even further, to 9 years ago.

    Glib yaml

    Glib-yaml is a GLib-based YAML parser written in C. Just like the other libraries, it doesn’t pass the maintenance check, since the repo has seen no updates or commits for 13 years. It’s also only a parser: it doesn’t serialize or emit YAML, so even if it were well-maintained, I’d still need to emit YAML manually or find another library that does so.

    Libyaml

    In conclusion, libyaml is the C library that I will be using for parsing and emitting YAML. It has a C ABI, and it’s the best-maintained of all the libraries above. Vala already has a VAPI file binding it, yaml-0.1.vapi. However, there is no GObject or GLib integration, unlike json-glib, but that should be fine.


      Michael Meeks: 2025-07-16 Wednesday

      news.movim.eu / PlanetGnome • 16 July

    • Up hyper-early, got to work instead of trying to sleep: much more productive, another half a day's work.
    • Found some lost A/C folk, and helped them maintain our system.
    • Sync with Dave, Sarper, Thorsten, Mitch helped me unwind some Apple account horrors.
    • Published the next strip on a topic that plagues engineers: having a wonderful product or service is futile without a proper marketing & sales process.

      Michael Meeks: 2025-07-15 Tuesday

      news.movim.eu / PlanetGnome • 15 July

    • Up early, cleared some mail, planning call, sync with Laser, Andras, snatched some lunch, monthly mgmt call.
    • Customer sales call, sync with Miklos, poked at some code, multi-partner call. Fixed a silly flicker issue in the COOL UI.
    • Supervised some EPQ-ness with Lily, lots of sanding and varnish.
    • Curious to read about randomness as an idea to "prevent meritocracies from devolving into court societies dominated by schemers and sycophants." - interesting.

      Sam Thursfield: Status update, 15/07/2025

      news.movim.eu / PlanetGnome • 15 July • 2 minutes

    This month has involved very little programming and a huge amount of writing.

    I am accidentally writing a long-form novel about openQA testing. It’s up to 1000 words already and we’re still on the basics.

    The idea was to prepare for my talk at the GNOME conference this year, “Let’s build an openQA testsuite, from scratch”, by writing a tutorial that everyone can follow along with at home. My goal for the talk is to share everything I’ve learned about automated GUI testing in the 4 years since we started the GNOME openqa-tests project. There’s a lot to share.

    I don’t have any time to work on the tests myself — nobody seems interested in giving me paid time to work on them, it’s not exactly a fun weekend project, and my weekends are busy anyway — so my hope is that sharing knowledge will keep at least some momentum around automated GUI testing. Since we don’t seem yet to have mastered writing apps without bugs :-)

    I did a few talks about openQA over the years, always at a high level. “This is how it looks in a web browser”, and so on. Check out “The best testing tools we’ve ever had: an introduction to OpenQA for GNOME” from GUADEC 2023, for example. I told you why openQA is interesting, but I didn’t have time to talk about how to use it.

    Me trying to convince you to use openQA in 2023

    So this time I will be taking the opposite approach. I’m not going to spend time discussing whether you might use it or not. We’re just going to jump straight in with a minimal Linux system and start testing the hell out of it. Hopefully we’ll have time to jump from there to GNOME OS and write a test for Nautilus as well. I’m just going to live demo everything, and everyone in the talk can follow along with their laptops in real time.

    Anyway, I’ve done enough talks to know that this can’t possibly go completely according to plan. So the tutorial is the backup plan, which you can follow along before or after or during the talk. You can even watch Emmanuele’s talk “Getting Things Done In GNOME” instead, and still learn everything I have to teach, in your own time.

    Tutorials need to make a comeback! As a youth in the 90s, trying to make my own videogames because I didn’t have any, I loved tutorials like Denthor’s Tutorial on VGA Programming and Pete’s QBasic Site and so on. Way back in those dark ages, I even wrote a tutorial about fonts in QBasic . (Don’t judge me… wait, you judged me a while back already, didn’t you).

    Anyway, what I forgot, since those days, is that writing a tutorial takes fucking ages!


      Victor Ma: My first design doc

      news.movim.eu / PlanetGnome • 15 July • 3 minutes

    In the last two weeks, I investigated some bugs, tested some fonts, and started working on a design doc.

    Bugs

    I found two more UI-related bugs (1, 2). These are in addition to the ones I mentioned in my last blog post—and they’re all related. They have to do with GTK and sidebars and resizing.

    I looked into them briefly, but in the end, my mentor decided that the bugs are complicated enough that he should handle them himself. His fix was to replace all the .ui files with Blueprint files, and then make changes from there to squash all the bugs. The port to Blueprint also makes it much easier to edit the UI in the future.

    Font testing

    Currently, GNOME Crosswords uses the default GNOME font, Cantarell. But we’ve never really explored the possibility of using other fonts. For example, what would Crosswords look like with a monospace font? Or with a handwriting font? This is what I set out to discover.

    To change the font, I used GTK Inspector, combined with this CSS selector, which targets the grid and word suggestions list:

    edit-grid, wordlist {
     font-family: FONT;
    }
    

    This let me dynamically change the font, without having to recompile each time. I created a document with all the fonts that I tried.

    Here’s what Source Code Pro , a monospace font, looks like. It gives a more rigid look—especially for the word suggestion list, where all the letters line up vertically. Monospace font

    And here’s what Annie Use Your Telescope , a handwriting font, looks like. It gives a fun, charming look to the crossword grid—like it’s been filled out by hand. It’s a bit too unconventional to use as the default font, but it would definitely be cool to add as an option that the user can enable. Handwriting font

    Design doc

    My current task is to improve the word suggestion algorithm for the Crosswords Editor. Last week, I started working on a design doc that explains my intended change. Here’s a short snippet from the doc, which highlights the problem with our current word suggestion algorithm:

    Consider the following grid:

    +---+---+---+---+
    |   |   |   | Z |
    +---+---+---+---+
    |   |   |   | E |
    +---+---+---+---+
    |   |   |   | R |
    +---+---+---+---+
    | W | O | R |   |  < current slot
    +---+---+---+---+
    

    The 4-Down slot begins with ZER, so the only word it can be is ZERO. This means that the cell in the bottom-right corner must be the letter O.

    But 4-Across starts with WOR, and WORO is not a word. So the bottom-right corner cannot actually be the letter O. This means that the slot is unfillable.

    If the cursor is on the bottom-right cell, then our word suggestion algorithm correctly recognizes that the slot is unfillable and returns an empty list.

    But suppose the cursor is on one of the other cells in 4-Across. Then the algorithm has no idea about 4-Down and the constraint it imposes. So the algorithm returns all words that match the filter WOR?, like WORD and WORM, even though they do not actually fit the slot.
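    A minimal sketch in C of the kind of per-slot pattern filter described above (the matches() helper is invented for illustration, not the project's actual code). It happily accepts WORD and WORM for WOR?, because it knows nothing about 4-Down:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Naive slot filter: '?' matches any letter, everything else must match
 * exactly. This only looks at the current slot's own letters, so it cannot
 * reject words ruled out by a crossing slot. */
static bool matches(const char *word, const char *pattern)
{
    if (strlen(word) != strlen(pattern))
        return false;
    for (; *pattern; word++, pattern++)
        if (*pattern != '?' && *pattern != *word)
            return false;
    return true;
}
```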

    CSPs

    In the process of writing the doc, I came across the concept of a constraint satisfaction problem (CSP), and the related AC-3 algorithm. A CSP is a formalization of a problem that…well…involves satisfying a constraint. And the AC-3 algorithm is an algorithm that’s sometimes used when solving CSPs.

    The problem of filling a crossword grid can be formulated as a CSP. And we can use the AC-3 algorithm to generate perfect word suggestion lists for every cell.

    This isn’t the approach I will be taking. However, we may decide to implement it in the future. So, I documented the AC-3 approach in my design doc.
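    For a taste of how AC-3 applies here, this is a hedged C sketch of its core "revise" step on the example grid above: an across candidate survives only if some down candidate agrees on the crossing letter. The word lists and the revise() helper are invented for illustration; a full AC-3 implementation also keeps a queue of arcs and re-enqueues neighbours whenever a domain shrinks.

```c
#include <assert.h>
#include <stdbool.h>

/* One arc-consistency pass between two slot domains (lists of candidate
 * words). A word in dx is kept only if some word in dy has the same letter
 * at the crossing position (index ix in dx's words, iy in dy's words).
 * The surviving words are compacted to the front of dx; returns the number
 * of words removed. */
static int revise(const char *dx[], int *nx, const char *dy[], int ny,
                  int ix, int iy)
{
    int kept = 0, removed = 0;
    for (int a = 0; a < *nx; a++) {
        bool supported = false;
        for (int b = 0; b < ny; b++)
            if (dx[a][ix] == dy[b][iy]) { supported = true; break; }
        if (supported)
            dx[kept++] = dx[a];
        else
            removed++;
    }
    *nx = kept;
    return removed;
}
```

    On the example grid, the 4-Across domain {WORD, WORM, WORT} revised against the 4-Down domain {ZERO} (crossing at the last letter of each) becomes empty, which is exactly the "unfillable" verdict the current algorithm only reaches from the bottom-right cell.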


      Michael Meeks: 2025-07-14 Monday

      news.movim.eu / PlanetGnome • 14 July

    • A day of 1:1's, marketing content review, topped off by minuting a PCC meeting.

      Toluwaleke Ogundipe: Profiling Crosswords’ Rendering Pipeline

      news.movim.eu / PlanetGnome • 14 July • 10 minutes

    For the sake of formality, if you’re yet to read the [brief] introduction of my GSoC project, here you go.

    Rendering Puzzles in GNOME Crosswords

    GNOME Crosswords currently renders puzzles in two layers. The first is a grid of what we call the layout items: the grid cells, borders between cells and around the grid, and the intersections between borders. This is as illustrated below:

    Layout items

    The game and editor implement this item grid using a set of custom Gtk widgets: PlayCell for the cells and PlayBorder for the borders and intersections, all contained and aligned in a grid layout within a PlayGrid widget. For instance, the simple.ipuz puzzle, with just its item grid, looks like this:

    Rendered puzzle with layout item grid only

    Then it renders another layer, of what we call the layout overlays , above the grid. Overlays are elements of a puzzle which do not exactly fit into the grid, such as barred borders, enumerations (for word breaks, hyphens, etc), cell dividers and arrows (for arrowword puzzles), amongst others. The fact that overlays do not fit into the grid layout makes it practically impossible to render them using widgets. Hence, the need for another layer, and the term “overlay”.

    Overlays are currently implemented in the game and editor by generating an SVG and rendering it onto a GtkSnapshot of the PlayGrid using librsvg. The overlays for the same puzzle, the item grid of which is shown above, look like this:

    Rendered layout overlays

    When laid over the item grid, they together look like this:

    Rendered puzzle with layout item grid and overlays

    All these elements (items and overlays) and their various properties and styles are stored in a GridLayout instance, which encapsulates the appearance of a puzzle in a given state. Instances of this class can then be rendered into widgets, SVG, or any other kind of output.

    The project’s main source includes svg.c, a source file containing code to generate an SVG string from a GridLayout. It provides a function to render overlays only, another for the entire layout, and a function to create an RsvgHandle from the generated SVG string.

    Crosswords, the game, uses the SVG code to display thumbnails of puzzles (though currently only for the Cats and Dogs puzzle set), and Crossword Editor displays thumbnails of puzzle templates in its greeter for users to create new puzzles from.

    Crosswords’ puzzle picker grid
    Crossword Editor’s greeter

    Other than the game and editor, puzzles are also rendered by the crosswords-thumbnailer utility. This renders an entire puzzle layout by generating an SVG string containing both the layout items and overlays, and rendering/writing it to a PNG or SVG file. There is also my GSoC project, which ultimately aims to add support for printing puzzles in the game and editor.

    The Problem

    Whenever a sizeable, yet practical number of puzzles are to be displayed at the same time, or in quick succession, there is a noticeable lag in the user interface resulting from the rendering of thumbnails. In other words, thumbnails take too long to render!

    There are also ongoing efforts to add thumbnails to the list view in the game, for every puzzle, but the current rendering facility just can’t cut it. As for printing, this doesn’t really affect it since it’s not particularly a performance-critical operation.

    The Task

    My task was to profile the puzzle rendering pipeline to determine what the bottleneck(s) was/were. We would later use this information to determine the way forward, whether it be optimising the slow stages of the pipeline, replacing them or eliminating them altogether.

    The Rendering Pipeline

    The following is an outline of the main/top-level stages involved in rendering a puzzle from an IPUZ file to an image (say, an in-memory image buffer):

    1. ipuz_puzzle_from_file(): parses a file in the IPUZ format and returns an IpuzPuzzle object representing the puzzle.
    2. grid_state_new(): creates a GridState instance which represents the state of a puzzle grid at a particular instant, and contains a reference to the IpuzPuzzle object. This class is at the core of Crosswords and is pretty quick to instantiate.
    3. grid_layout_new(): creates a GridLayout instance (as earlier described) from the GridState .
    4. svg_from_layout(): generates an SVG string from the GridLayout .
    5. svg_handle_from_string(): creates an RsvgHandle (from librsvg ) from the generated SVG string and sets a stylesheet on the handle, defining the colours of layout items and overlays.
    6. rsvg_handle_render_document(): renders the generated SVG onto a Cairo surface (say, an image surface, which is essentially an image buffer), via a Cairo context.

    Profiling

    To profile the rendering pipeline specifically, I wrote up a little program to fit my purposes, which can be found here (puzzle-render-profiler.c).

    The Attempt with Sysprof

    Initially, I used Sysprof, executing the profiler program under it. Unfortunately, because Sysprof is a sampling profiler, and probably also due to its system-wide nature, the results weren’t satisfactory. The functions of interest aren’t long-running, and each runs only once per execution, so the results weren’t accurate enough and were somewhat incomplete (many nested calls were missed).

    Don’t get me wrong, Sysprof has its strengths and stands strong amongst profilers of its kind. I tried a couple of others, and Sysprof is the only one even worthy of mention here. Most importantly, I’m glad I got to use and explore Sysprof. The next time I use it won’t be my first!

    Using Callgrind

    Callgrind + QCachegrind is sincerely such a godsend!

    Callgrind is a profiling tool that records the call history among functions in a program’s run as a call-graph. By default, the collected data consists of the number of instructions executed, their relationship to source lines, the caller/callee relationship between functions, and the numbers of such calls.

    QCachegrind is a GUI to visualise profiling data. It’s mainly used as a visualisation frontend for data measured by Cachegrind/Callgrind tools from the Valgrind package.

    After a short while of reading through Callgrind’s manual, I pieced together the combination of options (not much at all) that I needed for the task, and the rest is a story of exploration, learning and excitement. With the following command line, I was all set.

    valgrind --tool=callgrind --toggle-collect=render_puzzle ./profiler puzzles/puzzle-sets/cats-and-dogs/doghouse.ipuz

    where:

    • valgrind is the Valgrind executable.
    • --tool=callgrind selects the Callgrind tool to be run.
    • --toggle-collect=render_puzzle sets render_puzzle , the core function in the profiler program, as the target for Callgrind’s data collection.
    • ./profiler is a symlink to the executable of the profiler program.
    • puzzles/puzzle-sets/cats-and-dogs/doghouse.ipuz is a considerably large puzzle.

    I visualised Callgrind’s output in QCachegrind, which is pretty intuitive to use. My focus is the Call Graph feature, which helps to graphically and interactively analyse the profile data. The graph can also be exported as an image or DOT (a graph description format in the Graphviz language) file. The following is a top-level view of the result of the profile run.

    Top-level profile of the rendering pipeline

    Note that Callgrind measures the number of instructions executed, not execution time itself, but the former typically translates proportionally to the latter. The percentages shown in the graph are instruction ratios, i.e., the ratio of instructions executed within each function (and its callees) to the total number of instructions executed (within the portions of the program where data was collected).

    This graph shows that loading the generated SVG (svg_handle_from_string) takes up the highest percentage of time, followed by rendering the SVG (rsvg_handle_render_document). Note that the SVG is simply being rendered to an image buffer, so no encoding, compression, or IO is taking place. The call labelled with a hex address, instead of a name, simply calls g_object_unref, under which dropping the rsvg::document::Document (owned by the RsvgHandle) takes the highest percentage. Probing further into svg_handle_from_string and rsvg_handle_render_document:

    Profile of svg_handle_from_string
    Profile of rsvg_handle_render_document

    The performance of rsvg_handle_render_document improves very slightly after its first call within a program due to some one-time initialisation that occurs in the first call, but it is almost insignificant. From these results, it can be seen that the root of the problem is beyond the scope of Crosswords, as it is something to fix in librsvg. My mentors and I thought it would be quicker and easier to replace or eliminate these stages, but before making a final decision, we needed to see real timings first. For anyone who cares to see a colourful, humongous call graph, be my guest:

    Full (sort of) profile of the rendering pipeline

    Getting Real Timings

    The next step was to get real timings for puzzles of various kinds and sizes to make sense of Callgrind’s percentages, determine the relationship between puzzle sizes and render times, and know the maximum amount of time we can save by replacing or eliminating the SVG stages of the pipeline.

    To achieve this, I ran the profiler program over a considerably large collection of puzzles available to the GNOME Crosswords project and piped the output (it’s in CSV format) to a file, the sorted version of which can be found here (puzzle_render_profile-sorted_by_path.csv). Then, I wrote a Python script (visualize_results.py), which can also be found at the same URL, to plot a couple of graphs from the results.

    Time vs Puzzle size (width * height) (lesser is better)
    Time vs Number of layout elements (items + overlays) (lesser is better)

    Both show a very similar profile from which I made the following deductions:

    • Total render times increase almost linearly with puzzle size, and so do the component stages of the rendering pipeline. The exceptions (those steep valleys in the graphs) are puzzles with significant amounts of NULL cells, the corresponding layout items of which are omitted during SVG generation since NULL cells are not to be rendered.
    • The stages of loading the generated SVG and rendering it take most of the time, significantly more than the other stages, just as the Callgrind graphs above show.

    Below is a stacked bar chart showing the cumulative render times for all puzzles. Note that these timings are only valid relative to one another; comparison with timings from another machine or even the same machine under different conditions would be invalid.

    Cumulative render times for all puzzles (lesser is better)

    The Way Forward

    Thankfully, the maintainer of librsvg, Federico Mena Quintero , happens to be one of my mentors. He looked into the profiling results and ultimately recommended that we cut out the SVG stages ( svg_from_layout , svg_handle_from_string and rsvg_handle_render_document ) entirely and render using Cairo directly. For context, librsvg renders SVGs using Cairo. He also pointed out some specific sources of the librsvg bottlenecks and intends to fix them. The initial work in librsvg is in !1178 and the MR linked from there.

    This is actually no small feat, but it is already in the works (in fact more complete than the SVG rendering pipeline, at the time of writing) and is showing great promise! I’ll be writing about this soon, but here’s a little sneak peek to whet your appetite.

    Comparison of the new and old render pipeline timings with puzzles of varying sizes (lesser is better)
    Before/after renders of doghouse.ipuz and catatonic.ipuz

    I’ll leave you to figure it out in the meantime.

    Conclusion

    Every aspect of the task was such a great learning experience. This happened to be my first time profiling C code and using tools like Valgrind and Sysprof; the majority of C code I had written in the past was for bare-metal embedded systems. Now, I’ve got these under my belt, and nothing is taking them away.

    That said, I will greatly appreciate your comments, tips, questions, and what have you; be it about profiling, or anything else discussed herein, or even blog writing (this is only my second time ever, so I need all the help I can get).

    Finally, and by no means least important, a wise man once told me:

    The good thing about profiling is that it can overturn your expectations :-)

    — H.P. Jansson

    Thanks

    Very big thank yous to my mentors, Jonathan Blandford and Federico Mena Quintero , for their guidance all through the accomplishment of this task and my internship in general. I’ve learnt a great deal from them so far.

    Also, thank you (yes, you reading) so much for your time. Till next time… Anticipate!!!


      Sebastian Wick: Blender HDR and the reference white issue

      news.movim.eu / PlanetGnome • 13 July • 6 minutes

    The latest alpha of the upcoming Blender 5.0 release comes with High Dynamic Range (HDR) support for Linux on Wayland which will, if everything works out, make it into the final Blender 5.0 release on October 1, 2025. The post on the developer forum comes with instructions on how to enable the experimental support and how to test it.

    If you are using Fedora Workstation 42, which ships GNOME version 48, everything is already included to run Blender with HDR. All that is required is an HDR-compatible display and graphics driver, and turning on HDR in the Display Settings.

    It’s been a lot of personal blood, sweat and tears, paid for by Red Hat across the Linux graphics stack for the last few years to enable applications like Blender to add HDR support. From kernel work, like helping to get the HDR mode working on Intel laptops, and improving the Colorspace and HDR_OUTPUT_METADATA KMS properties, to creating a new library for EDID and DisplayID parsing, and helping with wiring things up in Vulkan.

    I designed the active color management paradigm for Wayland compositors, figured out how to properly support HDR, created two wayland protocols to let clients and compositors communicate the necessary information for active color management, and created documentation around all things color in FOSS graphics. This would have also been impossible without Pekka Paalanen from Collabora and all the other people I can’t possibly list exhaustively.

    For GNOME I implemented the new API design in mutter (the GNOME Shell compositor), and helped my colleagues to support HDR in GTK.

    Now that everything is shipping, applications are starting to make use of the new functionality. To see why Blender targeted Linux on Wayland, we will dig a bit into some of the details of HDR!

    HDR, Vulkan and the reference white level

    Blender’s HDR implementation relies on Vulkan’s VkColorSpaceKHR , which allows applications to specify the color space of their swap chain, enabling proper HDR rendering pipeline integration. The key color space in question is VK_COLOR_SPACE_HDR10_ST2084_EXT , which corresponds to the HDR10 standard using the ST.2084 (PQ) transfer function.

    However, there’s a critical challenge with this Vulkan color space definition: it has an undefined reference white level.

    Reference white indicates the luminance or signal level at which a diffuse white object (such as a sheet of paper, or the white parts of a UI) appears in an image. If images with different reference white levels end up at different signal levels in a composited image, the result is that “white” in one of the images is still perceived as white, while the “white” from the other image is now perceived as gray. If you have ever scrolled through Instagram on an iPhone or played an HDR game on Windows, you will probably have noticed this effect.

    The solution to this issue is called anchoring: the reference white levels of all images are normalized so that “white” ends up at the same signal level in the composited image.

    Another issue with reference white, specific to PQ, is the prevalent myth that the absolute luminance of a PQ signal must be replicated on the actual display a user is viewing the content on. PQ is a bit of a weird transfer characteristic because any given signal level corresponds to an absolute luminance with the unit cd/m² (also known as nit). However, the absolute luminance is only meaningful in the reference viewing environment! If an image is viewed in the reference viewing environment of ITU-R BT.2100 (essentially a dark room) and an image signal of 203 nits is shown at 203 nits on the display, the image appears as the artist intended. The same is not true when the same image is viewed on a phone with the summer sun blasting on the screen from behind.

    PQ is no different from other transfer characteristics in that the reference white level needs to be anchored, and that the anchoring point does not have to correspond to the luminance values that the image encodes.

    Coming back to the Vulkan color space definition: VK_COLOR_SPACE_HDR10_ST2084_EXT specifies the “HDR10 (BT2020) color space, encoded according to SMPTE ST2084 Perceptual Quantizer (PQ) specification”. Neither ITU-R BT.2020 (primary chromaticities), nor ST.2084 (transfer characteristics), nor the closely related ITU-R BT.2100 defines the reference white level. In practice, the reference level of 203 cd/m² from ITU-R BT.2408 (“Suggested guidance for operational practices in high dynamic range television production”) is used. Notably, however, this is not specified in the Vulkan definition of VK_COLOR_SPACE_HDR10_ST2084_EXT.

    The consequences of this? On almost all platforms, VK_COLOR_SPACE_HDR10_ST2084_EXT implicitly means that the image the application submits to the presentation engine (what we call the compositor in the Linux world) is assumed to have a reference white level of 203 cd/m², and the presentation engine adjusts the signal so that the reference white of the composited image ends up at a signal value appropriate for the actual viewing environment of the user. On GNOME, the way to control this is currently the “HDR Brightness” slider in the Display Settings, but it will become the regular screen brightness slider in the Quick Settings menu.

    On Windows, the misunderstanding that a PQ signal value must be replicated one-to-one on the actual display has been immortalized in the APIs. This decision was only revisited when HDR support was added to laptops, but by then changing the existing APIs was impossible. Their solution was to expose the reference white level in the Win32 API and task applications with continuously querying that level and adjusting the image to match it. Few applications actually do this, with most games providing a built-in slider instead.

    The reference white level of VK_COLOR_SPACE_HDR10_ST2084_EXT on Windows is essentially a continuously changing value that needs to be queried from Windows APIs outside of Vulkan.

    This has two implications:

    • It is impossible to write a cross-platform HDR application in Vulkan (if Windows is one of the targets)
    • On Wayland, a “standard” HDR signal can just be passed on to the compositor, while on Windows, more work is required

    While the cross-platform issue is solvable, and something we’re working on, the way Windows works also means that the cross-platform API might become harder to use because we cannot change the underlying Windows mechanisms.

    No Windows Support

    The result is that Blender currently does not support HDR on Windows.


    Jeroen-Bakker explaining the lack of Windows support

    The design of the Wayland color-management protocol, and the resulting active color-management paradigm of Wayland compositors was a good choice, making it easy for developers to do the right thing, while also giving them more control if they so chose.

    Looking forward

    We have managed to transition the compositor model from a dumb blitter to a component which takes an active part in color management, we have image viewers and video players with HDR support and now we have tools for producing HDR content! While it is extremely exciting for me that we have managed to do this properly, we also have a lot of work ahead of us, some of which I will hopefully tell you about in a future blog post!


      Philip Withnall: GUADEC handbook

      news.movim.eu / PlanetGnome • 12 July

    I was reminded today that I put together some notes last year with people’s feedback about what worked well at the last GUADEC. The idea was that this could be built on, and eventually become another part of the GNOME handbook, so that we have a good checklist to organise events from each year.

    I’m not organising GUADEC, so this is about as far as I can push this proto-handbook, but if anyone wants to fork it and build on it then please feel free.

    Most of the notes so far relate to A/V things, remote participation, and some climate considerations. Obviously a lot more about on-the-ground organisation would have to be added to make it a full handbook, but it’s a start.