
    • Adjusting One Line Of Linux Code Yields 5x Wakeup Latency Reduction For Modern Xeon CPUs

      news.movim.eu / Phoronix • 11:27

    A new patch posted to the Linux kernel mailing list aims to address the high wake-up latency experienced on modern Intel Xeon server platforms. With Sapphire Rapids and newer, "excessive" wakeup latencies under the Linux menu governor and a NOHZ_FULL configuration can hurt Xeon CPUs in latency-sensitive workloads, but a 16-line patch aims to improve the situation. That is, only one line of actual code changes, with the rest being code comments...
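    The patch itself modifies the kernel's menu cpuidle governor and is not reproduced here. As general background, the active cpuidle governor and the advertised exit latency of each idle state can be inspected from sysfs. A minimal sketch in Python, assuming a Linux system exposing the standard cpuidle sysfs files:

        # Background illustration only, not the patch: read the active cpuidle
        # governor and each idle state's advertised exit latency for one CPU.
        from pathlib import Path

        CPU_SYSFS = Path("/sys/devices/system/cpu")

        def current_governor() -> str:
            # current_governor_ro reports the cpuidle governor currently in use
            return (CPU_SYSFS / "cpuidle" / "current_governor_ro").read_text().strip()

        def idle_state_latencies(cpu: int = 0) -> list[tuple[str, int]]:
            states = sorted((CPU_SYSFS / f"cpu{cpu}" / "cpuidle").glob("state*"))
            return [(s.joinpath("name").read_text().strip(),
                     int(s.joinpath("latency").read_text()))  # exit latency, microseconds
                    for s in states]

        if __name__ == "__main__":
            print("cpuidle governor:", current_governor())
            for name, lat_us in idle_state_latencies():
                print(f"{name}: exit latency {lat_us} us")

    On an affected Xeon this shows the exit latencies of the idle states the menu governor is choosing between.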
    • New Patches Aim To Make x86 Linux EFI Stub & Relocatable Kernel Support Unconditional

      news.movim.eu / Phoronix • 10:55

    Prominent Intel Linux engineer H. Peter Anvin has posted a new patch series working to clean up the Linux x86/x86_64 kernel boot code. Besides cleaning up the code, the kernel configuration would drop the options around EFI stub mode and relocatable kernels, making those features always enabled...
    • PHPStan Now 25~40% Faster For Static Analysis

      news.movim.eu / Phoronix • 10:40

    For those using the powerful PHPStan tool for static analysis of PHP code, this week's PHPStan 2.1.34 release brings optimized performance, with projects seeing around 25% to 40% faster analysis times...
    • An Exciting Day With More Performance Optimizations Merged For RADV In Mesa 26.0

      news.movim.eu / Phoronix • 01:09

    Mesa 26.0 was due to be branched last week, in turn starting its feature freeze, but that was pushed back to tomorrow (21 January) to allow some lingering features to land. The delay has been beneficial for the Radeon Vulkan driver "RADV", with several interesting merge requests having landed in time for Mesa 26.0...
    • New Linux Patch Improves NVMe Performance +15% With CPU Cluster-Aware Handling

      news.movim.eu / Phoronix • Yesterday - 22:51

    Intel Linux engineers have been working on enhancing NVMe storage performance on today's high-core-count processors. In situations where multiple CPUs end up sharing the same NVMe IRQ(s), performance penalties can arise if the IRQ affinity and the CPU's cluster do not align. A pending patch addresses this situation and was reported to deliver a 15% performance improvement...
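    As an illustration of the mismatch being described (an inspection sketch, not the kernel patch), whether an NVMe IRQ's affinity stays within a single CPU cluster can be checked from /proc/interrupts, /proc/irq/<n>/smp_affinity_list, and the cluster_cpus_list topology files exposed by recent kernels:

        # Illustration only: report whether each nvme IRQ's CPU affinity
        # falls within one CPU cluster, using /proc and sysfs topology files.
        from pathlib import Path

        def parse_cpu_list(text: str) -> set[int]:
            # Parse "0-3,8,10-11" style CPU lists.
            cpus: set[int] = set()
            for part in text.strip().split(","):
                if not part:
                    continue
                if "-" in part:
                    lo, hi = part.split("-")
                    cpus.update(range(int(lo), int(hi) + 1))
                else:
                    cpus.add(int(part))
            return cpus

        def nvme_irqs() -> dict[int, str]:
            irqs = {}
            for line in Path("/proc/interrupts").read_text().splitlines():
                if "nvme" in line:
                    irqs[int(line.split(":")[0])] = line.split()[-1]
            return irqs

        def cluster_of(cpu: int) -> set[int]:
            topo = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology/cluster_cpus_list")
            return parse_cpu_list(topo.read_text())

        for irq, name in nvme_irqs().items():
            affinity = parse_cpu_list(Path(f"/proc/irq/{irq}/smp_affinity_list").read_text())
            aligned = affinity <= cluster_of(next(iter(affinity)))
            print(f"IRQ {irq} ({name}): CPUs {sorted(affinity)}",
                  "within one cluster" if aligned else "spans multiple clusters")

    IRQs whose affinity spans clusters correspond to the misalignment described above.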
    • Linux 6.19 ATA Fixes Address Power Management Regression For The Past Year

      news.movim.eu / Phoronix • Yesterday - 20:15

    It's rare these days for the ATA subsystem updates in the Linux kernel to contain anything really noteworthy. But today some important fixes were merged for the ATA code to deal with a reported power management regression that has affected Linux kernel releases over the past year. ATAPI devices with dummy ports weren't hitting their low-power state, in turn preventing the CPU from reaching low-power C-states, but thankfully that is now resolved with this code...
    • System76 Continues Driving More Improvements Into The COSMIC Desktop

      news.movim.eu / Phoronix • Yesterday - 19:13

    Following the December launch of Pop!_OS 24.04 LTS and the first major COSMIC desktop release, System76 software engineers have continued making improvements to their Rust-based desktop environment...
    • AMD Making It Easier To Install vLLM For ROCm

      news.movim.eu / Phoronix • Yesterday - 18:01

    Deploying vLLM for LLM inference and serving on NVIDIA hardware can be as easy as pip3 install vllm. It's beautifully simple, just as many AI/LLM Python libraries deploy straight away and typically "just work" on NVIDIA. Running vLLM atop AMD Radeon/Instinct hardware, though, has traditionally meant either compiling vLLM from source yourself or following AMD's recommended approach of using Docker containers with pre-built versions of vLLM. There is now finally a blessed Python wheel making it easier to install vLLM with ROCm and without Docker...
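    Once installed, vLLM's offline inference API is the same whether a CUDA or ROCm build is in use (the exact pip command and index for AMD's ROCm wheel come from AMD's announcement and are deliberately not guessed at here). A minimal sketch using vLLM's documented offline API, with the model name chosen purely as an example:

        # Minimal vLLM offline-inference example; the model and sampling values
        # are placeholders, not anything specific to the ROCm wheel.
        from vllm import LLM, SamplingParams

        prompts = ["The open-source GPU compute stack on Linux is"]
        sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

        llm = LLM(model="facebook/opt-125m")  # any supported model works here
        for output in llm.generate(prompts, sampling):
            print(output.prompt, "->", output.outputs[0].text)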
    • LLVM Adopts "Human In The Loop" Policy For AI/Tool-Assisted Contributions

      news.movim.eu / Phoronix • Yesterday - 17:13

    Following recent discussions over AI contributions to the LLVM open-source compiler project, an agreement has been reached to allow AI/tool-assisted contributions, provided a human first looks over the code before opening any pull request or the like. Strictly AI-driven contributions without any human vetting will not be permitted...