
      Mathieu Pasquet: Poezio 0.16.1

      news.movim.eu / PlanetJabber • 8 hours ago

    Here is 0.16.1, a bugfix and cleanup release for poezio, with exactly one new feature as an improvement over an existing plugin.

    Poezio is a terminal-based XMPP client which aims to replicate the feeling of terminal-based IRC clients such as irssi or weechat; to this end, poezio originally only supported multi-user chats and anonymous authentication.

    Features

    • Handle redacted/moderated messages in /display_corrections as well

    Fixes

    • A bug in occupant comparison introduced in 0.16, leading to inconsistent MUC state in some cases.
    • Several fixes to MAM syncing and history gaps in MUC, which should be more reliable (but not perfect yet).
    • Issues around MUC self-ping (XEP-0410), where you could never leave a room if it was enabled.
    • A bug in the new tcp-reconnect plugin that raised a traceback when disconnected.
    • Loading issues with plugins defined in entry points, of which the most common is poezio-omemo.

    Internal

    • Introduction of a harsher linting pipeline.
    • Plenty of typing and linting fixes.
    • Added poezio-omemo to the plugins list in pyproject, for easier installation.

      Erlang Solutions: What Breaks First in Real-Time Messaging?

      news.movim.eu / PlanetJabber • 15 hours ago • 6 minutes

    Real-time messaging sits at the centre of many modern digital products. From live chat and streaming reactions to betting platforms and collaborative tools, users expect communication to feel immediate and consistent.

    When pressure builds, the system doesn’t typically collapse. It starts to drift. Messages arrive slightly late, ordering becomes inconsistent, and the platform feels less reliable. That drift is often the first signal that chat scalability is under strain.

    Imagine a live sports final going into overtime. Millions of viewers react at once. Messages stack up, connections remain open, and activity intensifies. For a moment, everything appears stable. Then delivery slows. Reactions fall slightly out of sync. Some users refresh, unsure whether the delay is on their side or yours.

    Those moments reveal whether the system was designed with fault tolerance in mind. If it was, the platform degrades predictably and recovers. If it wasn’t, small issues escalate quickly.

    This article explores what breaks first in real-time messaging and how early architectural decisions determine whether users experience resilience or visible strain.

    Real-Time Messaging in Live Products

    Live products place real-time messaging under immediate scrutiny. On sports platforms, streaming services, online games, and live commerce sites, messaging is visible while it happens. When traffic spikes, performance issues are exposed immediately.

    User expectations are already set. WhatsApp reports more than 2 billion monthly active users, shaping what instant communication feels like in practice. That expectation carries into every live experience, whether it is chat, reactions, or collaborative interaction.


    Source: Statista

    Live environments concentrate demand rather than distribute it evenly. Traffic clusters around specific moments. Concurrency can double within minutes, and those users remain connected while message volume increases sharply. That concentration exposes limits in chat scalability far more quickly than steady growth ever would.

    The operational impact tends to follow a familiar pattern:

    Live scenario       | System pressure                  | Business impact
    Sports final        | Sudden surge in concurrent users | Latency becomes public
    Product launch      | Burst of new sessions            | Onboarding friction
    Viral stream moment | Rapid fan-out across channels    | Inconsistent experience
    Regional spike      | Localised traffic surge          | Infrastructure imbalance

    For live platforms, volatility comes with the territory.

    When delivery slows or behaviour becomes inconsistent, engagement drops first. Retention follows. Users rarely blame infrastructure. They blame the platform.

    Designing for unpredictable load requires architecture that assumes spikes are normal and isolates failure when it occurs. If you operate a live platform, that discipline determines whether users experience seamless interaction or visible strain.

    High Concurrency and Chat Scalability

    In live environments, real-time messaging operates under sustained concurrency rather than occasional bursts. Users remain connected, they interact continuously, and activity compounds when shared moments occur.

    High concurrency is not simply about having many users online. It involves managing thousands, sometimes millions, of persistent connections sending and receiving messages at the same time. Every open connection consumes resources, and messages may need to be delivered to large groups of active participants without delay.

    This is where chat scalability really gets tested.

    In steady conditions, most systems appear stable. When demand synchronises, message fan-out increases rapidly, routing paths multiply, and coordination overhead grows. Small inefficiencies that were invisible during testing begin to surface. Response times drift. Ordering becomes inconsistent. Queues expand before alerts signal a problem.
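    The fan-out effect described above can be made concrete with some back-of-envelope arithmetic. The numbers below are illustrative, not from the article: the point is that delivery work grows with senders multiplied by room size, so synchronised activity multiplies load far faster than user counts alone suggest.

    ```python
    def deliveries_per_second(rooms: int, occupants_per_room: int,
                              messages_per_occupant_per_sec: float) -> float:
        """Total downstream deliveries the messaging layer must perform each second."""
        senders = rooms * occupants_per_room
        messages_in = senders * messages_per_occupant_per_sec
        # Each incoming message fans out to the other occupants of its room.
        return messages_in * (occupants_per_room - 1)

    # Steady state: 1,000 rooms of 50 users, one message per user every 10 s.
    steady = deliveries_per_second(1_000, 50, 0.1)

    # A shared moment: same rooms, but every user reacts once per second.
    spike = deliveries_per_second(1_000, 50, 1.0)

    print(f"steady: {steady:,.0f} deliveries/s, spike: {spike:,.0f} deliveries/s")
    ```

    A 10x rise in per-user activity, with no new users at all, produces a 10x rise in deliveries — which is why concurrency that "doubles within minutes" stresses the fan-out path long before connection counts become the problem.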

    High concurrency does not introduce entirely new issues. It reveals architectural assumptions that only become visible at scale. Concurrency increases are predictable in live systems. The risk lies in whether the messaging layer can sustain that pressure without affecting user experience.

    Messaging Architecture Limits

    The pressure created by high concurrency does not stay abstract for long. It shows up in the messaging architecture.

    When performance degrades under load, the root cause usually sits there. At scale, every message must be routed, processed, and delivered to the correct subscribers. In distributed systems, that requires coordination across servers, and coordination carries cost. Under sustained traffic, small inefficiencies compound quickly.

    Routing layers can become bottlenecks when messages must propagate across multiple nodes. Queues expand when incoming traffic outpaces processing capacity. Latency increases as backlogs grow. If state drifts between nodes, messages may arrive late or appear out of sequence.

    This is where the earlier discussion of chat scalability becomes tangible. It is not only about supporting more users. It is about how efficiently the architecture distributes load and maintains consistency when concurrency remains elevated.

    These limits rarely appear during controlled testing with predictable traffic. They emerge under real usage, where concurrency is sustained and message patterns are uneven.

    Well-designed systems account for this from the outset. They reduce coordination overhead, isolate failure domains, and scale horizontally without introducing fragility. When they do not, performance drift becomes visible long before a full outage occurs, and users feel the impact immediately.

    Fault Tolerance and Scaling

    If you operate a live platform, this is where design choices become visible.

    Once architectural limits are exposed, the question is how your system behaves as demand continues to rise.

    Scaling real-time messaging is about making sure that when components falter, the impact is contained. Distributed systems are built on a simple assumption: things break. You will see restarts, reroutes and unstable network conditions. But the real test is whether your architecture absorbs the shock or amplifies it.

    Systems built with fault isolation in mind tend to recover locally. Load shifts across nodes. Individual components stabilise without affecting the wider service. Systems built around central coordination points are more vulnerable to ripple effects.

    In practical terms, the difference shows up as:

    • Localised disruption rather than cascading instability
    • Brief slowdown instead of prolonged degradation
    • Controlled recovery rather than platform-wide interruption

    These behaviours define whether users experience resilience or instability.

    Fault tolerance determines how the system behaves when conditions are at their most demanding.

    Real-Time Messaging in Entertainment

    Entertainment platforms expose weaknesses in real-time messaging quickly because traffic converges rather than building steadily over time.

    When a live event captures attention, users respond together. Demand rises sharply within a short window, and those users remain connected while interaction increases. The stress on the system comes not from gradual growth, but from concentrated activity.

    Take the widespread Cloudflare outage in November 2025. Because Cloudflare is a core infrastructure provider handling a significant share of global internet traffic, its disruption affected major platforms simultaneously. The issue originated in underlying infrastructure, but the impact was immediate and highly visible because so many users were active at once.

    Live gaming environments operate under comparable traffic patterns by design. During competitive matches on FACEIT, large numbers of players remain connected while scores, rankings, and in-game events update continuously. Activity intensifies around key moments, increasing message throughput while persistent connections stay open.


    Across these environments, the pattern is consistent. Users connect simultaneously, interact continuously, and expect immediate feedback. When performance begins to drift, the impact is shared rather than isolated.

    A Note on Architecture

    This is where architectural choices begin to matter.

    Platforms that manage sustained concurrency and recover predictably under pressure tend to share certain structural characteristics. In messaging environments, MongooseIM is one example of infrastructure designed around those principles.

    In practical terms, that means:

    • Supporting large numbers of persistent connections without central bottlenecks
    • Distributing load across nodes to reduce coordination overhead
    • Containing failure within defined boundaries rather than allowing it to cascade
    • Maintaining message consistency even when traffic intensifies

    These design choices do not eliminate volatility. They determine how the system behaves when volatility occurs.

    In live entertainment platforms, that distinction shapes whether pressure remains internal or becomes visible to users.

    Conclusion

    Real-time messaging raises expectations that are easy to meet under steady conditions and far harder to sustain when attention converges.

    What breaks first is rarely availability. It is timing. It is the subtle drift in delivery and consistency that users notice before any dashboard signals a failure.

    Live environments make that visible because traffic arrives together and interaction compounds quickly. Concurrency is not the exception. It is the operating model. Whether the experience holds depends on how the architecture distributes load and contains failure.

    Designing for that reality early makes scaling more predictable and reduces the risk of visible strain later. If you are building or modernising a platform where real-time interaction matters, assess whether your messaging architecture is prepared for sustained concurrency. Get in touch to continue the conversation.

    The post What Breaks First in Real-Time Messaging? appeared first on Erlang Solutions .


      ProcessOne: Fluux Messenger 0.15: Search, Themes, Polls, and a Whole Lot More

      news.movim.eu / PlanetJabber • 19 hours ago • 3 minutes


    We have shipped Fluux Messenger 0.15, and it is packed with major advancements. After the excitement of the FOSDEM launch, the team put their heads down on three things users were asking for most: the ability to find past messages, a messenger that looks and feels different from Discord, and richer collaboration tools for group rooms.

    Find anything, instantly

    Full-text search is now built into Fluux Messenger: every conversation and every room is searchable, locally and instantly.

    The search engine runs entirely on your device using an IndexedDB inverted index with prefix matching and highlighted result snippets. No messages leave your machine to power this. For rooms with deep history on the server, results are supplemented with MAM (Message Archive Management) server-side queries, giving you the best of both worlds: speed from local cache, completeness from the archive.
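    The core of such a search engine is an inverted index mapping tokens to the messages that contain them, with prefix matching so partial words still find results. Here is a conceptual sketch of that structure — in Python for brevity, whereas Fluux builds it on IndexedDB in the browser; class and method names are invented for illustration, and a real client would merge these local hits with the server-side MAM results the post describes.

    ```python
    from collections import defaultdict

    class InvertedIndex:
        def __init__(self):
            self.postings = defaultdict(set)   # token -> {message ids}
            self.messages = {}                 # message id -> original text

        def add(self, msg_id: int, text: str) -> None:
            self.messages[msg_id] = text
            for token in text.lower().split():
                self.postings[token].add(msg_id)

        def search_prefix(self, prefix: str) -> list[int]:
            """Return ids of messages containing any token starting with `prefix`."""
            prefix = prefix.lower()
            hits = set()
            for token, ids in self.postings.items():
                if token.startswith(prefix):
                    hits |= ids
            return sorted(hits)

    idx = InvertedIndex()
    idx.add(1, "Meeting moved to Thursday")
    idx.add(2, "Thu works for me")
    idx.add(3, "See you at the meetup")

    print(idx.search_prefix("thu"))   # matches "thursday" and "thu" -> [1, 2]
    ```

    Because the index lives entirely on the client, queries never touch the network — which is what makes the "no messages leave your machine" guarantee possible.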

    A find-in-page mode (Cmd+F) lets you scan within any conversation, mirroring the browser experience developers expect.


    Make it yours: a full theme system

    Fluux Messenger now ships with a proper three-tier design token system (Foundation → Semantic → Component) and a theme picker bundled in the Appearance settings.

    Twelve themes are available out of the box: Fluux, Dracula, Nord, Gruvbox, Catppuccin Mocha, Solarized, One Dark, Tokyo Night, Monokai, Rosé Pine, Kanagawa, and GitHub. If none of them are quite right, you can import your own theme or inject CSS snippets directly. A global accent color picker with theme-specific presets lets you go further without writing a single line of code.
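    The three-tier token idea is what makes twelve themes (plus imports) tractable: components never reference raw colors, only semantic roles, which in turn point at foundation values a theme can swap wholesale. A minimal sketch of that resolution chain — token names and values here are invented for illustration, not Fluux's actual tokens:

    ```python
    foundation = {
        "purple.500": "#7b5cd6",
        "gray.900": "#16161e",
        "gray.100": "#e6e6ef",
    }

    semantic = {
        "color.accent": "purple.500",   # semantic tokens point at foundation tokens
        "color.bg": "gray.900",
        "color.text": "gray.100",
    }

    component = {
        "button.background": "color.accent",  # components point at semantic tokens
        "button.label": "color.text",
    }

    def resolve(token: str) -> str:
        """Follow a token through the tiers until a raw value is reached."""
        for tier in (component, semantic, foundation):
            if token in tier:
                token = tier[token]
        return token

    print(resolve("button.background"))  # -> #7b5cd6
    ```

    Switching themes then means replacing only the foundation dictionary; a global accent override only has to rewrite one semantic entry.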

    Font size adjustment is now in Appearance settings as well.


    Polls in group rooms

    Reaction-based polls are now a first-class feature in MUC rooms. Create a poll with custom emojis, set a deadline, and let participants vote. The room enforces voting rules server-side, so results are trustworthy. Polls can be closed and reopened, and an "unanswered" banner nudges participants who haven't voted yet. Results are visualized inline and captured in the activity log.
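    The voting rules described — one current vote per occupant, a deadline, and rejection of invalid votes — can be sketched as follows. This is an illustration of the semantics, not Fluux's server-side implementation; all names are hypothetical.

    ```python
    from datetime import datetime, timedelta

    class Poll:
        def __init__(self, options: list[str], deadline: datetime):
            self.options = options
            self.deadline = deadline
            self.votes: dict[str, str] = {}   # voter nick -> chosen option
            self.closed = False

        def vote(self, nick: str, option: str, now: datetime) -> bool:
            # The room rejects votes on closed polls, past the deadline,
            # or for unknown options.
            if self.closed or now > self.deadline or option not in self.options:
                return False
            self.votes[nick] = option          # re-voting replaces the earlier choice
            return True

        def tally(self) -> dict[str, int]:
            return {opt: sum(1 for v in self.votes.values() if v == opt)
                    for opt in self.options}

    start = datetime(2026, 3, 30, 12, 0)
    poll = Poll(["👍", "👎"], deadline=start + timedelta(hours=1))
    poll.vote("alice", "👍", start)
    poll.vote("bob", "👎", start)
    poll.vote("bob", "👍", start)                         # bob changes his mind
    poll.vote("carol", "👍", start + timedelta(hours=2))  # too late, rejected
    print(poll.tally())  # -> {'👍': 2, '👎': 0}
    ```

    Enforcing these rules in the room, rather than in each client, is what makes the results trustworthy: a modified client cannot double-vote or vote after the deadline.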


    Faster connections with FAST authentication

    Fluux Messenger now supports XEP-0484: FAST (Fast Authentication Streamlining Tokens) alongside SASL2. Reconnection after a network interruption or wake from sleep is now possible in the web version.

    Media cache

    Downloaded images are now cached to the filesystem, so they load instantly on revisit and don't count against your bandwidth twice. A new storage management screen in settings lets you see and clear the cache.
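    A filesystem media cache of this kind commonly keys files by a hash of their URL: hit the disk first, and only fetch over the network on a miss. A minimal sketch of that pattern — the class and method names are illustrative, not Fluux's actual code:

    ```python
    import hashlib
    import tempfile
    from pathlib import Path

    class MediaCache:
        def __init__(self, root: Path):
            self.root = root
            self.root.mkdir(parents=True, exist_ok=True)

        def _path(self, url: str) -> Path:
            # Hashing the URL gives a safe, collision-resistant filename.
            return self.root / hashlib.sha256(url.encode()).hexdigest()

        def get(self, url: str, fetch) -> bytes:
            path = self._path(url)
            if path.exists():                 # cache hit: no network, no re-download
                return path.read_bytes()
            data = fetch(url)                 # cache miss: download and persist
            path.write_bytes(data)
            return data

        def clear(self) -> None:
            for f in self.root.iterdir():     # "clear cache" from the settings screen
                f.unlink()

    cache = MediaCache(Path(tempfile.mkdtemp()) / "media")
    calls = []
    fetch = lambda url: calls.append(url) or b"image-bytes"
    cache.get("https://example.com/a.png", fetch)
    cache.get("https://example.com/a.png", fetch)   # second call served from disk
    print(len(calls))  # -> 1
    ```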

    Polish and fixes worth noting

    • Emoji picker (emoji-mart) with dynamic viewport positioning, so it never clips off-screen
    • Last Activity (XEP-0012) — offline contacts now show how long ago they were last seen
    • Syntax highlighting for code blocks, lazily loaded per language, with theme integration and a fullscreen modal for long snippets
    • IRC-style mention detection with consistent per-user colors (XEP-0392)
    • VCard popovers on occupant and member list nicks
    • Scroll-to-bottom button now shows an unread badge and implements two-step scroll: first click jumps to the new message marker, second click goes to the very bottom
    • Particle burst animation when adding a reaction, and a message send slide-up animation
    • Upgraded to React 19 with the React Compiler for automatic memoization, and Vite 8 with lazy-loaded views for a faster startup

    A note on Windows signing

    Starting with 0.15, the Windows binary is no longer signed. This is a temporary decision while we work through the signing infrastructure — full details are in issue #290. On install, Windows will prompt you to manually trust the application. We know this isn't ideal and are working to restore signed builds.


    Get it

    Fluux Messenger 0.15 is available now. Download it here or check the full changelog on GitHub for every detail.

    As always, feedback and bug reports are welcome. The community is the best part of building open-standard messaging software.


      Ignite Realtime Blog: Experimenting with MariaDB, Firebird and CockroachDB Support in Openfire

      news.movim.eu / PlanetJabber • 1 day ago • 1 minute

    I have recently started experimenting with adding support for three additional databases in Openfire: MariaDB, Firebird and CockroachDB.

    This work is still exploratory. Before committing to this direction, I would like to get a better understanding of whether this is actually valuable to the Openfire community.

    I have prepared initial pull requests for each database:

    These are not production-ready, but intended to validate feasibility and surface any obvious issues.

    Why these databases?

    MariaDB is widely used as a drop-in replacement for MySQL. Although Openfire supports MySQL, MariaDB is not explicitly treated as a first-class option. Given how often it is used in practice, formal support could provide more confidence for administrators.

    Firebird represents a more niche but still relevant ecosystem. It is commonly found in long-lived, on-premise systems where changing the database is not realistic. Supporting it could make Openfire easier to adopt in those environments.

    CockroachDB targets modern, distributed deployments. With its PostgreSQL compatibility and focus on resilience and scalability, it could make Openfire more attractive for cloud-native and multi-region setups.

    Trade-offs

    Supporting additional databases comes with a cost: more code paths, more testing, and more long-term maintenance. The key question is whether the added flexibility justifies that complexity.

    Feedback wanted

    Before taking this further, I would really appreciate feedback from the community:

    • Are you using (or considering) MariaDB, Firebird or CockroachDB with Openfire?
    • Would official support influence your deployment decisions?
    • Do you see this as valuable, or as unnecessary complexity?

    Please share your thoughts on the pull requests or through the usual community channels!

    For other release announcements and news follow us on Mastodon or X



      Erlang Solutions: Avoiding Platform Lock-In in Regulated Environments

      news.movim.eu / PlanetJabber • 27 March 2026 • 6 minutes

    Platform lock-in is often discussed as a commercial issue. Organisations adopt infrastructure that works well initially and later realise that moving away from those services becomes expensive or operationally disruptive.

    For platforms that run continuously under heavy demand, the consequences appear somewhere else first. They appear in architecture.

    Infrastructure choices influence how systems scale, how faults are contained, and how easily the platform can evolve as requirements change. In regulated environments those decisions often remain in place for years, which means architectural flexibility matters as much as technical capability.

    When infrastructure becomes tightly coupled to a particular provider, systems may still perform well day to day. The real impact usually surfaces later when workloads grow, regulations change, or operational expectations increase. At that point platform lock-in risks begin to affect reliability as well as flexibility.

    Why Platform Lock-in Matters in Regulated Environments

    These architectural constraints become particularly visible in regulated industries where infrastructure decisions cannot be changed casually.

    Financial services platforms must maintain traceable transactions and strict audit trails. Betting platforms process large volumes of activity during live sporting events. Streaming platforms deliver real-time content to global audiences who expect uninterrupted interaction.

    Systems supporting these environments often remain active for long periods, which means infrastructure decisions made early in the system’s lifecycle can shape how the platform changes years later.

    The Operational Impact of Vendor Lock-in

    Many organisations already recognise the risks associated with vendor lock-in. The 2024 Flexera State of the Cloud Report found that 89% of organisations now operate multi-cloud strategies, with reduced infrastructure dependency and avoidance of vendor concentration cited as key motivations.

    The concern goes beyond procurement strategy. When platforms rely heavily on provider-specific services for messaging, orchestration, or event processing, those dependencies begin shaping how the system behaves under load.

    In regulated environments that dependency can become a reliability concern. Infrastructure decisions that once simplified development may later restrict how systems scale, evolve, or respond to operational change.

    Distributed Systems Architecture and Long-Running Platforms

    The reason platform lock-in becomes particularly serious in regulated environments is tied to how many of these platforms operate: as long-running distributed systems.

    Large-scale entertainment services rarely behave like short-lived workloads that restart frequently. Messaging layers, real-time interaction systems, and event pipelines maintain persistent connections while processing continuous streams of activity.

    Why Long-Running Systems Behave Differently

    Gaming platforms illustrate this clearly. Competitive environments host thousands of players interacting simultaneously, all of whom expect consistent state across the system. Betting platforms experience similar behaviour during major sporting events when users react instantly to changing odds. Streaming platforms see comparable spikes as audiences interact during live broadcasts.

    These platforms rely on distributed systems architecture that must coordinate large numbers of connections and events while remaining continuously available.

    Research published by ACM Queue examining large-scale distributed systems highlights how persistent connections and real-time workloads increase coordination pressure across system components, particularly during sudden spikes in concurrency.

    When coordination layers rely heavily on platform-specific services, architectural dependency gradually builds. Over time the system begins to inherit those infrastructure constraints.

    Reliability Requirements in High Reliability Systems

    Systems operating under these conditions often prioritise stability over rapid iteration. Platforms designed as high reliability systems must remain available while managing constant traffic, evolving workloads, and unpredictable user behaviour.

    Infrastructure decisions therefore have long-term consequences. When coordination, messaging, or state management rely on proprietary platform services, architectural flexibility narrows over time.

    Why Gaming, Betting and Streaming Platforms Reveal Infrastructure Limits

    Systems built as long-running distributed environments face their toughest tests during moments of concentrated demand. Entertainment platforms provide a clear example.

    Large audiences often react simultaneously. A football match entering extra time can trigger thousands of betting transactions within seconds. A major esports tournament can bring large numbers of players online at once. Streaming platforms experience bursts of interaction as viewers respond together during live broadcasts.

    Traffic Spikes and Scalable Distributed Systems

    Systems supporting these environments must function as scalable distributed systems capable of handling sudden increases in activity without losing consistency or responsiveness.

    Instead of steady growth, activity often arrives in waves. Large numbers of users connect, interact, and generate events within very short timeframes. The system must coordinate these interactions across multiple nodes while maintaining reliable communication between services.

    Infrastructure that appears sufficient under normal conditions can struggle during these spikes if the surrounding architecture relies too heavily on provider-specific services.

    Real-World Example: BET Software

    These architectural pressures are particularly visible in betting platforms where activity surges during live sporting events.

    BET Software operates large-scale betting technology platforms where thousands of users interact with markets simultaneously. During major sporting events systems must process rapid updates, recalculate market information, and distribute new data to users in real time.


    Their distributed systems illustrate how reliability and responsiveness become essential in environments where activity concentrates around shared moments.

    Architectures designed with flexibility across infrastructure layers tend to scale and recover more predictably than those tightly coupled to provider-specific services.

    Architectural Patterns to Avoid Vendor Lock-in

    Recognising the risks of vendor lock-in is useful only if it leads to better architectural decisions. Systems that remain adaptable across infrastructure layers often share several structural characteristics.

    Decoupling Infrastructure Dependencies

    Architectures designed to avoid vendor lock-in typically separate application logic from infrastructure services wherever possible. This allows teams to evolve system components independently without redesigning the entire system.

    Designing Fault Tolerant Systems

    Platforms that must operate continuously also benefit from architectures designed as fault tolerant systems, where failures can be contained locally rather than cascading across the entire platform.

    Common patterns include:

    • Decoupled services that scale independently
    • Communication through open protocols rather than proprietary messaging layers
    • Distributed state management instead of provider-specific coordination services
    • Horizontal scaling across nodes
    • Infrastructure abstraction layers separating application logic from provider-specific implementations

    These patterns help ensure that infrastructure choices support the system rather than define its limitations.
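    One concrete instance of the "contain failure locally" pattern is a circuit breaker: after repeated failures, calls to a struggling dependency are cut off and fail fast instead of queueing behind it. A minimal sketch, with thresholds and names invented for illustration:

    ```python
    class CircuitBreaker:
        def __init__(self, max_failures: int = 3):
            self.max_failures = max_failures
            self.failures = 0

        @property
        def open(self) -> bool:
            return self.failures >= self.max_failures

        def call(self, fn, fallback):
            if self.open:
                return fallback()          # fail fast: no pressure on the broken part
            try:
                result = fn()
                self.failures = 0          # success resets the breaker
                return result
            except Exception:
                self.failures += 1
                return fallback()

    breaker = CircuitBreaker(max_failures=2)
    flaky = lambda: (_ for _ in ()).throw(RuntimeError("node down"))
    degraded = lambda: "cached-response"

    results = [breaker.call(flaky, degraded) for _ in range(4)]
    print(results, breaker.open)  # four degraded responses; breaker opens after two failures
    ```

    Users see a degraded but predictable response rather than a cascading stall — the "localised disruption rather than cascading instability" behaviour described earlier in this piece.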

    Where Elixir Supports High Reliability Systems

    Technology choices also influence how easily distributed systems can maintain reliability while remaining adaptable.

    Languages built on the Erlang virtual machine, including Elixir, were designed for environments where systems must remain available while handling large numbers of concurrent processes. The runtime emphasises process isolation and supervision structures that allow failures to be contained locally rather than cascading across the system.

    Building Fault Tolerant Systems for Long-Running Platforms

    These characteristics make the platform particularly well suited for high reliability systems that must remain active while managing heavy concurrency.

    The advantage lies in the runtime model rather than any single infrastructure provider. Systems built around resilient distributed behaviour are easier to evolve because they remain stable even as infrastructure decisions change around them.

    Designing Systems That Reduce Platform Lock-in

    Looking across these examples reveals a consistent pattern.

    Platform lock-in becomes most visible in systems that must operate continuously while adapting to changing demand. Regulated environments amplify the challenge because infrastructure decisions often remain in place for years while platforms continue to evolve.

    Gaming, betting, and streaming services make these limits easier to see. Sudden spikes in activity quickly expose architectural weaknesses, and systems designed with flexible infrastructure tend to scale and recover more predictably.

    If you are building platforms where reliability and long-running distributed workloads matter, it may be worth assessing how your architecture handles platform lock-in. To explore these challenges further, get in touch with the Erlang Solutions team.

    The post Avoiding Platform Lock-In in Regulated Environments appeared first on Erlang Solutions .


      ProcessOne: ejabberd 26.03

      news.movim.eu / PlanetJabber • 25 March 2026 • 4 minutes

    ejabberd 26.03

    If you are upgrading from a previous version, note that there is a change in the SQL schemas; please read below. There are no changes in configuration, API commands or hooks.

    Changes in SQL schema

    This release adds a new column to the rosterusers table in the SQL database schemas to support roster pre-approval. This task is performed automatically by ejabberd by default.

    However, if your configuration file disables the update_sql_schema top-level option, you must perform the SQL schema update manually. The statement below works as-is on MySQL, both default and new schemas; note that the trailing AFTER subscription clause is MySQL-specific column-placement syntax, so omit it on PostgreSQL and SQLite:

    ALTER TABLE rosterusers ADD COLUMN approved boolean NOT NULL AFTER subscription;
    

    SASL channel binding changes

    This version adds the ability to configure the handling of the client flag 'wanted to use channel-bindings but was not offered one'. By default, ejabberd aborts connections that present this flag, as it could indicate a rogue MITM proxy between the server and the client that strips the channel-binding information from the exchange.

    This can cause problems for servers that use a proxy server which terminates the TLS connection (i.e. there is a MITM proxy, but it is approved by the server administrator). To handle this situation, we have added code to ignore this flag if the server administrator disables channel binding handling by disabling the -PLUS authentication mechanisms in the configuration file:

    disable_sasl_mechanisms:
      - SCRAM-SHA-1-PLUS
      - SCRAM-SHA-256-PLUS
      - SCRAM-SHA-512-PLUS
    

    We also ignore this flag for SASL2 connections when filtering the offered authentication methods by the available user passwords has disabled all -PLUS mechanisms.
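    The decision logic above can be paraphrased compactly. This is a Python paraphrase for clarity, not ejabberd's actual code (ejabberd is Erlang); the flag in question is the SCRAM gs2 "y" flag, which a client sends to mean "I support channel binding but you did not offer it to me".

    ```python
    def accept_y_flag(server_offers_plus: bool, admin_disabled_plus: bool) -> bool:
        """Should the server accept a connection presenting the 'y' gs2 flag?"""
        if admin_disabled_plus:
            # TLS is terminated by an approved proxy; -PLUS was disabled on
            # purpose, so the missing channel-binding offer is expected.
            return True
        if server_offers_plus:
            # We offered -PLUS mechanisms but the client never saw them:
            # likely a MITM stripping the offer. Abort the connection.
            return False
        return True   # the server never offered -PLUS anyway

    print(accept_y_flag(server_offers_plus=True, admin_disabled_plus=False))  # -> False
    print(accept_y_flag(server_offers_plus=True, admin_disabled_plus=True))   # -> True
    ```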

    ChangeLog

    Core

    • Fix MySQL authentication for TLS connections that required an auth plugin switch
    • Improve handling of the SCRAM 'wanted to use channel-bindings but was not offered one' flag
    • Add ability for mod_options values to depend on other options
    • Don't fail to classify stand-alone chat states
    • Fix some warnings compiling with Erlang/OTP 29 (#4527)
    • ejabberd_ctl: Document how to set empty lists in ejabberdctl and WebAdmin
    • ejabberd_http: Add handling of Etag and If-Modified-Since headers to files served by mod_http_upload
    • ejabberd_http: Ignore whitespaces at end of host header
    • SQL: Add ability to mark that a column can be null in e_sql_schema
    • Tests: Add tests for SASL2
    • Tests: Make table cleanup in tests more robust

    Modules

    • mod_fast_auth : Offered methods are based on available channel bindings
    • mod_http_api : Always hide password in log entries
    • mod_mam : Call store_mam_message hook for messages that user_mucsub_from_muc_archive was filtering out
    • mod_mam_sql : Only provide the new XEP-0431 fulltext field, not old custom withtext
    • mod_muc_room : Fix duplicate stanza-id in muc mam responses generated from local history ( #4544 )
    • mod_muc_room : Fix hook name in commit 7732984 ( #4526 )
    • mod_pubsub_serverinfo : Don&apost use gen_server:call for resolving pubsub host
    • mod_roster : Add support for roster pre-approval ( #4512 )
    • mod_roster : Fix display of groups in WebAdmin when it's a list
    • mod_roster : in WebAdmin page, first execute SET actions, later GET
    • mod_roster_mnesia : Improve transformation code

    mod_invites

    • Makefile: Run invites-deps only when files are missing
    • Fix path to bootstrap files
    • Check at start time the syntax of landing_page option ( #4525 )
    • Send 'Link' http header ( #4531 )
    • Set meta.pre-auth to skip redirect_url if token validated ( #4535 )
    • Many security fixes ( #4539 )
    • Add favicon and change color to match ejabberd branding
    • Enable dark mode
    • Add support for webchat_url
    • Migrate to bootstrap5 and update jquery
    • No inline scripts
    • Make format csrf token
    • Add csrf token to failed post
    • Include js/css deps in static dir
    • Correct hashes for bootstrap 4.6.2
    • Hint at type for landing_page opt
    • Many more security fixes ( #4538 )
    • Check CSRF token in register form
    • Add integrity hashes to scripts and css
    • Comment unused resources
    • Add security headers
    • Remove debug log of whole query parameters (including pw)
    • Don't crash on unknown host from http host header
    • Make creating invite transactional
    • Set overuse limits ( #4540 )
    • Fix broken path when behind proxy with prefix ( #4547 )

    Container and Installers

    • Bump Erlang/OTP to 28.4.1
    • make-binaries: Bump libexpat to 2.7.5
    • make-binaries: Bump zlib to 1.3.2
    • make-binaries: Enable missing crypto features ( #4542 )

    Translations

    • Update Bulgarian translation
    • Update Catalan and Spanish translations
    • Update Chinese Simplified translation
    • Update Czech translation
    • Update French translation
    • Update German translation

    Acknowledgments

    We would like to thank everyone who contributed source code, documentation, and translations for this release:

    Thanks also to all the people contributing in the ejabberd chatroom, issue tracker...

    Improvements in ejabberd Business Edition

    Customers of the ejabberd Business Edition , in addition to all those improvements and bugfixes, also get the following changes:

    • Add p1db backend for mod_auth_fast
    • Fix issue when cleaning MAM messages stored in p1db
    • mod_unread fixes
    • Web push fixes

    Full Changelog

    https://github.com/processone/ejabberd/compare/26.02...26.03

    ejabberd 26.03 download & feedback

    As usual, the release is tagged in the Git source code repository on GitHub .

    The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity .

    For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags .

    The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs . The alternative ejabberd container image is available in ghcr.io/processone/ejabberd .

    If you consider that you've found a bug, please search for it or file a bug report on GitHub Issues .


      Mathieu Pasquet: Poezio 0.16

      news.movim.eu / PlanetJabber • 25 March 2026 • 1 minute

    Almost exactly one year since the last release, here is poezio 0.16.

    Poezio is a terminal-based XMPP client which aims to replicate the feeling of terminal-based IRC clients such as irssi or weechat; to this end, poezio originally only supported multi-user chats and anonymous authentication.

    Features

    A screenshot of poezio showing several test messages, with two of them moderated, one with a reason and the other without.

    • Receiving side of moderation ( XEP-0425 ) and retraction ( XEP-0424 ) events
    • New /report plugin to report spam ( XEP-0377 )
    • New /tcp-reconnect to kill TCP connections on faulty networks ( #3406 ).
    • Due to the CA store being used by default since 0.15, the historical TOFU way of managing certificates in poezio was broken. There is now a tls_verify_cert option that can be set to false if the user so wishes.
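    Assuming the option lives in poezio's main configuration file (typically ~/.config/poezio/poezio.cfg; the path and section name shown here are a plausible guess, check your install), restoring the TOFU behaviour would look something like:

    ```ini
    # ~/.config/poezio/poezio.cfg (path and section assumed)
    [Poezio]
    # Disable CA-store certificate verification so the historical
    # trust-on-first-use (TOFU) handling applies again.
    tls_verify_cert = false
    ```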

    Fixes

    • Several fixes around carbons ( XEP-0280 ) ( #3626 , #3627 )
    • The roster is now kept until the XMPP session ends (which means that on a disconnect with smacks ( XEP-0198 ) enabled, the roster is kept until the session times out) ( #3614 )
    • Traceback in /affiliations due to slixmpp’s rust JID rewrite
    • Traceback in the /bookmarks tab
    • Infinite syncing of MAM history due to bad paging
    • Saving remote MUC bookmarks no longer forces the JID as the bookmark name
    • pkg_resources is no longer used.
    • Plenty of DeprecationWarnings have been removed.
    • /set option=value mishandled the = separator and displayed the wrong result.
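    A bug of that kind usually comes from splitting the argument on every = instead of only the first one, which mangles values that themselves contain =. A minimal, hypothetical sketch of the robust parse (not poezio's actual code):

    ```python
    def parse_set(arg: str) -> tuple[str, str]:
        # Split only on the first '=' so values containing '=' survive intact.
        option, _, value = arg.partition('=')
        return option.strip(), value

    print(parse_set('password=a=b'))  # ('password', 'a=b')
    ```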

    Internal

    • Plenty of typing updates and fixes

    Removals

    • Removal of uv as a first-party target for launch/update.sh (no vcware).
    • Removal of remaining origin-id usage.

      Erlang Solutions: Meet the Team: Viktoria Laufer

      news.movim.eu / PlanetJabber • 23 March 2026 • 2 minutes

    Intro
    In this edition of our Meet the Team series, we’d like to introduce Viktoria, Project Manager at Erlang Solutions.

    She shares what it’s like to work across complex, multilingual projects, reflects on the highlights of her role so far, and gives a glimpse into the experiences that have shaped her journey.


    What does your role as Project Manager at Erlang Solutions involve, and what have been some highlights so far?


    Working as a Project Manager at Erlang Solutions is never dull. I had the opportunity to join a fascinating project focused on developing an intelligent virtual assistant for MUSE , the Italian science museum. My role spans multiple responsibilities, combining project management, Scrum Master duties, and product management.


    Beyond building something innovative, we also faced the added complexity of a language barrier. Optimising a chatbot for accuracy required us to rethink how retrieval and validation function when the primary knowledge base is not in English, but in highly specialised Italian scientific language curated by museum experts.


    In close collaboration with EbitMax and MUSE, we ensured that the system accurately reflects a carefully curated and continuously evolving knowledge base, while supporting interactions in Italian, German, and English.

    What do you enjoy most about working at Erlang Solutions?


    What I enjoy most is working closely with my team—exchanging ideas, solving problems, and learning together. I feel fortunate to collaborate across all business units, where everyone contributes to improving Tourio, our product. I also appreciate the opportunity to create new processes, continuously learn, and grow while tackling new challenges.

    Most importantly, I value the flexibility of remote work. I used to travel extensively as a digital nomad; now I’m more settled, but living in a place with limited job opportunities makes this role even more meaningful to me.

    Outside of work, how do you like to spend your time?


    Outside of work, whenever the wind picks up, you’ll find me kitesurfing with my friends and my boyfriend. Before joining Erlang Solutions, I worked as a kite instructor, and I still occasionally teach on weekends.

    When it’s not windy, I stay active with calisthenics or spinning at the gym. Sundays are usually spent cheering on my boyfriend at his local football matches. I also travel frequently to Hungary to spend time with my family and my dog—or to any destination where I can get back on the water.

    Final thoughts

    A big thank you to Viktoria for sharing her story with us. From leading collaborative projects to embracing new challenges, her approach reflects the curiosity and drive that shape our work at Erlang Solutions.

    Stay tuned for more Meet the Team stories, where we continue to spotlight the people behind the technology.

    The post Meet the Team: Viktoria Laufer appeared first on Erlang Solutions .


      Mathieu Pasquet: slixmpp v1.14

      news.movim.eu / PlanetJabber • 14 March 2026

    This release brings support for three new XEPs, fixes one specific bug that was breaking poezio, and includes a lot of internal improvements, among them typing hints and documentation.

    Thanks to everyone involved for this release!

    Removals

    • The obsolete UnescapedJID class was removed.

    Features

    • New plugin: XEP-0462 (Pubsub Type Filtering)
    • New plugin: XEP-0511 (Link Metadata)
    • New plugin: XEP-0478 (Stream Limits Advertisement)
    • StanzaBase objects now have a pretty_print() method, for better readability.
    • XEP-0050 : use of the slixmpp "internal API" to handle commands
    • Added the missing unescape_node() method to the rust JID implementation

    Fixes

    • Fixed a bug in XEP-0410 when changing nicks

    Documentation

    • Added a lot of docstrings, and improved other ones
    • Lots of typing updates as well

    Internal, CI & build

    • Doctests are now run
    • Doc dependencies are now part of the pyproject file
    • pyo3 update
    • performance micro-optimization on session establishment

    Links

    You can find the new release on codeberg , pypi , or, in a short while, in the distributions that package it.

    Previous version: 1.13.0 .