

      Mathieu Pasquet: slixmpp v1.14

      news.movim.eu / PlanetJabber • 5 days ago

    This release brings support for three new XEPs, fixes one specific bug that was breaking poezio, and includes many internal improvements, such as typing hints and documentation.

    Thanks to everyone involved for this release!

    Removals

    • The obsolete UnescapedJID class was removed.

    Features

    • New plugin: XEP-0462 (Pubsub Type Filtering)
    • New plugin: XEP-0511 (Link Metadata)
    • New plugin: XEP-0478 (Stream Limits Advertisement)
    • StanzaBase objects now have a pretty_print() method, for better readability.
    • XEP-0050: use the slixmpp "internal API" to handle commands
    • Added the missing unescape_node() method to the Rust JID implementation
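    The new pretty_print() method presumably renders a stanza's XML with indentation. A rough illustration of the effect using only the Python standard library (the exact slixmpp output format is an assumption, not taken from the release):

```python
# Hedged sketch: approximates what an indented stanza dump looks like.
# slixmpp's actual pretty_print() may format its output differently.
import xml.dom.minidom

raw = '<message to="juliet@example.com"><body>Hello</body></message>'
pretty = xml.dom.minidom.parseString(raw).toprettyxml(indent="  ")
print(pretty)
```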

    Fixes

    • Fixed a bug in XEP-0410 when changing nicks

    Documentation

    • Added a lot of docstrings, and improved other ones
    • Lots of typing updates as well

    Internal, CI & build

    • Doctests are now run
    • Doc dependencies are now part of the pyproject file
    • pyo3 update
    • performance micro-optimization on session establishment

    Links

    You can find the new release on codeberg, pypi, or the distributions that package it in a short while.

    Previous version: 1.13.0.


      Erlang Solutions: What Breaks First in Real-Time Messaging?

      news.movim.eu / PlanetJabber • 12 March 2026 • 6 minutes

    Real-time messaging sits at the centre of many modern digital products. From live chat and streaming reactions to betting platforms and collaborative tools, users expect communication to feel immediate and consistent.

    When pressure builds, the system doesn’t typically collapse. It starts to drift. Messages arrive slightly late, ordering becomes inconsistent, and the platform feels less reliable. That drift is often the first signal that chat scalability is under strain.

    Imagine a live sports final going into overtime. Millions of viewers react at once. Messages stack up, connections remain open, and activity intensifies. For a moment, everything appears stable. Then delivery slows. Reactions fall slightly out of sync. Some users refresh, unsure whether the delay is on their side or yours.

    Those moments reveal whether the system was designed with fault tolerance in mind. If it was, the platform degrades predictably and recovers. If it wasn’t, small issues escalate quickly.

    This article explores what breaks first in real-time messaging and how early architectural decisions determine whether users experience resilience or visible strain.

    Real-Time Messaging in Live Products

    Live products place real-time messaging under immediate scrutiny. On sports platforms, streaming services, online games, and live commerce sites, messaging is visible while it happens. When traffic spikes, performance issues are exposed immediately.

    User expectations are already set. WhatsApp reports more than 2 billion monthly active users, shaping what instant communication feels like in practice. That expectation carries into every live experience, whether it is chat, reactions, or collaborative interaction.

    Source: Statista

    Live environments concentrate demand rather than distribute it evenly. Traffic clusters around specific moments. Concurrency can double within minutes, and those users remain connected while message volume increases sharply. That concentration exposes limits in chat scalability far more quickly than steady growth ever would.

    The operational impact tends to follow a familiar pattern:

    Live scenario        | System pressure                   | Business impact
    ---------------------|-----------------------------------|--------------------------
    Sports final         | Sudden surge in concurrent users  | Latency becomes public
    Product launch       | Burst of new sessions             | Onboarding friction
    Viral stream moment  | Rapid fan-out across channels     | Inconsistent experience
    Regional spike       | Localised traffic surge           | Infrastructure imbalance

    For live platforms, volatility comes with the territory.

    When delivery slows or behaviour becomes inconsistent, engagement drops first. Retention follows. Users rarely blame infrastructure. They blame the platform.

    Designing for unpredictable load requires architecture that assumes spikes are normal and isolates failure when it occurs. If you operate a live platform, that discipline determines whether users experience seamless interaction or visible strain.

    High Concurrency and Chat Scalability

    In live environments, real-time messaging operates under sustained concurrency rather than occasional bursts. Users remain connected, they interact continuously, and activity compounds when shared moments occur.

    High concurrency is not simply about having many users online. It involves managing thousands, sometimes millions, of persistent connections sending and receiving messages at the same time. Every open connection consumes resources, and messages may need to be delivered to large groups of active participants without delay.
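    The cost of that fan-out grows multiplicatively: each inbound message in a shared channel becomes one delivery per other connected participant. A back-of-envelope sketch with illustrative numbers (these figures are assumptions, not taken from the article):

```python
# Illustrative fan-out arithmetic: outbound delivery volume is the inbound
# message rate multiplied by the number of other participants in the channel.
participants = 50_000      # concurrent users in one live channel (assumed)
inbound_per_sec = 200      # messages posted per second at a peak moment (assumed)
deliveries_per_sec = inbound_per_sec * (participants - 1)
print(f"{deliveries_per_sec:,} deliveries/second")  # → 9,999,800 deliveries/second
```

    Even modest posting rates translate into millions of deliveries per second once a channel is large, which is why routing and fan-out layers are usually the first components to show strain.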

    This is where chat scalability really gets tested.

    In steady conditions, most systems appear stable. When demand synchronises, message fan-out increases rapidly, routing paths multiply, and coordination overhead grows. Small inefficiencies that were invisible during testing begin to surface. Response times drift. Ordering becomes inconsistent. Queues expand before alerts signal a problem.

    High concurrency does not introduce entirely new issues. It reveals architectural assumptions that only become visible at scale. Concurrency increases are predictable in live systems. The risk lies in whether the messaging layer can sustain that pressure without affecting user experience.

    Messaging Architecture Limits

    The pressure created by high concurrency does not stay abstract for long. It shows up in the messaging architecture.

    When performance degrades under load, the root cause usually sits there. At scale, every message must be routed, processed, and delivered to the correct subscribers. In distributed systems, that requires coordination across servers, and coordination carries cost. Under sustained traffic, small inefficiencies compound quickly.

    Routing layers can become bottlenecks when messages must propagate across multiple nodes. Queues expand when incoming traffic outpaces processing capacity. Latency increases as backlogs grow. If state drifts between nodes, messages may arrive late or appear out of sequence.

    This is where the earlier discussion of chat scalability becomes tangible. It is not only about supporting more users. It is about how efficiently the architecture distributes load and maintains consistency when concurrency remains elevated.

    These limits rarely appear during controlled testing with predictable traffic. They emerge under real usage, where concurrency is sustained and message patterns are uneven.

    Well-designed systems account for this from the outset. They reduce coordination overhead, isolate failure domains, and scale horizontally without introducing fragility. When they do not, performance drift becomes visible long before a full outage occurs, and users feel the impact immediately.

    Fault Tolerance and Scaling

    If you operate a live platform, this is where design choices become visible.

    Once architectural limits are exposed, the question is how your system behaves as demand continues to rise.

    Scaling real-time messaging is about making sure that when components falter, the impact is contained. Distributed systems are built on a simple assumption: things break. You will see restarts, reroutes and unstable network conditions. But the real test is whether your architecture absorbs the shock or amplifies it.

    Systems built with fault isolation in mind tend to recover locally. Load shifts across nodes. Individual components stabilise without affecting the wider service. Systems built around central coordination points are more vulnerable to ripple effects.

    In practical terms, the difference shows up as:

    • Localised disruption rather than cascading instability
    • Brief slowdown instead of prolonged degradation
    • Controlled recovery rather than platform-wide interruption

    These behaviours define whether users experience resilience or instability.

    Fault tolerance determines how the system behaves when conditions are at their most demanding.

    Real-Time Messaging in Entertainment

    Entertainment platforms expose weaknesses in real-time messaging quickly because traffic converges rather than building steadily over time.

    When a live event captures attention, users respond together. Demand rises sharply within a short window, and those users remain connected while interaction increases. The stress on the system comes not from gradual growth, but from concentrated activity.

    Take the widespread Cloudflare outage in November 2025. Because Cloudflare is a core infrastructure provider handling a significant share of global internet traffic, its disruption affected major platforms simultaneously. The issue was due to underlying infrastructure, but the impact was immediate and highly visible because so many users were active at once.

    Live gaming environments operate under comparable traffic patterns by design. During competitive matches on FACEIT, large numbers of players remain connected while scores, rankings, and in-game events update continuously. Activity intensifies around key moments, increasing message throughput while persistent connections stay open.


    Across these environments, the pattern is consistent. Users connect simultaneously, interact continuously, and expect immediate feedback. When performance begins to drift, the impact is shared rather than isolated.

    A Note on Architecture

    This is where architectural choices begin to matter.

    Platforms that manage sustained concurrency and recover predictably under pressure tend to share certain structural characteristics. In messaging environments, MongooseIM is one example of infrastructure designed around those principles.

    In practical terms, that means:

    • Supporting large numbers of persistent connections without central bottlenecks
    • Distributing load across nodes to reduce coordination overhead
    • Containing failure within defined boundaries rather than allowing it to cascade
    • Maintaining message consistency even when traffic intensifies

    These design choices do not eliminate volatility. They determine how the system behaves when volatility hits.

    In live entertainment platforms, that distinction shapes whether pressure remains internal or becomes visible to users.

    Conclusion

    Real-time messaging raises expectations that are easy to meet under steady conditions and far harder to sustain when attention converges.

    What breaks first is rarely availability. It is timing. It is the subtle drift in delivery and consistency that users notice before any dashboard signals a failure.

    Live environments make that visible because traffic arrives together and interaction compounds quickly. Concurrency is not the exception. It is the operating model. Whether the experience holds depends on how the architecture distributes load and contains failure.

    Designing for that reality early makes scaling more predictable and reduces the risk of visible strain later. If you are building or modernising a platform where real-time interaction matters, assess whether your messaging architecture is prepared for sustained concurrency. Get in touch to continue the conversation.

    The post What Breaks First in Real-Time Messaging? appeared first on Erlang Solutions .


      Erlang Solutions: Reliability is a Product Decision

      news.movim.eu / PlanetJabber • 9 March 2026 • 6 minutes

    Reliability is often treated as something that can be improved once a system is live. When things break, the focus shifts to monitoring, incident response, and recovery, with the belief that resilience can be strengthened over time as scale reveals weaknesses.

    In reality, most of it is set much earlier.

    Long before a system faces sustained demand, its underlying design has already shaped how it will respond under pressure. Choices about service boundaries, data handling, deployment models, and fault management influence whether a problem stays contained or spreads.

    The conversation is gradually moving from reliability to resilience because distributed systems rarely operate without failure. The more useful question is how a platform continues running when parts of it inevitably fail. The sections that follow explore how early architectural decisions shape that outcome, why their impact becomes more visible at scale, and what it means to build resilience from the beginning rather than react to it later.

    Early Decisions Create Long-Term Behaviour

    Large-scale failures rarely emerge without warning. What appears sudden at scale is often the predictable outcome of structural decisions made earlier, when different commercial pressures shaped priorities.

    In the early stages of a product, the focus is understandably on delivering value quickly, reducing development friction, and validating the market. These are rational business decisions. However, architecture chosen primarily for speed can quietly define the operational ceiling of the system, setting limits that only become visible once demand increases.

    Systems Behave as They Were Built to Behave

    Outages are often described as “unexpected events,” but distributed systems typically respond to pressure in ways that reflect their design. How services communicate, how state is shared, where dependencies sit, and how failure is managed all influence whether disruption remains contained within a single component or spreads across the wider platform.

    Research from Google’s Site Reliability Engineering work shows that around 70% of outages are caused by changes to a live system, such as configuration updates, deployments, or operational changes, rather than by hardware failures. Similarly, the Uptime Institute’s Annual Outage Analysis identifies configuration errors and dependency failures as leading causes of major disruption.

    These findings are unsurprising. In distributed environments, dependencies increase and recovery paths become harder to trace, which means that architectural shortcuts that once seemed minor can have disproportionate impact under sustained load. Systems tend to fail along the structural lines already drawn into them, and those lines are shaped by early design decisions, even when those decisions were commercially sensible at the time.

    Trade-offs That Compound Over Time

    Architectural decisions are rarely made under ideal conditions. Early on, speed to market matters, simplicity reduces friction, and shipping is the priority. A tightly coupled service can help teams move faster, a single-region deployment keeps things straightforward, and limited observability may feel acceptable when traffic is still modest.

    But over time, these trade-offs compound.

    • Limited isolation between services makes it easier for problems in one area to affect others.
    • Shared infrastructure can create hidden dependencies that only become visible under heavy demand.
    • Concentrated regional deployments increase the impact of a local outage or cloud disruption.
    • Observability that felt sufficient at launch can fall short when trying to understand complex behaviour at scale.

    At a smaller scale, these constraints can go largely unnoticed. As usage increases and demand becomes less predictable, they start to shape how the system responds under pressure. What once felt manageable begins to show its limits.

    This is rarely about a lack of technical ability. It is simply what happens as complexity builds over time. Every system reflects the trade-offs made in its early stages, whether those choices were deliberate or just practical at the time.

    When Architecture Becomes Business Exposure

    As systems grow in scale and complexity, the way they are built starts to show up in practical ways. When services are tightly connected, recovery takes longer. When failures are not well contained, a problem in one area can disrupt others. Incidents become harder to resolve and more expensive to manage.

    The cost of disruption is not abstract. ITIC’s 2023 Hourly Cost of Downtime Survey reports that more than 90% of mid-size and large enterprises estimate a single hour of downtime costs over $300,000, and roughly 41% place that figure between $1 million and $5 million per hour. At that level, even short-lived incidents carry material financial impact.
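    Those figures make the arithmetic of even a brief incident easy to check. Taking the survey's $300,000-per-hour lower bound (the incident duration below is an illustrative assumption):

```python
# Downtime cost at the ITIC lower bound quoted above.
hourly_cost = 300_000          # USD per hour, lower-bound estimate from the survey
minutes_down = 20              # an illustrative "short-lived" incident (assumed)
incident_cost = hourly_cost * minutes_down / 60
print(f"${incident_cost:,.0f}")  # → $100,000
```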

    For organisations that rely on digital platforms to generate revenue, those numbers represent missed transactions, operational strain, and damage to customer trust. At that point, system design is no longer just an engineering decision. It becomes a business decision with measurable financial consequences.

    When Failure Is Public

    Some systems fail quietly, disrupting internal workflows or back-office processes with limited external visibility. Others operate in real time, where performance issues are experienced directly by customers, investors, and partners.

    In sectors such as entertainment, demand is often synchronised and predictable. Premieres, sporting events, ticket releases, and major launches concentrate traffic into specific windows, placing simultaneous pressure on application layers, databases, and third-party services. These moments are not unusual spikes; they are built into the operating model. Platforms designed for large-scale engagement are expected to handle peak demand as part of normal business activity.

    That expectation changes the stakes. When performance degrades in these environments, it is noticed immediately and often publicly. Frustration spreads quickly, confidence can shift in hours, and what might have been an operational issue becomes a visible business problem.

    In this context, resilience shapes whether a high-demand event reinforces confidence in the platform or exposes its limits. When failure is experienced directly by users, it moves beyond internal metrics and becomes part of the customer experience itself.

    Designing for Resilience

    If failure is inevitable in distributed systems, then resilience has to be built in from the start. It cannot be something added later when the first serious incident forces the issue.

    Resilient systems are structured so that problems stay contained. A fault in one component should not automatically take others down with it, and services should be able to keep operating even when parts of the system are degraded. External dependencies will fail. Traffic will spike. The design needs to account for that reality.

    This way of thinking shifts the focus. Instead of trying to prevent every possible issue, teams concentrate on limiting the impact when something goes wrong. Speed still matters, but so does the ability to grow without introducing instability.

    Technology choices can support that approach. The Elixir programming language, running on the BEAM, was designed for environments where downtime had real consequences. Its structure reflects that:

    • Applications are made up of many small, independent processes rather than large, tightly connected components.
    • Failures are expected and handled locally.
    • Supervision and recovery are built into the runtime so the wider system keeps running.

    No language guarantees reliability, but tools built around fault tolerance make it easier to create systems that continue operating under pressure.
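    The supervision-and-restart idea described above can be mimicked in miniature outside the BEAM. A toy Python analogue (all names are illustrative, and Elixir/OTP supervisors are far richer than this sketch):

```python
# Toy "supervisor": restarts a crashing worker a bounded number of times,
# keeping the failure contained rather than letting it propagate.
def flaky_worker(state):
    # Simulates a transient fault: fails on the first two runs, then succeeds.
    state["attempts"] += 1
    if state["attempts"] < 3:
        raise RuntimeError("transient fault")
    return "ok"

def supervise(worker, state, max_restarts=5):
    for _ in range(max_restarts):
        try:
            return worker(state)
        except RuntimeError:
            continue  # restart locally; the rest of the system keeps running
    raise RuntimeError("restart limit exceeded")

state = {"attempts": 0}
result = supervise(flaky_worker, state)
print(result, state["attempts"])  # → ok 3 (succeeds after two contained restarts)
```

    The point is the containment boundary: the worker's failures never escape the supervisor, which is the behaviour OTP builds into the runtime itself.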

    To conclude

    By the time serious issues appear at scale, most of the important decisions have already been made.

    Failure is part of running distributed systems. What matters is whether problems stay contained and whether the platform keeps operating when something goes wrong.

    Thinking about resilience early makes growth easier later. It helps protect revenue, maintain trust, and avoid the instability that forces costly redesigns. If you are building distributed platforms where reliability directly affects performance and reputation, now is the time to treat resilience as a core design decision. Get in touch to discuss how to build it into your architecture from the start.

    The post Reliability is a Product Decision appeared first on Erlang Solutions .


      JMP: Google Wants to Control Your Device

      news.movim.eu / PlanetJabber • 24 February 2026 • 4 minutes

    Today we join with other organizations in signing the open letter to Google about their plans to require all Android app developers to register centrally with Google in order to distribute applications outside the Google Play Store. You should go read the letter; it is quite well done. We also want to talk a little about why sideloading (aka installing apps on your own device, or "directly installing" as the letter puts it) is important to us.

    In early fall of 2024, Google Play decided to remove and ban the Cheogram app from their store. Worse, since it had been listed by Play as "malware", Google's Play Protect system began warning existing users that they should uninstall the existing app from their devices. This was, as you might imagine, very bad for us. No new customers could get the app, and existing customers were contacting support, unsure how to get back into their account after being tricked into removing it.

    After a single submission to Google Play appealing this decision, they came back very quickly affirming "yes, this app seems to be malware." No indication of why they thought that, just a decision. At this point the box we could use to submit new appeals also went away. With no appeals process available, and requests to what little support Google has going totally ignored, it was not clear if we were ever going to be able to distribute our app in the Play Store again. After months of being delisted, we finally got a lucky break. In talking with some of our contacts at the Software Freedom Conservancy, they offered to write to some of their contacts deep inside Google and ask if there was anything that could be done. When their contact pushed internally at Google to get more information, suddenly the app was re-activated in the Play Store.

    I want to be clear here. We did not change the app. We did not upload a new build. This app Google had been so very, very sure was “malware” was fully reinstated and available to customers again the moment a human being bothered to actually look at it. It was of course obvious to any human looking at the app that it is not malware and so they restored it to the store immediately. They never replied to that final request, and no details about what happened were ever made available. From that point on Google has essentially pretended that this never happened and the app was always great and in the Play Store. If we had not been able to get in contact with a human and really push them to take a look, however, we would never have been reinstated.

    Despite our good fortune, we still lost months of potential business over this. Of course you’ve heard stories like this before. Stories of Play Store abuse are a dime a dozen, and most of them don’t have the “happy” ending ours does. What does this have to do with “sideloading” and the open letter? Well, despite all the months of lost business, and despite all the existing customers being told to uninstall their app if they had got it from Play Store, we lost no more than 10% of our customers and continued to onboard new ones during the entire time. How is this possible? The main reason is direct installs (“sideloading”). The majority of our customers get the app from our preferred sources on F-Droid and Itch. These customers were not told by Play Protect to remove their app. During the time we were delisted from Play Store, we removed the link to Play Store from our website and new customers were instructed to use F-Droid. Of course we still lost some business here, as some people were unable or unwilling to use F-Droid or other direct install options, but the damage was far, far less than it might have otherwise been.

    What Google is proposing would allow them to ban anyone from creating apps which may be directly installed without their approval. One of the reasons they say they need to do this is to protect people from malware! Yet even if this was the narrow purpose of a ban it would still routinely catch apps which are not nefarious in any way, just as ours wasn’t. Furthermore, with all apps and developers registered in their system, a ban under these new rules could result in everyone being told to uninstall the app by Play Protect, and not just those who got it from Play Store to begin with. This would leave app developers who are erroneously marked by Google as malware with no options, no recourse, no way to appeal, and praying there is a friend of a friend who knows someone deep in Google who can poke the right button. This is just not an acceptable future for the world’s largest mobile platform.


      ProcessOne: Fluux Messenger 0.13.0 - Native TCP Connection & Complete EU Language Coverage

      news.movim.eu / PlanetJabber • 12 February 2026 • 2 minutes

    Fluux Messenger 0.13.0 - Native TCP Connection & Complete EU Language Coverage

    We're excited to announce Fluux Messenger 0.13.0, featuring native TCP connections, complete European language coverage, and significant performance improvements.

    Also, we recently passed the first 100 stars on GitHub. Thank you for your support and for believing in open, sovereign messaging!


    What's New

    Native TCP Connection Support on Desktop

    Desktop users can now connect directly to XMPP servers via native TCP through our WebSocket proxy implementation. This means lower latency, better reliability, and native protocol handling. No more browser limitations.

    We believe that's a nice milestone worth a blog post. Until now, desktop users needed their XMPP server to support WebSocket connections. With v0.13.0, you can connect to any standard XMPP server. We estimate this will enable 80% of users who couldn't connect before to finally use Fluux Messenger with their existing servers.

    Complete European Union Language Coverage

    Fluux Messenger now supports all 26 EU languages, making it truly pan-European. From Bulgarian to Swedish, Croatian to Maltese, we've got you covered. Languages include:

    • Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Icelandic, Irish, Italian, Latvian, Lithuanian, Maltese, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish.

    Dynamic locale loading means faster initial startup while maintaining comprehensive language support.
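    "Dynamic locale loading" in this sense usually means fetching a translation bundle only on first use. A hedged sketch of the pattern (Fluux's real implementation is not shown in this post; the function name and catalog data are illustrative):

```python
# Lazy locale loading: a bundle is parsed only the first time it is needed,
# so startup does not pay for every supported language up front.
import functools

CATALOGS = {"fr": {"hello": "bonjour"}, "de": {"hello": "hallo"}}  # stand-in data

@functools.lru_cache(maxsize=None)
def load_locale(code):
    # A real client would fetch and parse a translation file here on demand.
    return CATALOGS[code]

print(load_locale("fr")["hello"])  # → bonjour; the "de" bundle is never touched
```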

    If you spot any translation issues, feel free to contribute on our GitHub repository .

    Clipboard Image Paste

    Paste images directly from your clipboard with Cmd+V (macOS) or Ctrl+V (Windows/Linux). Copy from anywhere, paste into Fluux. It (should) just work ;). Tested and confirmed with Safari's "Copy Image" feature and system clipboard operations so far.

    Clear Local Data on Logout

    New privacy option to completely clear local data when logging out. Perfect for shared devices or when you need a fresh start.

    Performance & Reliability Improvements

    • Smarter Message History Loading - We've completely redesigned our Message Archive Management (MAM) strategy. Message history now loads intelligently based on your scrolling behavior and available data, reducing unnecessary server requests.

    • Better Resource Management - Fixed duplicate avatar fetches when hashes haven't changed, reducing bandwidth usage and improving profile picture loading times.

    • Rock-Solid Scroll Behavior - Media loading no longer disrupts your scroll position. The scroll-to-bottom feature now works reliably, even when images and files are loading.

    • Better Windows Tray - Improved tray behavior on Windows for a more native experience.

    • macOS Sleep Recovery - Fixed layout corruption that could occur after your Mac woke from sleep.

    UI & UX Polish

    • Consistent attachment styling across light and dark themes
    • Fixed sidebar switching with Cmd+U keyboard shortcut
    • Improved new message markers - position correctly maintained when switching conversations
    • Better context menus - always stay within viewport bounds, no more cut-off menus
    • Markdown preview accuracy - bold and strikethrough now properly shown in message previews

    Linux Packaging

    Improved Linux packaging using native distribution tools for better integration with your system package manager.

    Developer Experience

    Centralized notification state with viewport observer provides better performance and more reliable notification handling across the application.


    Get Fluux Messenger

    Download for Windows, macOS, or Linux from the latest Release page.

    Source code is available on GitHub.


    Your messages, your infrastructure: no vendor lock-in.
    Sovereign by design. Built in Europe, for everyone.


      ProcessOne: ejabberd 26.02

      news.movim.eu / PlanetJabber • 11 February 2026 • 2 minutes

    🚀 ejabberd 26.02

    ChangeLog

    • Fixes issue with adding hats data in presences sent by group chats (#4516)
    • Removes the mod_muc_occupantid module and integrates its functionality directly into mod_muc (#4521)
    • Fixes issue with occupant-id values being reset after an ejabberd restart (#4521)
    • Improves handling of mediated group chat invitations in mod_block_strangers (#4523)
    • Properly installs mod_invites templates in the make install call (#4514)
    • Better errors in mod_invites (#4515)
    • Accessibility improvements in mod_invites (#4524)
    • Improves handling of requests with invalid URL-encoded values in ejabberd_http
    • Improves handling of invalid responses to disco queries in mod_pubsub_serverinfo
    • Fixes conversion of MUC room configs from ejabberd versions older than 21.12
    • Fixes autologin in WebAdmin

    If you are upgrading from a previous version, there are no changes in SQL schemas, configuration, API commands or hooks.

    Notice that mod_muc now incorporates the features of mod_muc_occupantid, and that module has been removed. You can remove mod_muc_occupantid from your configuration file, as it is unnecessary now and ejabberd simply ignores it.

    Check also the commit log: https://github.com/processone/ejabberd/compare/26.01...26.02

    Acknowledgments

    We would like to thank everyone who contributed source code and translations:

    And also to all the people contributing in the ejabberd chatroom, issue tracker...

    Improvements in ejabberd Business Edition

    Customers of the ejabberd Business Edition , in addition to all those improvements and bugfixes, also get the following change:

    • Change default_ram_db from mnesia to p1db when using p1db cluster_backend

    ejabberd 26.02 download & feedback

    As usual, the release is tagged in the Git source code repository on GitHub.

    The source package and installers are available on the ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

    For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

    The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs . The alternative ejabberd container image is available in ghcr.io/processone/ejabberd .

    If you believe you've found a bug, please search for or file a bug report on GitHub Issues .


      Mathieu Pasquet: slixmpp v1.13.2 (and .1)

      news.movim.eu / PlanetJabber • 8 February 2026

    Version 1.13.0 shipped with a packaging bug that affects people trying to build wheels using setuptools. 1.13.1 was an attempt to fix it (it made packaging from git work, somehow), and 1.13.2 is the correct fix, thanks to a single additional character.

    There are no other changes, and pypi wheels are not affected because they are built in CI with uv .

    Links

    You can find the new release on codeberg , pypi , or the distributions that package it in a short while.

    Previous version: 1.13.0 .


      JMP: Newsletter: Referral Event!

      news.movim.eu / PlanetJabber • 3 February 2026 • 2 minutes

    Hi everyone

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    This month begins JMP’s 2026 referral event! From now until June 1st, every person you refer to JMP rewards you triple! That’s three free months of service for every person who signs up and pays.

    For those who haven’t explored the referral system much, here’s a short refresher on how it works. In your account settings there is a command Refer a friend for free credit. You will be presented with a message that reads like so:

    This code will provide credit equivalent to one month of service to anyone after they sign up and pay: CODE

    These remaining codes are single use and give the person using them a free month of JMP service. You will receive credit equivalent to one month of service after someone uses any of your codes and their payment clears.

    When someone uses this code during their signup process with JMP, they will sign up and pay as usual, but after they pay they will get a free month deposited to their initial balance as well. Once their payment clears (about 90 days for a credit card transaction), your account will also get a month’s worth of credit deposited; for referrals rewarded during this event, it will be three months! This code is appropriate for publishing publicly and can be used over and over again.

    Under this message in the command result is a list of invite codes (if you run out, you can always ask JMP support for more). These codes are each single-use and give the one person who uses them a free month of JMP service right away, without paying anything. Then, if they do pay, and once their payment clears (about 90 days for a credit card transaction), your account will also get a month’s worth of credit deposited; for referrals rewarded during this event, it will be three months! These codes are appropriate for giving to trusted friends or family members, and each code can only be used once.

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!


      ProcessOne: Introducing Fluux Messenger: A Modern XMPP Client Born from a Holiday Coding Session

      news.movim.eu / PlanetJabber • 28 January 2026 • 4 minutes


    It was mid-December 2025, just before the Christmas break. My favorite XMPP client had broken on the main branch I was using. Frustrated, rather than waiting for a fix, I decided to give building a client another try. This time, I would use the opportunity to explore a new set of tools I was curious about: TypeScript, React, and Tauri.

    What started as a weekend experiment quickly turned into something more. Using AI tools to accelerate the initial setup, I found myself totally absorbed in the work. Days blurred together as features took shape. By early January, what had been a personal scratch project had become a surprisingly capable desktop client.

    When I demonstrated it to my coworkers at ProcessOne, their reaction was unanimous: this was worth pursuing seriously. We decided to put the entire company behind the effort. Today, we're sharing the first public release of Fluux Messenger .

    More Than Just Another Chat Client

    At ProcessOne, we've spent over two decades building messaging infrastructure. Our XMPP server, ejabberd, has powered some of the world's largest messaging deployments. But we've always approached messaging from the server side. Fluux Messenger represents something new for us, bringing that same engineering philosophy to the client.

    The key innovation isn't in the UI (though we're proud of it). It's in what lies beneath: the Fluux SDK .

    The Fluux SDK: Bridging Two Worlds

    Anyone who has built an XMPP client knows the challenge. XMPP is event-driven, signal-based, asynchronous. Modern UI frameworks expect reactive state, typed data, and predictable updates. There's an impedance mismatch between these two worlds.

    The Fluux SDK is our answer. It provides a high-level TypeScript API that handles all XMPP complexity internally. Developers work with clean types and reactive state, not XML stanzas. The SDK maintains a local cache, manages reconnection, and handles synchronization intelligently. It's not a passive pipe between server and UI; it's an active participant that makes the client easier to develop and more resilient.
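To illustrate the idea, here is a hypothetical sketch (none of these names come from the actual Fluux SDK) of what "typed, reactive state instead of XML stanzas" looks like from the UI side: the UI subscribes to a typed message list, and the transport layer feeds normalized values in.

```typescript
// Hypothetical sketch, not the real Fluux SDK API: a headless client that
// exposes typed, reactive state so UI code never touches raw stanzas.

type Message = { id: string; from: string; body: string };
type Listener = (messages: Message[]) => void;

class MiniChatClient {
  private messages: Message[] = [];
  private listeners: Listener[] = [];

  // UI code subscribes to typed state and gets the current snapshot
  // immediately; the return value unsubscribes.
  subscribe(listener: Listener): () => void {
    this.listeners.push(listener);
    listener(this.messages);
    return () => {
      this.listeners = this.listeners.filter((l) => l !== listener);
    };
  }

  // The transport layer would call this when a stanza arrives; the SDK
  // normalizes it into a typed value before notifying subscribers.
  ingest(message: Message): void {
    this.messages = [...this.messages, message];
    this.listeners.forEach((l) => l(this.messages));
  }
}
```

A React or Svelte binding would wrap `subscribe` in the framework's own reactivity primitive, which is why the same headless core can target several UI frameworks.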

    As an example, the client does not tell the SDK which presence to publish when the user steps away from the computer; it simply signals the SDK that it is inactive, and the SDK is responsible for doing the right thing.
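The "client signals, SDK decides" split could be sketched as follows; the function name, types, and grace period are illustrative assumptions, not the actual SDK behavior.

```typescript
// Hypothetical SDK-side presence policy: the client only reports raw
// activity, and the SDK decides what presence to publish. A grace period
// keeps brief pauses from flapping the published presence.

type Activity = "active" | "inactive";
type Presence = "available" | "away";

function decidePresence(
  activity: Activity,
  idleSeconds: number,
  graceSeconds = 120,
): Presence {
  if (activity === "inactive" && idleSeconds >= graceSeconds) {
    return "away";
  }
  return "available";
}
```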

    This three-tiered architecture, Server -> Headless client (SDK) -> UI, lets us apply the same design principles that made ejabberd scalable to the client side. Distributed logic, local-first responsiveness, server-side efficiency.

    What Fluux Messenger Offers Today

    The current release (v0.11.1) already includes substantial functionality:

    • Reliable connection and message management
    • Cross-platform desktop app for Windows, macOS, and Linux, built on Tauri for a lightweight native experience
    • Web version that works great in the browser as well
    • 40+ XMPP extensions (XEPs) implemented, including message archive management, multi-user chat, HTTP file upload, message carbons, and reactions
    • MUC support with @mentions notification and bookmarks
    • Local message cache storage using IndexedDB with automatic sync on reconnect
    • Built-in XMPP console for developers and power users who want to see what's happening under the hood
    • Preliminary admin capabilities letting you manage your ejabberd server directly from the client (still mostly a work in progress)
    • 8 languages supported out of the box
    • Light and dark modes with theme system to come
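The local cache with automatic sync on reconnect can be modeled roughly as below. This is a simplified stand-in (the real client uses IndexedDB; a plain Map keeps the sketch runnable anywhere), and all names are hypothetical.

```typescript
// Simplified model of a local message cache that merges server archive
// pages on reconnect. Keying by stanza id makes the merge idempotent,
// so overlapping archive pages never duplicate messages.

type CachedMessage = { id: string; timestamp: number; body: string };

class MessageCache {
  private byId = new Map<string, CachedMessage>();

  add(message: CachedMessage): void {
    this.byId.set(message.id, message);
  }

  // Called on reconnect with a page of archived messages; returns how
  // many were actually new.
  syncFromArchive(page: CachedMessage[]): number {
    let added = 0;
    for (const m of page) {
      if (!this.byId.has(m.id)) {
        this.byId.set(m.id, m);
        added++;
      }
    }
    return added;
  }

  // Messages in chronological order, for rendering the conversation.
  all(): CachedMessage[] {
    return Array.from(this.byId.values()).sort(
      (a, b) => a.timestamp - b.timestamp,
    );
  }
}
```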

    What I envision for Fluux Messenger is to follow in the footsteps of major projects like Obsidian or VSCode . In my wildest dreams, I expect Fluux Messenger to become as configurable and versatile as those tools.

    The Road Ahead

    This is an ambitious project. The roadmap stretches far into the future, and we&aposre just getting started. Our plans include mobile support through PWA and eventually native apps, expansion to other frameworks (Vue, Svelte), and Kotlin Multiplatform for Android and iOS.

    But ambitious doesn't mean closed. Fluux Messenger is open source under AGPL-3.0. We believe that modern, privacy-respecting messaging shouldn't require vendor lock-in. Connect to any XMPP server. Host it yourself.

    I used AI to bootstrap this project and accelerate my learning phase. Many developers do now. But we are crafters at ProcessOne, and we want to own the responsibility for great code. Those who know me will tell you I'll be relentless until I master React and TypeScript, and they know I will. Fast.

    A Continuation of ejabberd's Vision

    Fluux Messenger represents the client-side continuation of what we started with ejabberd over twenty years ago. The same principles (scalability, reliability, clean architecture) now flow from server to client. If you've trusted ejabberd to power your messaging infrastructure, we hope you'll trust Fluux Messenger to be the interface your users deserve.

    This is the beginning of an exciting journey. We hope you'll join us.


    Get started

    If you share our vision for clean, reactive messaging APIs, we'd love to collaborate. Reach out and let's grow the Fluux community together.