

      Erlang Solutions: ElixirConf US 2025: Highlights from My First ElixirConf

      news.movim.eu / PlanetJabber • 8 September • 3 minutes

    Joining conferences is one of the best perks of working as a Developer at Erlang Solutions. Although I had attended multiple Code BEAM conferences in Europe, ElixirConf US 2025 was my first ElixirConf. The conference had 3 tracks filled with talks from 45+ speakers, and 400+ attendees, both in-person and virtual.

    ElixirConf is a great occasion to connect with other Elixir enthusiasts in the community and to learn what others have been doing, as well as what the Elixir core team is planning for the future of the language.

    The Atmosphere


    Most of the faces were unfamiliar to me, but as expected from the BEAM community, everyone was super friendly. Most were not shy about approaching others and sharing their own or their company’s experiences. The “hallway track” was always lively, during coffee breaks and even during the talks.

    Before the conference began, I had a tough time deciding which talk to attend. At other conferences I’ve been to, most of the talks were interesting, but not all were relevant to my daily work as an Elixir developer. That made it easier to prioritise which talks to attend live and which ones I could catch up on later if they overlapped.

    ElixirConf was different. Many of the talks were not only interesting but also directly relevant to my work, and several were scheduled at the same time. This made it very difficult to decide which session to attend in person and which to leave for the recordings.

    Some standout talks

    Chris McCord’s Keynote: Elixir’s AI Future

    One of the talks I was most looking forward to was the keynote by Chris McCord. I had previously watched the recording of his ElixirConf EU keynote about phoenix.new and was eager to see what new ideas he would bring to Elixir and the Phoenix framework, especially in terms of AI.

    ElixirConf US 2025: Chris McCord

    Chris talked about AI agents, how Elixir is well-suited for building them, and what the future of Elixir and AI might look like. He emphasised that it is not about chasing hype but about staying on the bleeding edge of technology: “building the things we want to build, building the things we want to see.”

    He also shared his perspective on code generation, noting that it has made it easier for newcomers to get started with Elixir and Phoenix. In his view, the community is now in a strong position to attract developers from outside the ecosystem to give it a try.

    Panel: Building Careers, Balancing Life

    Another talk I was eager to hear was the panel discussion hosted by my Erlang Solutions colleague Lorena, together with Allison Randal, Savannah Manning, and Anna Sherman, three women from different backgrounds and stages of their careers.

    ElixirConf US 2025: Panel

    It was great hearing stories from other women in the tech community and feeling inspired. The three panellists shared their stories and the challenges they had faced in their careers. They also talked about the importance of mentors, community, and seeing the big picture, all of which helped them grow. The advice they gave during the talk was both relatable and inspiring.

    Joe Harrow: Beyond Safe Migrations

    I also found Beyond Safe Migrations to be great food for thought. This talk was a very practical and solid example of what can go wrong in a live database migration, and of the tool Cars Commerce uses to prevent it. Over the span of my career, I have written many database migrations, from small startup projects altering a table with only a few hundred rows to large-scale projects where tables had millions of rows.

    My team sometimes discussed strategies for altering existing tables, but most of the time we would just go ahead and make the changes. Listening to Joe, I learned about things that could have gone really wrong and that there is a systematic way of mitigating those risks.

    Key Takeaways

    All in all, ElixirConf US was a great conference, packed with talks about experiences and challenges, new and upcoming technologies, and growing the community. There was, of course, a surge in AI-related talks, ranging from early-adopter experiences to the future of Elixir with AI.

    ElixirConf US 2025: Highlights from my first ElixirConf

    I found that the main theme running through the conference was the growth and sustainability of the Elixir community. ElixirConf is well worth attending, whether you are just starting out or already an experienced developer.



      ProcessOne: Spotify’s Direct Messaging Gambit

      news.movim.eu / PlanetJabber • 3 September • 5 minutes

    Spotify’s Direct Messaging Gambit

    Last week, Spotify quietly launched direct messaging across its platform in selected areas, allowing users to share tracks and playlists through private conversations within the app. The feature was rolled out with minimal fanfare but significant media coverage, positioning itself as a complement to existing sharing mechanisms through Facebook, WhatsApp, and TikTok.

    When I read this, I immediately wondered why they were bothering. We do not especially lack communication channels these days. So, let’s take a step back and examine what Spotify is actually trying to accomplish here.

    The Strange Case of Another Messaging App

    We already have too many messaging apps to choose from, on desktop and mobile alike. Before I initiate a conversation with someone I do not often chat with, I find myself trying to remember which messaging platform they prefer. So, adding some sort of real-time messaging and live interaction features to an app can bring value, but it has to serve a purpose and respond to real user needs.

    In that context, Spotify’s decision to roll out direct messaging support feels odd. Users can already share music through established platforms where their friends actually are. They can post discoveries on social media, send links through WhatsApp, or create collaborative playlists. Why would anyone choose to message someone specifically within Spotify when they’re already connected elsewhere?

    The problem is that Spotify has failed to make a compelling argument for why users should chat with friends through yet another messaging system, even if it is to talk about music. Launching a special-purpose communication service is risky. When Apple attempted to build Ping, a social network for music fans built into iTunes, it failed spectacularly. Spotify’s own social experiments haven’t fared much better. Remember Greenroom, their audio-focused social platform that quietly disappeared?

    This initiative becomes even more puzzling when we consider Spotify’s own history. The company built its initial viral growth through Facebook integration, leveraging social connections to drive adoption.

    And now, seemingly, they are trying to reclaim that social layer for themselves?

    What’s Really Happening Under the Hood

    The technical implementation reveals interesting choices. According to available reports, the messaging system relies on a RESTful API over HTTPS with TLS 1.3 encryption and JSON Web Tokens for session authentication. Notably absent? End-to-end encryption.

    And this absence tells us that the feature is not yet conceived as a standard messaging service, but simply as an alternative way to share favorite tracks and discuss them, and possibly a move to reduce the amount of data exposed to other social networks and messaging apps.
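    As a rough illustration of that design (every endpoint, field, and claim below is hypothetical; Spotify has not published this API), the sketch shows why a JWT-over-TLS setup authenticates the session but leaves message content fully readable by the operator, unlike end-to-end encryption:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # hypothetical server-held key


def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWT segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_session_jwt(user_id: str) -> str:
    # HS256 JWT: header.payload.signature. The payload is only
    # base64-encoded, not encrypted -- anyone holding the token
    # (and of course the server) can read the claims.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"sub": user_id, "iat": int(time.time())}).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def share_track(token: str, to_user: str, track_uri: str) -> dict:
    # Hypothetical "send message" request. TLS protects it in transit,
    # but the plaintext body is visible to the platform operator.
    return {
        "method": "POST",
        "path": "/v1/messages",  # hypothetical endpoint
        "headers": {"Authorization": f"Bearer {token}"},
        "body": {"to": to_user, "text": f"Check this out: {track_uri}"},
    }
```

    Decoding the middle JWT segment with plain base64 recovers the claims, which is exactly why transport-layer encryption alone says nothing about who can read the messages at rest.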

    The Data Intelligence Play

    Messaging features can provide enormous value when you have a strong daily user base, but only when they address a clear user need. Spotify’s messaging doesn’t seem designed for users. It feels designed for Spotify’s recommendation algorithms.

    Every shared track, every reaction, every conversation thread becomes a new data point in Spotify’s machine learning models. Who shares what with whom? Which songs generate discussion? How do musical tastes spread through social networks? This intelligence is pure gold for a recommendation engine that already struggles to compete with YouTube Music’s discovery capabilities.

    Private messaging amplifies this data collection while keeping the intelligence proprietary, unlike public social sharing, where competitors might also benefit.

    Strategic Confusion or Calculated Move?

    So, is this really all about data collection and control?

    This is where Spotify’s European identity becomes relevant. As a Swedish company competing against American tech giants, there may be strategic value in reducing dependence on US- or Chinese-controlled social platforms. Every track shared through WhatsApp (Meta) or TikTok (ByteDance) represents data flowing to potential competitors, or to partners with their own agendas.

    Building an internal messaging system allows Spotify to capture that social intelligence directly while reducing what they share with other platforms. From a data sovereignty perspective, this makes sense, especially for a European player navigating an increasingly fragmented global tech landscape.

    And they may hope at some point to play a larger role in messaging platforms in general, as Europe sorely lacks a large player in the messaging field. It may be a play to test the waters.

    As we help companies reclaim their independence by building their own messaging services, this goal resonates strongly with us. However, building a successful messaging platform requires creating momentum around the service to attract enough users and traffic. It cannot be launched half-heartedly.

    The Missing Strategic Vision

    The fundamental problem isn’t technical. It’s strategic clarity. Spotify has a recommendation engine that could benefit from social signals, a creator platform focused on podcasts and videos, and a user base that already shares music socially. The ingredients for a compelling set of social features exist.

    But launching messaging without addressing the basic question of “Why would I message someone here instead of where we already talk?” suggests a feature developed in isolation from user needs. It resembles the countless platform features that launch with media coverage but die quietly when adoption numbers disappoint.

    What would make this feature compelling? Integration with Spotify’s creator tools, perhaps allowing artists to connect directly with fans. Or live collaborative listening sessions where messaging enhances shared musical experiences. Or leveraging Spotify’s podcast ecosystem to enable discussion around episodes.

    Instead, we get generic messaging that competes with platforms where users’ friends actually are.

    So, what’s Spotify’s real goal?

    I see two possible options here: a pessimistic and a more optimistic one.

    Perhaps the most interesting aspect of this launch is what it reveals about Spotify’s growth concerns. A mature platform doesn’t typically add generic social features unless it’s worried about engagement metrics or looking for new growth vectors.

    They may want users to spend more time in the app’s interface, rather than passively controlling playback through the player widget exposed by the mobile operating system.

    The timing suggests Spotify sees either limited growth ahead or a competitive threat that requires better user data. Given the AI revolution in music generation and the ongoing battles over royalty structures, capturing more nuanced data about user preferences and social music behavior could be crucial for maintaining relevance.

    But there’s a more optimistic reading: this could represent a European tech company trying to assert more independence from American social platforms. In a world where data is power, controlling your own social graph has strategic value.

    The execution, however, suggests Spotify hasn’t quite figured out how to articulate this vision to users. Until they do, this messaging feature risks joining the graveyard of platform additions that made sense to product managers but never found their audience.

    In a world already oversaturated with communication channels, every new messaging system needs to answer a simple question: Why here instead of everywhere else users are already talking? Spotify hasn’t answered that question yet.


      Mathieu Pasquet: slixmpp v1.11

      news.movim.eu / PlanetJabber • 2 September • 1 minute

    This new version includes a few new XEP plugins as well as fixes, notably for some leftover issues in our Rust JID code and for a bug that caused problems in Home Assistant.

    Thanks to everyone who contributed with code, issues, suggestions, and reviews!

    CI and build

    Nicoco put in a lot of work to get all possible wheels built in CI. We now have manylinux and musl builds of everything doable within Codeberg, published to the Codeberg PyPI repo and on pypi.org as well.

    Python versions under 3.11 are no longer tested, and starting from the next version, wheels will no longer be provided for them (no clue if they work; I just forgot to remove them).

    New plugins

    Fixes

    • Issues in the JID comparison code that did not return the expected value when comparing with an empty JID
    • Removal of another blocking call in the hot path when creating an XMLStream object
    • In the Blocking ( XEP-0191 ) plugin, allow the presence of Spam Reporting ( XEP-0377 ) elements
    • In the Privileged Components ( XEP-0356 ) plugin, allow access to the inner error stanza
    • In the SIMS ( XEP-0385 ) plugin, allow multiple sources
    • In the Multi-User Chat ( XEP-0045 ) plugin, fix a bug when declining an invitation
    • Doc fixes: add dependencies to the build so that specific pages get built
    • Crash early in Iq.send() if the timeout is not provided as an integer or float
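    The first fix above is easy to picture with a toy JID-like class (an illustrative sketch only, not slixmpp’s actual Rust-backed implementation): equality must treat an empty JID as equal only to another empty JID, and comparisons against unrelated types should return NotImplemented rather than an arbitrary result:

```python
class ToyJID:
    """Minimal JID-like value object: [local@]domain[/resource]."""

    def __init__(self, jid: str = ""):
        self.full = jid

    def __eq__(self, other):
        # Comparing against a string coerces it to a ToyJID first;
        # comparing against anything else yields NotImplemented
        # instead of a silently wrong True/False.
        if isinstance(other, str):
            other = ToyJID(other)
        if not isinstance(other, ToyJID):
            return NotImplemented
        # An empty JID equals only another empty JID.
        return self.full == other.full

    def __bool__(self):
        # An empty JID is falsy, matching the intuition that it
        # represents "no address".
        return bool(self.full)
```

    The buggy behaviour described in the changelog is what happens when such an `__eq__` takes a shortcut and an empty JID ends up comparing equal (or unequal) to the wrong things.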

    Links

    You can find the new release on Codeberg, PyPI, or, in a short while, in the distributions that package it.

    Previous version: 1.10.0.


      Erlang Solutions: Healthcare Blog Round-Up

      news.movim.eu / PlanetJabber • 29 August • 3 minutes

    Healthcare is moving quickly, and technology is playing a big part in that shift. The way information is collected, the way patients are cared for, and the way hospitals run are all changing.

    Over the past year, our team has written about some of the most important trends shaping the future of healthcare. In this round-up, we bring together three of those articles: remote patient monitoring, big data, and generative AI.

    Maybe you have been following along, or maybe one or two of these slipped past you. Either way, this is a chance to catch up on the ideas that are influencing healthcare right now.

    What is Remote Patient Monitoring?

    Remote Patient Monitoring (RPM) is already changing everyday care. With the help of connected devices, clinicians can see what’s happening with patients at home, step in earlier when something changes, and prevent unnecessary hospital stays.

    What is Remote Patient Monitoring?

    What is Remote Patient Monitoring? sets out how RPM works, the devices that make it possible (from blood pressure monitors to smart inhalers), and why it is now a priority for healthcare leaders. The article shows how RPM is transforming chronic disease management, post-operative recovery, elderly care, and clinical trials, while driving down system costs by up to 40 per cent.

    Digital care models like virtual wards are no longer experiments. They are reshaping the NHS and health systems worldwide, and this guide explains why.

    Understanding Big Data in Healthcare

    Healthcare produces enormous amounts of information every day. Patient records, medical scans, wearables, and research all add to the mix, and the real challenge is turning it into something useful. Done well, big data can improve care, reduce costs, and even speed up medical breakthroughs.

    Big Data in Healthcare

    “Understanding Big Data in Healthcare” breaks down the fundamentals, including the three V’s that define it: volume, velocity, and variety. You’ll see how providers are already using data to personalise treatments, predict health trends, and cut readmissions by up to 20 per cent. The article also shows how it can shorten clinical trial times by 30 per cent and reduce costs by as much as 50 per cent.

    But with opportunity comes risk. The average cost of a healthcare data breach now stands at $9.77 million, making security a top priority for every provider. The article looks at the biggest threats, the regulations shaping data use, and how technologies like Erlang, Elixir, and SAFE can help keep information secure and systems reliable.

    How Generative AI is Transforming Healthcare

    With adoption already valued at more than $1.6 billion, generative AI is fast becoming one of the biggest drivers of change in healthcare. The global AI in healthcare market is projected to hit $45.2 billion by 2026, reflecting the scale of its potential to improve patient outcomes, support clinicians, and make systems more efficient.

    How Generative AI is Transforming Healthcare

    In “How Generative AI is Transforming Healthcare”, we look at how it differs from traditional AI and why its flexibility makes it such a powerful tool for the industry. The article explores real-world applications such as personalised treatment plans, predictive analytics, enhanced diagnostics, virtual health assistants, and even accelerating drug discovery.

    It also considers the future of AI in healthcare, including the need to address challenges around privacy, regulation, and patient trust. With the right planning and technologies like Elixir supporting scalable and reliable systems, generative AI could help shape a new era of patient-centred care.

    To conclude

    That wraps up our latest healthcare round-up. We hope this guide helps you cut through the noise and get a clear picture of the trends that matter most right now.

    If something here has sparked your interest, whether it’s the possibilities of remote monitoring, the power of data, or the promise of generative AI, we would love to keep the conversation going. So get in touch .

    Here’s to smarter systems, healthier outcomes, and more confident decision-making in healthcare.



      ProcessOne: ejabberd 25.08

      news.movim.eu / PlanetJabber • 22 August • 8 minutes

    🚀 ejabberd 25.08

    Release Highlights:

    This release includes support for Hydra rooms in our Matrix gateway, which fixes high-severity protocol vulnerabilities.

    If you are upgrading from a previous version, there are no changes in SQL schemas, configuration, API commands or hooks.

    Other contents:

    Below is a detailed breakdown of the improvements and enhancements:

    Improvements in Matrix gateway

    The ejabberd Matrix gateway now supports Hydra rooms (Matrix room version 12). This fixes some high-severity protocol vulnerabilities. The state resolution has been partially rewritten in our gateway.

    With Hydra rooms, a double colon is used to separate the Matrix server from the room ID in the JID.
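    As a sketch of what that separator means for parsing (only the double colon itself is documented in these notes; the exact layout of the localpart and the order of the two parts are assumptions here):

```python
def split_hydra_localpart(localpart: str):
    # Split a Hydra-style localpart on the first "::" into its two
    # components. Which side carries the Matrix server name and which
    # carries the room ID is an assumption in this sketch; only the
    # "::" separator comes from the release notes.
    if "::" not in localpart:
        raise ValueError("not a Hydra-style localpart")
    server, room_id = localpart.split("::", 1)
    return server, room_id
```

    A plain (pre-Hydra) localpart without the separator is rejected, so callers can fall back to the old addressing scheme.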

    Other changes to the matrix gateway:

    • The new option notary_servers of mod_matrix_gw can now be used to set a list of notary servers.
    • Add leave_timeout option to mod_matrix_gw (#4386)
    • Don't send empty direct Matrix messages (thanks to snoopcatt) (#4420)

    Fixed ACME in Erlang/OTP 28.0.2

    The ejabberd 25.07 release notes mentioned that Erlang/OTP 28.0.1 was not yet fully supported because there was a problem with ACME support.

    Good news: this problem with ACME is fixed and verified to work when using Erlang/OTP 28.0.2, the latest p1_acme library, and ejabberd 25.08.

    If you are playing with ejabberd and Erlang/OTP 28, please report any problem you find. If you are running ejabberd in production, it is better to stick with Erlang/OTP 27.3, which is the version used in the installers and container images.

    New mod_providers to serve XMPP Providers file

    mod_providers is a new module that makes it easy to serve XMPP Providers files.

    The standard way to perform this task is to first generate the Provider File, store it on disk under the proper name, and then serve the file using an HTTP server or mod_http_fileserver, repeating this for each vhost.

    Now this can be replaced with mod_providers, which automatically sets some values according to your configuration. Try configuring ejabberd like:

    listen:
      -
        port: 443
        module: ejabberd_http
        tls: true
        request_handlers:
          /.well-known/xmpp-provider-v2.json: mod_providers
    
    modules:
      mod_providers: {}
    

    Check the URL https://localhost:443/.well-known/xmpp-provider-v2.json and fine-tune the result by setting a few mod_providers options.

    Improved Unicode support in configuration

    When using non-latin characters in a vhost served by ejabberd, you can write it in the configuration file as Unicode or in its IDNA/punycode form. For example:

    hosts:
      - localhost1
      - locälhost2
      - xn--loclhost4-x2a
      - 日本語
    
    host_config:
      "locälhost2":
        modules:
          mod_disco: {}
          mod_muc:
            host: "conference3.@HOST@"
      "xn--loclhost4-x2a":
        modules:
          mod_disco: {}
          mod_muc:
            host: "conference4.@HOST@"
    

    This raises a problem in mod_http_upload if the option put_url contains the @HOST@ keyword. In that case, please use the new predefined keyword HOST_URL_ENCODE .

    This change was also applied to ejabberd.yml.example .
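    The unicode/punycode equivalence shown in the hosts list above is plain IDNA encoding, which you can check with Python's standard library (note that Python's built-in idna codec implements the older IDNA 2003 rules, so results may differ from ejabberd's for some scripts):

```python
def vhost_to_idna(host: str) -> str:
    # Unicode vhost name -> ASCII-compatible (punycode) form,
    # the same transformation shown in the hosts list above.
    return host.encode("idna").decode("ascii")


def idna_to_vhost(ace: str) -> str:
    # Inverse direction: punycode form back to the unicode vhost name.
    return ace.encode("ascii").decode("idna")
```

    For example, `idna_to_vhost("xn--loclhost4-x2a")` recovers `locälhost4`, matching the pair shown in the configuration above; plain ASCII names like `localhost1` pass through unchanged.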

    New option conversejs_plugins to enable OMEMO

    mod_conversejs gets a new option conversejs_plugins that points to additional local files to include as scripts in the homepage.

    Right now this is useful to enable OMEMO encryption .

    Please make sure those files are available in the path specified in conversejs_resources option, in subdirectory plugins/ . For example, copy a file to path /home/ejabberd/conversejs-x.y.z/package/dist/plugins/libsignal-protocol.min.js and then configure like:

    modules:
      mod_conversejs:
        conversejs_resources: "/home/ejabberd/conversejs-x.y.z/package/dist"
        conversejs_plugins: ["libsignal-protocol.min.js"]
    

    If you are using the public Converse client, then you can set "libsignal" , which gets replaced with the URL of the public library. For example:

    modules:
      mod_conversejs:
        conversejs_plugins: ["libsignal"]
        websocket_url: "ws://@HOST@:5280/websocket"
    

    Easier erlang node name change with mnesia_change

    ejabberd uses the distributed Mnesia database by default. Being distributed, Mnesia enforces consistency of its files, so it stores the Erlang node name, which may include the hostname of the computer.

    When the Erlang node name changes (which may happen when the computer name changes, or when ejabberd is moved to another computer), Mnesia refuses to start with an error message like this:

    2025-08-21 11:06:31.831594+02:00 [critical]
      Erlang node name mismatch:
      I'm running in node [ejabberd2@localhost],
      but the mnesia database is owned by [ejabberd@localhost]
    2025-08-21 11:06:31.831782+02:00 [critical]
      Either set ERLANG_NODE in ejabberdctl.cfg
      or change node name in Mnesia
    

    To change the node name stored in the Mnesia database, it was previously necessary to follow a 10-step tutorial that starts ejabberd a couple of times and runs the mnesia_change_nodename API command.

    Well, now that whole tutorial is implemented in a single command in the ejabberdctl command line script. When Mnesia refuses to start due to an Erlang node name change, it now mentions the new solution:

    $ echo "ERLANG_NODE=ejabberd2@localhost" >>_build/relive/conf/ejabberdctl.cfg
    
    $ ejabberdctl live
    2025-08-21 11:06:31.831594+02:00 [critical]
      Erlang node name mismatch:
      I'm running in node [ejabberd2@localhost],
      but the mnesia database is owned by [ejabberd@localhost]
    2025-08-21 11:06:31.831782+02:00 [critical]
      Either set ERLANG_NODE in ejabberdctl.cfg
      or change node name in Mnesia by running:
      ejabberdctl mnesia_change ejabberd@localhost
    

    Let's use the new command to change the Erlang node name stored in the Mnesia database:

    $ ejabberdctl mnesia_change ejabberd@localhost
    
    ==> This changes your mnesia database from node name 'ejabberd@localhost' to 'ejabberd2@localhost'
    
    ...
    
    ==> Finished, now you can start ejabberd normally
    

    Great! Now ejabberd can start correctly:

    $ ejabberdctl live
    ...
    2025-08-21 11:18:52.154718+02:00 [info]
      ejabberd 25.07.51 is started in the node ejabberd2@localhost in 1.77s
    

    Notice that the mnesia_change command must start and stop ejabberd a couple of times. For that reason, it cannot be implemented as an API command. Instead, it is implemented directly in the ejabberdctl command line script.

    Colorized interactive log

    When ejabberd starts with an Erlang shell using Mix, it prints notable lines in color: orange for warnings and red for errors. This helps to spot those lines when reading the log interactively.

    Now this is also supported when using Rebar3. To test it, start ejabberd with either:

    • ejabberdctl live : to start interactive mode with erlang shell
    • ejabberdctl foreground : to start in server mode with attached log output

    You will see log lines colorized with:

    • green+white for informative log messages
    • grey for debug
    • yellow for warnings
    • red for errors
    • magenta for messages coming from other Erlang libraries (xmpp, OTP library), not ejabberd itself
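    For illustration only (ejabberd's logger is implemented in Erlang; this sketch merely shows the ANSI escape mechanism that this kind of terminal colorizing relies on), the palette above maps to escape codes like so:

```python
# ANSI SGR escape codes roughly matching the palette described above.
ANSI = {
    "info": "\033[32m",     # green
    "debug": "\033[90m",    # grey
    "warning": "\033[33m",  # yellow
    "error": "\033[31m",    # red
    "library": "\033[35m",  # magenta, for non-ejabberd Erlang libraries
}
RESET = "\033[0m"


def colorize(level: str, line: str) -> str:
    # Wrap the line in the level's color code; unknown levels pass
    # through uncolored, as a plain terminal would print them.
    code = ANSI.get(level)
    return f"{code}{line}{RESET}" if code else line
```

    Printing `colorize("warning", "...")` to an ANSI-capable terminal renders the line in yellow, which is exactly the effect described for `ejabberdctl live` and `ejabberdctl foreground`.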

    Document API Tags in modules

    Many ejabberd modules implement their own API commands, and the documentation of those modules now mentions which tags contain their commands.

    See for example at the end of modules mod_muc_admin , mod_private or mod_antispam .

    Unfortunately, many early API commands were implemented in mod_admin_extra, which includes commands related to account management, vcard, roster, private, ... and consequently those are not mentioned in the documentation of their corresponding modules.

    Acknowledgments

    We would like to thank the contributions to the source code provided for this release by:

    • mod_matrix_gw: Don't send empty direct Matrix messages (thanks to snoopcatt) (#4420)
    • Holger Weiß for improvements in the installers, HTTP file upload and mod_register
    • marc0s for the improvement in MUC

    And also to all the people contributing in the ejabberd chatroom, issue tracker...

    Improvements in ejabberd Business Edition

    Customers of the ejabberd Business Edition , in addition to all those improvements and bugfixes, also get the following changes:

    New module mod_dedup

    This module removes duplicate read receipts sent by concurrent sessions of a single user, preventing both the delivery of duplicates and their storage in the archive.

    Limits in mod_unread queries

    Queries issued to mod_unread can now declare a maximum number and age of returned results. This can also be tweaked with the module's new options.

    ChangeLog

    This is a more complete list of changes in this ejabberd release:

    API Commands

    • ban_account : Run sm_kick_user event when kicking account ( #4415 )
    • ban_account : No need to change password ( #4415 )
    • mnesia_change : New command in ejabberdctl script that helps changing the mnesia node name

    Configuration

    • Rename auth_password_types_hidden_in_scram1 option to auth_password_types_hidden_in_sasl1
    • econf : If a host in configuration is encoded IDNA, decode it ( #3519 )
    • ejabberd_config : New predefined keyword HOST_URL_ENCODE
    • ejabberd.yml.example : Use HOST_URL_ENCODE to handle case when vhost is non-latin1
    • mod_conversejs : Add option conversejs_plugins ( #4413 )
    • mod_matrix_gw : Add leave_timeout option ( #4386 )

    Documentation and Tests

    • COMPILE.md : Mention dependencies and add link to Docs ( #4431 )
    • ejabberd_doc : Document commands tags for modules
    • CI: bump XMPP-Interop-Testing/xmpp-interop-tests-action ( #4425 )
    • Runtime: Raise the minimum Erlang tested to Erlang/OTP 24

    Installers and Container

    • Bump Erlang/OTP version to 27.3.4.2
    • Bump OpenSSL version to 3.5.2
    • make-binaries : Disable Linux-PAM's logind support

    Core and Modules

    • Bump p1_acme to fix 'AttributePKCS-10' and OTP 28 ( processone/p1_acme#4 )
    • Prevent loops in xml_compress:decode with corrupted data
    • ejabberd_auth_mnesia : Fix issue with filtering duplicates in get_users()
    • ejabberd_listener : Add secret in temporary unix domain socket path ( #4422 )
    • ejabberd_listener : Log error when cannot set definitive unix socket ( #4422 )
    • ejabberd_listener : Try to create provisional socket in final directory ( #4422 )
    • ejabberd_logger : Print log lines colorized in console when using rebar3
    • mod_conversejs : Ensure assets_path ends in / as required by Converse ( #4414 )
    • mod_conversejs : Ensure plugins URL is separated with / ( #4413 )
    • mod_http_upload : Encode URLs into IDNA when showing to XMPP client ( #3519 )
    • mod_matrix_gw : Add support for null values in is_canonical_json ( #4421 )
    • mod_matrix_gw : Don't send empty direct Matrix messages ( #4420 )
    • mod_matrix_gw : Matrix gateway updates
    • mod_muc : Report db failures when restoring rooms
    • mod_muc : Unsubscribe users from members-only rooms when expelled ( #4412 )
    • mod_providers : New module to serve easily XMPP Providers files
    • mod_register : Don't duplicate welcome subject and message
    • mod_scram_upgrade : Fix format of passwords updates
    • mod_scram_upgrade : Only offer upgrades to methods that aren't already stored

    Full Changelog

    https://github.com/processone/ejabberd/compare/25.07...25.08

    ejabberd 25.08 download & feedback

    As usual, the release is tagged in the Git source code repository on GitHub .

    The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity .

    For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags .

    The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs . The alternative ejabberd container image is available in ghcr.io/processone/ejabberd .

    If you think you've found a bug, please search or file a bug report on GitHub Issues .


      Erlang Solutions: MongooseIM 6.4: Simplified and Unified

      news.movim.eu / PlanetJabber • 21 August • 7 minutes

    MongooseIM is a scalable and efficient instant messaging server. With the latest release 6.4.0, it has become more powerful yet easier to use and maintain. Thanks to the internal unification of listeners and connection handling, the configuration is easier and more intuitive, while numerous new options are supported.

    New features include support for TLS 1.3 with optional channel binding for improved security, single round-trip authentication with FAST (which can be even faster with 0-RTT or more secure with channel binding), reduced start-up time and many other improvements and bug fixes. Let’s take a deeper look at some of the most important improvements.

    Reworked XMPP interfaces

    MongooseIM uses the open, proven, extensible and constantly evolving XMPP protocol, which is an excellent choice when it comes to instant messaging. To communicate with other XMPP entities, the server uses three main types of interfaces, listed in the table below.

    | XMPP Interface | Purpose | Connection type | Reworked in version |
    |---|---|---|---|
    | C2S (client-to-server) | Accept connections from XMPP clients | inbound | 6.1.0 – 6.4.0 |
    | S2S (server-to-server) | Federate with other XMPP servers | inbound/outbound | 6.4.0 (latest) |
    | Component | Accept connections from external components | inbound | 6.4.0 (latest) |

    The C2S interface was reworked and improved already in version 6.1.0 (see the blog post ), making it more modern, organised and extensible. In version 6.4.0, this trend is continued by reworking the S2S and component interfaces while unifying the whole connection handling logic.

    Simplified, unified and more complete

    Connection accepting and handling was simplified and unified in multiple ways, allowing the addition of new features along the way. All connections are now accepted by ranch , a state-of-the-art socket acceptor pool for Erlang. Modern gen_statem behaviour is then used to handle open connections using state machines. These changes allowed for improved handling of various corner cases, removing unexpected limitations and mishandled error conditions. There is also improved separation of concerns, resulting in easier extensibility and configurability.
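The benefit of handling each connection as an explicit state machine can be illustrated with a minimal sketch. Python is used here purely for illustration; the actual implementation uses Erlang's gen_statem behaviour, and the state and event names below are hypothetical, not MongooseIM's:

```python
class ConnectionFSM:
    """Toy model of a connection handled as a state machine.

    Illustrative only: states, events, and transitions are invented for
    this sketch and do not mirror MongooseIM's gen_statem implementation.
    """

    TRANSITIONS = {
        "connecting": {"stream_start": "negotiating"},
        "negotiating": {"auth_success": "established", "auth_failure": "closed"},
        "established": {"stream_end": "closed"},
        "closed": {},
    }

    def __init__(self):
        self.state = "connecting"

    def handle(self, event):
        try:
            self.state = self.TRANSITIONS[self.state][event]
        except KeyError:
            # Unexpected events are rejected explicitly rather than
            # silently mishandled - one benefit of the FSM approach.
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        return self.state


fsm = ConnectionFSM()
fsm.handle("stream_start")
fsm.handle("auth_success")
print(fsm.state)  # established
```

Because every corner case must map to an explicit transition, states with no handler for an event fail loudly instead of drifting into an inconsistent connection state.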

    Another unified and improved system aspect is the TLS encryption . The legacy fast_tls library is now fully replaced with the Erlang implementation, resulting in much more straightforward configuration and implementation without a drop in performance (as evidenced by rigorous load tests). This change made it possible to fill in some gaps in functionality, such as the support for channel binding as required for TLS 1.3 (see RFC 9266 for details). Additionally, TLS is now supported for component connections. Moreover, CA certificates can be provided by the OS, reducing the need for manual certificate provisioning. As a result of these changes, your MongooseIM installation will be more secure and robust.

    All these changes are reflected in the TOML configuration file. As an example, the following snippet presents the configuration of S2S and component connections:

    [[listen.component]]
      port = 8190
      shaper = "fast"
      ip_address = "127.0.0.1"
      password = "secret"
      tls.certfile = "priv/ssl/cert.pem"
      tls.keyfile = "priv/ssl/key.pem"
    
    
    [[listen.s2s]]
      port = 5269
      shaper = "fast"
      tls.certfile = "priv/ssl/cert.pem"
      tls.keyfile = "priv/ssl/key.pem"
    
    
    [s2s]
      default_policy = "allow"
    
      [s2s.outgoing]
        port = 5269
        shaper = "fast"
        tls.certfile = "priv/ssl/cert.pem"
        tls.keyfile = "priv/ssl/key.pem"
    
    

    Compared with the previous version of MongooseIM, the following improvements stand out:

    • Components can benefit from TLS connections.
    • S2S options are separate for the incoming ( listen.s2s ) and outgoing ( s2s.outgoing ) connections. Common options are placed directly in the s2s section (e.g. default_policy ).
    • Traffic shapers are configured the same way for all types of connections, and are always referenced by their names.
    • S2S outgoing connections can have traffic shaping enabled.
    • All TLS options follow the same pattern throughout the configuration file. This also affects multiple options that were omitted from this example for simplicity.

    Another improved layer is the instrumentation . Most changes affect XMPP traffic events – they are summarised in the table below:

    | Events in version 6.3.* | Events in version 6.4.0 | Labels | Measurements |
    |---|---|---|---|
    | c2s_element_in, c2s_element_out | xmpp_element_in, xmpp_element_out | connection_type (c2s, s2s, component), host_type (if known) | count, stanza_count, message_count, iq_count(...) |
    | c2s_xmpp_element_size_in, c2s_xmpp_element_size_out, s2s_xmpp_element_size_in, s2s_xmpp_element_size_out, component_xmpp_element_size_in, component_xmpp_element_size_out | xmpp_element_in, xmpp_element_out | connection_type (c2s, s2s, component), host_type (if known) | byte_size |
    | c2s_tcp_data_in, c2s_tcp_data_out, s2s_tcp_data_in, s2s_tcp_data_out, component_tcp_data_in, component_tcp_data_out | tcp_data_in, tcp_data_out | connection_type (c2s, s2s, component) | byte_size |
    | c2s_tls_data_in, c2s_tls_data_out, s2s_tls_data_in, s2s_tls_data_out, component_tls_data_in, component_tls_data_out | tls_data_in, tls_data_out | connection_type (c2s, s2s, component) | byte_size |

    We can see that there are fewer event names now, and that they are more concise, making them easier to remember and reason about. This was made possible by labels such as connection_type and host_type . Event coverage has actually improved as well: the element_in and element_out events now cover s2s and component connections. The events are also emitted more consistently. For incoming data, exactly one event is emitted as soon as an XML element (most often a stanza) is parsed; for outgoing data, exactly one event is emitted just before an XML element is sent out. As a result, the metrics are consistent with actual network traffic.

    Another addition is the auth_failed event for s2s and component connections. For more information, such as the translation of event names and measurements to actual metric names, see the documentation on metrics . Our blog post about release 6.3.0 can also give you more details about the instrumentation layer and its integration with Prometheus.
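The effect of replacing per-connection-type event names with labels can be sketched in a few lines. This is a simplified illustration, not MongooseIM's instrumentation API; the `emit` helper below is a hypothetical stand-in:

```python
# Old scheme: one event name per connection type and direction,
# so a single concept (TCP traffic) needed six distinct names.
old_events = [f"{ct}_tcp_data_{d}"
              for ct in ("c2s", "s2s", "component")
              for d in ("in", "out")]

def emit(name, labels, measurements):
    """Hypothetical stand-in for an instrumentation call."""
    return {"name": name, "labels": labels, "measurements": measurements}

# New scheme: two names, with the connection type carried as a label.
event = emit("tcp_data_in",
             {"connection_type": "s2s"},
             {"byte_size": 1024})

print(len(old_events))   # 6 distinct names before
print(event["name"])     # tcp_data_in
```

Queries and dashboards can then aggregate over all connection types or filter by the `connection_type` label, instead of enumerating six metric names.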

    SASL 2, Bind 2, FAST

    Over the past few releases, we have been implementing extensions that speed up XMPP stream negotiation and authentication:

    • XEP-0388 : Extensible SASL Profile (SASL2) allows a client to authenticate, resume its session and more in one round-trip.
    • XEP-0386 : Bind 2 allows a client to bind the resource and enable selected extensions (SM, carbons, CSI) as part of SASL2.
    • XEP-0484 : FAST allows a client to authenticate with a token as a part of SASL2. According to the specification, this is a token-based method for streamlining authentication in XMPP, enabling fully authenticated stream establishment within a single round-trip.

    As an example, let’s assume that a user with the JID alice@localhost already has a FAST token obtained during previous authentication. When opening a new connection, the client sends the following:

    <authenticate mechanism='HT-SHA-256-NONE' xmlns='urn:xmpp:sasl:2'>
      <initial-response>YWxpY0VfY2xpZW50X2F1dGhlbnRpY2F0ZXNfdXNpbmdfZmFzdF80MDYA5u2CpST(...)</initial-response>
      <bind xmlns='urn:xmpp:bind:0'/>
      <user-agent id='d4565fa7-4d72-4749-b3d3-740edbf87770'/>
    </authenticate>
    
    

    As a response, it receives:

    <success xmlns='urn:xmpp:sasl:2'>
      <authorization-identifier>alice@localhost/1750-684128-573793-695021f052e299fe</authorization-identifier>
      <bound xmlns='urn:xmpp:bind:0'/>
    </success>
    
    

    This way, the whole connection and authentication process is performed in a single round-trip.
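The core of the HT (Hashed Token) family of mechanisms used by FAST is that the client proves possession of the token via an HMAC rather than sending the token itself. The sketch below is a rough simplification: the exact message layout, channel-binding handling, and naming are specified in XEP-0484 and the Hashed Token SASL Mechanism draft, and the framing here is an assumption for illustration:

```python
import hashlib
import hmac

def ht_initial_response(username: str, token: bytes) -> bytes:
    """Sketch of an HT-SHA-256-NONE style initial response.

    Simplified: the client sends its username together with
    HMAC-SHA-256(token, "Initiator") to prove token possession,
    without transmitting the token itself. Exact framing details
    are omitted; consult XEP-0484 for the real mechanism.
    """
    proof = hmac.new(token, b"Initiator", hashlib.sha256).digest()
    return username.encode() + b"\x00" + proof

resp = ht_initial_response("alice", b"example-fast-token")
print(len(resp))  # 38 = 5 (username) + 1 (NUL) + 32 (SHA-256 HMAC)
```

Because the HMAC is deterministic for a given token, the server can verify it against its stored token and complete authentication without any extra challenge round-trip.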

    FAST features supported in MongooseIM

    In version 6.4, additional advanced features of FAST are supported. One of them is token rotation : the server invalidates tokens after a configurable period, and provides a new token on reconnection if the current one is close to expiry (this period is also configurable). Additionally, a client can request immediate token invalidation – with or without requesting a new one.
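The rotation logic described above can be sketched as follows. Class and parameter names here are hypothetical, not MongooseIM configuration options; see the mod_fast_auth_token documentation for the real settings:

```python
import time

class FastTokenStore:
    """Hypothetical sketch of FAST token rotation logic."""

    def __init__(self, validity_period=3600, rotate_before_expiry=600):
        self.validity_period = validity_period            # token lifetime (s)
        self.rotate_before_expiry = rotate_before_expiry  # renewal window (s)
        self.tokens = {}  # user -> expiry timestamp

    def issue(self, user, now=None):
        now = time.time() if now is None else now
        self.tokens[user] = now + self.validity_period
        return self.tokens[user]

    def on_reconnect(self, user, now=None):
        """Return True if a fresh token should be handed out on reconnect."""
        now = time.time() if now is None else now
        expiry = self.tokens.get(user)
        if expiry is None or now >= expiry:
            return False  # no valid token: full authentication required
        return (expiry - now) <= self.rotate_before_expiry

store = FastTokenStore()
store.issue("alice", now=0)
print(store.on_reconnect("alice", now=100))    # False: far from expiry
print(store.on_reconnect("alice", now=3100))   # True: within renewal window
```

The two periods correspond to the configurable values mentioned above: how long a token stays valid, and how close to expiry a reconnecting client is handed a replacement.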

    Another addition is the support for 0-RTT (zero round-trip time) data in TLS 1.3 (see RFC 8446, section 2.3 ). The FAST token can be provided by the client during a subsequent handshake after reconnection, meaning that there is no additional round-trip needed after the handshake – hence the name “0-RTT”. A different extension is the tls-exporter channel binding in TLS 1.3 (see RFC 9266 for details) – it can be used with FAST, resulting in the channel binding data being passed with the FAST token, increasing the security. Note that currently it cannot be used with 0-RTT data. See the documentation for mod_fast_auth_token and the Hashed Token SASL Mechanism specification for more information.

    Summary

    MongooseIM 6.4.0 offers numerous improvements in various areas – from configuration to instrumentation. New features such as FAST tokens make it easier for your clients and services to connect, while TLS 1.3 makes it more protected against malicious attacks. You can discover much more in the release notes .

    Don’t hesitate to visit our product page , try it online and contact us if you are considering using it in your business – we will be happy to help you install, configure, maintain, and, if necessary, customise it to fit your particular needs.

    The post MongooseIM 6.4: Simplified and Unified appeared first on Erlang Solutions .


      Erlang Solutions: Supporting the BEAM Community with Free CI/CD Security Audits

      news.movim.eu / PlanetJabber • 31 July • 3 minutes

    At Erlang Solutions, our support for the BEAM community is long-standing and built into everything we do. From contributing to open-source tools and sponsoring events to improving security and shaping ecosystem standards, we’re proud to play an active role in helping the BEAM ecosystem grow and thrive.

    One way we’re putting that support into action is by offering free CI/CD-based security audits for open-source Erlang and Elixir projects. These audits help maintainers identify and fix vulnerabilities early, integrated directly into modern development workflows.

    What the Free CI/CD Audits Offer

    Our free CI/CD security audits for open source projects are powered by SAFE (Security Audit for Erlang/Elixir) , a dedicated solution built to detect vulnerabilities in Erlang and Elixir code that could leave systems exposed to cyber attacks.

    The CI/CD version of SAFE integrates directly into your development pipeline (e.g. GitHub Actions, CircleCI, Jenkins), enabling you to scan for vulnerabilities automatically every time code is committed or updated. This helps projects:

    • Detect issues early, before they reach production
    • Maintain a more secure and resilient codebase
    • Improve visibility of risks within day-to-day workflows

    Results are delivered quickly, typically within a few minutes. For larger codebases, it may take up to 20–30 minutes. The feedback is designed to be clear, actionable, and minimally disruptive.

    Open source maintainers can request a free license by emailing safe@erlang-solutions.com and including a link to their repository. Once approved, we provide a SAFE license for one month or up to a year, depending on the project’s needs, at no cost.

    For more information, read our full terms and conditions .

    Expert-Led Audits for Production BEAM Systems

    SAFE is just one way we help teams build secure, resilient systems. We also offer hands-on audit services for production systems, including:

    • Code reviews focused on clarity, maintainability, and best practices
    • Architecture assessments to help ensure systems are scalable and fault-tolerant
    • Performance audits to identify bottlenecks and optimise how systems behave under load

    These services are delivered by our in-house experts and are a great fit for teams working on complex or business-critical systems. They also pair well with SAFE for a full picture of how systems are running and how they could be even better.

    Of course, supporting the BEAM community goes beyond security and audits. Our involvement spans education, events, and long-term ecosystem development.

    “We’re proud to support the BEAM ecosystem not just with code, but with the infrastructure and insights that help it grow stronger,” says Zoltan Literati, Business Unit Leader at Erlang Solutions Hungary.

    “Our free audits offer real, practical value to maintainers working in open source. It’s one of the ways we’re giving back to the community.”

    A Broader Commitment to the BEAM Community

    The BEAM ecosystem continues to grow across languages like Erlang, Elixir and Gleam, driven by a global community of developers, maintainers, educators and advocates. Erlang Solutions is proud to contribute across multiple fronts, including:

    • Sponsoring various conferences, including Code Sync
    • Supporting the Erlang Ecosystem Foundation (EEF) , including participation in working groups focused on security, documentation, interoperability, and tooling
    • Backing inclusion-focused initiatives such as Women in BEAM
    • Sharing learning resources, contributing to open source libraries, and facilitating knowledge exchange through meetups, blogs and webinars

    Our role is to support the ecosystem not only through expertise, but through action, and to help ensure that BEAM-based systems are not only scalable and reliable, but secure.

    To learn more about our free CI/CD security audits or how we support the BEAM community, visit erlang-solutions.com .

    The post Supporting the BEAM Community with Free CI/CD Security Audits appeared first on Erlang Solutions .


      ProcessOne: XMPP: When a 25-Year-Old Protocol Becomes Strategic Again

      news.movim.eu / PlanetJabber • 24 July • 3 minutes

    After twenty-five years, XMPP (Extensible Messaging and Presence Protocol) is still here. Mature, proven, modular, and standardized, it may well be the most solid foundation available today to build the future of messaging.

    And now, XMPP is more relevant than ever: its resurgence is driven by European digital sovereignty efforts, renewed focus on interoperability, and the growing need for long-term, vendor-independent infrastructure.

    Against this backdrop, the recent funding round around XMTP (Extensible Message Transport Protocol) , a newly launched blockchain-based protocol marketed as a universal messaging layer, raises questions. The name clearly evokes XMPP, yet there is no technological or community connection. And while XMPP could easily serve as a transport layer for blockchain-integrated messaging, XMTP chooses to ignore this legacy and start anew .

    So the real question is:
    Why rebuild from scratch when a solid, extensible foundation already exists?

    A Protocol That Never Went Away

    XMPP is an open protocol for real-time messaging, designed from the start for federation and decentralization. Standardized by the IETF (RFC 6120, 6121, 7622…), it has powered mission-critical systems for decades: enterprise communication, mobile apps at scale, online games, IoT control platforms.

    What makes XMPP especially powerful is not just its architectural simplicity, but its modular extensibility . The protocol evolves through an ecosystem of open specifications (XEPs), covering:

    • End-to-end encryption (OMEMO, OTR)
    • Multi-device synchronization (Message Archive Management)
    • Group chat with subscriptions (MUC and MUCSub)
    • PubSub (XEP-0060) for real-time data and events
    • Interoperability bridges (SIP, MQTT, Matrix)
    • And more…

    XMPP has never stopped evolving . Dozens of new extensions are proposed every year. It remains one of the most adaptable foundations for building secure, federated, and future-ready messaging systems.

    XMTP: A New Protocol with a Familiar Name, but a Different Approach

    XMTP is a blockchain-native messaging protocol developed by Ephemera. It aims to connect wallets and dApps, leveraging decentralized infrastructure (libp2p, IPFS-style storage) and cryptographic identities.

    The ambition is clear: to build a censorship-resistant, peer-to-peer messaging layer for Web3, rooted in crypto-native identity and cryptography.

    However, the naming is misleading. In an older interview , XMTP co-founder Matt Galligan said the name is a blend of SMTP and XMPP. It was chosen to evoke familiarity, perhaps even as a tribute. But the result is confusing : XMTP is not an extension, evolution, or even distant cousin of XMPP. There is no shared architecture, no interoperability, no community overlap.

    Why This Matters Right Now

    This naming issue would be minor if it weren’t happening at a critical time for protocol design . Governments, especially in Europe, are actively exploring how to regain control over digital infrastructure. Messaging is central to this effort, especially with upcoming interoperability mandates, data sovereignty requirements, and the need for long-term maintainability.

    XMPP is uniquely well-positioned to meet these needs. It is mature, open, extensible, and governed through transparent standards. It has a community of engineers, operators, and developers actively maintaining and evolving it.

    Instead of inventing closed messaging stacks around new ecosystems, the more pragmatic move would be to build on robust, extensible layers like XMPP :

    • Need to integrate blockchain identities? XMPP can map public keys or wallet identifiers through custom namespaces or JIDs.
    • Need cryptographic message-level guarantees? XMPP already supports message metadata, signatures, and encryption.
    • Need better privacy ? XMPP can be run over privacy-preserving transports like Tor.

    In short: XMPP can serve as a transport layer for Web3 communication without discarding two decades of protocol maturity.

    I understand that the main focus of XMTP is to prevent censorship by your own server, but this is really a situation that can be mitigated efficiently with XMPP. You can, for example, run your own server or develop a fully decentralized approach that you can leverage as needed (e.g. xmpp-overlay ).

    Yes, there is still work to be done. For example, integrating MLS (Messaging Layer Security) into XMPP would provide a strong foundation for interoperable, end-to-end encrypted group messaging. But that only reinforces the point: Why ignore what’s already working and extensible?

    Use What Works

    New ideas are always welcome. Innovation matters. But messaging protocols are infrastructure. Reinventing them lightly is not harmless, especially when it is done without acknowledging existing efforts.

    Instead of multiplying disconnected stacks, we should double down on what works .

    XMPP is here. It works. It evolves. It can be extended, adapted, and integrated, even into blockchain-native systems, without sacrificing openness or interoperability.

    That may be its most valuable trait today: Still standing, while so many overengineered protocols have come and gone.


      Erlang Solutions: What is Remote Patient Monitoring?

      news.movim.eu / PlanetJabber • 14 July • 6 minutes

    Remote Patient Monitoring (RPM) is changing how care is delivered. By tracking health data through connected devices outside traditional settings, it helps clinicians act sooner, reduce readmissions, and focus resources where they’re most needed. With rising NHS pressures and growing demand for digital care, RPM is becoming central to how both public and private providers support long-term conditions, recovery, and hospital-at-home models. This guide explores how RPM works, where it’s gaining ground, and why healthcare leaders are paying attention.

    What is Remote Patient Monitoring?

    RPM refers to systems that collect patient data remotely using at-home or mobile devices, which clinicians then review. These systems can work in real time or at scheduled intervals and are often integrated with a patient’s electronic medical record (eMR) or practice management system (PAS). The goal is to monitor patients without needing in-person visits, while still keeping clinical oversight.

    Devices Commonly Used in RPM

    The success of any RPM programme depends on the devices that power it. These tools collect, track, and transmit key health data- either in real time or at regular intervals. Whether issued by clinicians or connected through a patient’s tech, they underpin the delivery of safe, responsive remote care.

    These devices support the management of a wide range of conditions, including diabetes, heart disease, COPD, asthma, sleep disorders, high-risk pregnancies, and post-operative recovery.

    | Device Type | Primary Function |
    |---|---|
    | Blood pressure monitors | Measure systolic/diastolic pressure for hypertension monitoring |
    | Glucometers | Track blood glucose levels for diabetes management |
    | Pulse oximeters | Monitor oxygen saturation (SpO2) and heart rate |
    | ECG monitors | Detect heart rhythm abnormalities such as arrhythmias |
    | Smart inhalers | Track usage and technique for asthma or COPD |
    | Wearable sensors | Monitor movement, sleep, temperature and heart rate |
    | Smart scales | Measure weight trends, often linked to fluid retention or post-op care |
    | Sleep apnoea monitors | Detect interrupted breathing patterns during sleep |
    | Maternity tracking devices | Monitor fetal heart rate, maternal blood pressure, or contractions |

    These tools can either be prescribed by clinicians or integrated with consumer health tech like smartphones or smartwatches.

    For example, a cardiologist may use a mobile ECG app paired with a sensor to track arrhythmias from home.

    Safety and Regulation

    The boundary between wellness wearables and clinical devices is still being defined. While some tools simply gather data, others have therapeutic applications, such as managing pain or respiratory issues. This matters for compliance. Devices that influence treatment decisions must meet higher regulatory standards, particularly around safety, accuracy, and data security. Developers and suppliers need to stay aligned with MHRA or equivalent guidance to avoid risk to both patients and business continuity.

    How Remote Patient Monitoring Works

    RPM follows a structured process:

    1. Data collection from connected medical devices
    2. Secure transmission to a clinical platform
    3. Integration with existing systems
    4. Analysis and alerting via algorithms or clinician review
    5. Intervention where thresholds are breached
    6. Feedback to patients through apps or direct communication
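Steps 4 and 5 above can be sketched as a simple threshold-alerting loop. The metric names and clinical ranges below are hypothetical placeholders, not clinical guidance:

```python
# Hypothetical acceptable ranges per metric: (low, high).
THRESHOLDS = {"spo2": (92, 100), "heart_rate": (50, 110)}

def analyse(reading):
    """Flag any measurement outside its configured threshold range."""
    alerts = []
    for metric, value in reading.items():
        low, high = THRESHOLDS.get(metric, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append((metric, value))
    return alerts

# Step 1: a reading collected from a connected device.
reading = {"spo2": 89, "heart_rate": 72}
# Steps 4-5: automated analysis; a breach would trigger an intervention.
alerts = analyse(reading)
print(alerts)  # [('spo2', 89)] - oxygen saturation below the lower threshold
```

In a real deployment, breached thresholds would be routed to a clinician for review (step 5) and feed back to the patient's app (step 6), rather than simply being printed.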

    RPM Adoption is Accelerating

    Globally, the uptake of RPM is increasing. In the US, patient usage rose from 23 million in 2020 to 30 million in 2024 and is forecast to reach over 70 million by the end of 2025 ( HealthArc ). The NHS is also scaling digital pathways. Over 100,000 patients have been supported by virtual wards in England , with NHS England increasing capacity to 50,000 patients per month by winter 2024. RPM is central to this shift.

    Core Technologies in RPM

    These technologies work behind the scenes to capture, transfer, and make sense of patient data, so that clinicians have timely, accurate insights to act on.

    Wearables and sensors
    Track vital signs like heart rate, oxygen levels, and movement patterns.

    Mobile health apps
    Used by patients to report symptoms, manage medications, and receive support.

    Telemedicine platforms
    Enable direct communication between patients and clinicians through chat, phone, or video.

    Analytics engines
    Help identify risk trends or changes in condition using automated flagging systems.

    Why RPM Matters for Healthcare Leaders

    The NHS is under sustained pressure. According to the NHS Confederation , over 7.6 million people are currently on elective care waiting lists, while ambulance delays and A&E overcrowding persist. RPM supports care outside the hospital by freeing up beds, reducing readmissions, and improving patient flow. At a system level, RPM:

    • Cuts avoidable admissions
    • Shortens hospital stays
    • Reduces time-to-intervention
    • Frees up staff capacity
    • Lowers infection risk

    Cost savings are also significant. Some estimates suggest RPM can reduce total healthcare expenditure by 20–40%, particularly for chronic conditions.

    RPM in Action: Key Use Cases

    The real impact of RPM is seen in the way it supports different stages of the care journey. Here are some of the most common and most effective use cases.

    Chronic Disease Management

    RPM allows patients with diabetes, COPD, or hypertension to track metrics like blood pressure, oxygen levels or glucose and share results with care teams. Interventions can be made earlier, reducing the chance of deterioration or escalation.

    Mental Health Monitoring

    Wearables can capture signs of stress or low mood by tracking heart rate variability, sleep patterns, and daily activity. RPM helps clinicians spot early signs of relapse in conditions like anxiety and depression, particularly when patients are less likely to reach out themselves.

    Post-Operative Recovery

    Patients recovering from surgery can be monitored for wound healing, temperature spikes, or pain trends. A 2023 BMC Health Services Research study showed RPM helped reduce six-month mortality rates in patients discharged after heart failure or COPD treatment.

    Elderly Care

    For older adults, RPM supports safety without constant in-person contact. Devices with fall detection, medication reminders, and routine tracking can help carers respond quickly to changes, reducing emergency visits and supporting independent living.

    Clinical Trials

    RPM speeds up trials by reducing the need for travel, offering more continuous data, and improving patient adherence.

    Pandemic and Emergency Response

    During COVID-19, RPM enabled safe monitoring of symptoms like oxygen saturation or fever, supporting triage and resource allocation when systems were overwhelmed.

    Benefits Across the System

    RPM not only benefits patients; it also improves outcomes and operations across every part of the health and care system. Here’s how each group gains from its use.

    | Group | Key Benefits |
    |---|---|
    | Patients | Greater independence, faster recovery, fewer hospital visits |
    | Clinicians | Real-time data visibility, increased capacity, and better focus on complex cases |
    | Carers | Peace of mind, early alerts, and less reliance on manual checks |
    | ICBs & Providers | Lower readmissions, improved resource use, and more coordinated care |

    Where Tech Comes In

    Behind every reliable RPM system is a reliable tech stack. In high-risk, high-volume environments like healthcare, platforms need to be built for stability, security and scalability.

    That’s why some platforms use programming languages such as Erlang and Elixir, trusted across the healthcare sector for their ability to manage high volumes and maintain uptime. These technologies are being adopted in healthcare systems that prioritise performance, security, and scalability.

    When built correctly, RPM infrastructure allows providers to:

    • Maintain continuous monitoring across patient groups
    • Respond quickly to emerging clinical risks
    • Scale services confidently as demand increases
    • Minimise risk from tech failure or data breach

    To conclude

    Patients recover better when they’re in a familiar place, supported by the right tools and professionals. Hospitals function best when their time and space are reserved for those who truly need them. Remote Patient Monitoring is not just a digital upgrade. It’s a strategic shift, towards smarter, more responsive care.

    Ready to explore how RPM could support your digital care strategy? Get in touch .

    The post What is Remote Patient Monitoring? appeared first on Erlang Solutions .