
    • ProcessOne: 🚀 ejabberd 26.01

      news.movim.eu / PlanetJabber • 17:50 • 19 minutes

    🚀 ejabberd 26.01

    This release is the result of three months of development, implementing new features and fixing bugs.


    If you are upgrading from a previous version, there are no mandatory changes in SQL schemas, configuration, API commands or hooks. However, the new mod_invites module uses a new database table; see below.


    Below is a detailed breakdown of the improvements and enhancements:

    Database Serialization

    This feature adds a new way to migrate data between database backends: export the data from one backend to files, then import those files into a different backend (or into the same backend on a different machine).

    A migration is performed by first exporting all data with the export_db command, then changing the ejabberd configuration to switch the modules to the new database backend, and finally importing the previously exported data with the import_db command.

    This mechanism works by calling the new commands from ejabberdctl. For exporting data:

    • ejabberdctl export_db <host name> <path to directory where exported files should be placed>
    • ejabberdctl export_db_abort <host name>
    • ejabberdctl export_db_status <host name>

    and for importing:

    • ejabberdctl import_db <host name> <path to directory with exported data>
    • ejabberdctl import_db_abort <host name>
    • ejabberdctl import_db_status <host name>

    Exporting and importing run in the background after being started from ejabberdctl (commands executed by ejabberdctl have a time limit on how long they can run; with this setup there is no risk of the export or import being aborted by that). The current progress of a background operation can be tracked by calling the corresponding *_db_status command, and an operation in progress can be aborted by executing the *_db_abort command.
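
    Putting it all together, a hypothetical migration could look like this (the host name and directory are illustrative, not taken from the release notes):

    # 1. Export all data of the vhost to a directory
    ejabberdctl export_db example.org /var/backups/ejabberd-export
    ejabberdctl export_db_status example.org   # repeat until it reports completion

    # 2. Edit ejabberd.yml to switch the modules to the new database backend,
    #    then restart ejabberd

    # 3. Import the previously exported data into the new backend
    ejabberdctl import_db example.org /var/backups/ejabberd-export
    ejabberdctl import_db_status example.org   # repeat until it reports completion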

    Roster Invites and Invite-based Account Registration

    Until now, the canonical method to register an account in ejabberd was to let anybody register accounts using In-Band Registration (IBR) (mod_register) or Web Registration (mod_register_web), and then try to limit abuse with access limitations or CAPTCHAs. Often this process got abused, with the result that account registration had to be disabled and replaced by manual registration by administrators.

    The new mod_invites implements support for invite-based account registration: administrators can generate invitation URLs, and send them to the desired users (by email or whatever). Then the user that receives an invitation can visit this invitation URL to register a new account.

    On top of that, mod_invites lets you create Roster Invites: you can send a link to some other person so they can connect to you in a very user-friendly and intuitive way that doesn't require any further interaction. If account creation is allowed, these links also allow setting up an account in case the recipient doesn't have one yet.


    Quick setup:

    1. If you use SQL storage for the module and have disabled the update_sql_schema toplevel option, create the SQL table manually; see below.

    2. If you plan to use the landing page included with mod_invites, install the JavaScript libraries jQuery version 3.7.1 and Bootstrap version 4.6.2. This example configuration assumes they are installed in /usr/share/javascript. The ejabberd container image already includes those libraries. Some quick examples, in case you need some help:

      • Debian and related:

        apt install libjs-jquery libjs-bootstrap4
        
      • Alpine Linux and other operating systems where you can install npm, for example:

        apk -U add --no-cache nodejs npm ca-certificates
        
        npm init -y \
          && npm install --silent jquery@3.7.1 bootstrap@4.6.2
        
        mkdir -p /usr/share/javascript/jquery
        mkdir -p /usr/share/javascript/bootstrap4/{css,js}
        cp node_modules/jquery/dist/jquery.min.js /usr/share/javascript/jquery/
        cp node_modules/bootstrap/dist/css/bootstrap.min.css /usr/share/javascript/bootstrap4/css/
        cp node_modules/bootstrap/dist/js/bootstrap.min.js /usr/share/javascript/bootstrap4/js/
        
      • Generic method using the included script:

        tools/dl_invites_page_deps.sh /usr/share/javascript
        
    3. Configure ejabberd to serve the JavaScript libraries in the path /share; serve mod_invites in any path of your choice; and enable mod_invites with some basic options. Remember to set up mod_register to allow registering accounts using mod_invites. For example:

      listen:
        - port: 5443
          ip: "::"
          module: ejabberd_http
          tls: true
          request_handlers:
            /invites: mod_invites
            /share: mod_http_fileserver
      
      modules:
        mod_http_fileserver:
          docroot:
            /share: /usr/share/javascript
        mod_invites:
          access_create_account: configure
          landing_page: auto
        mod_register:
          allow_modules:
            - mod_invites
      
    4. There are many ways to generate invitations:

      • Login with an admin account, then you can execute Ad-Hoc Commands like "Invite User" and "Create Account"
      • If mod_adhoc_api is enabled, you can execute equivalent API Commands generate_invite and generate_invite_with_username
      • Run those API Commands from the command-line, for example: ejabberdctl generate_invite localhost
    5. All those methods give you an invitation URL that you can send to the desired user; it looks like

      https://localhost:5443/invites/Yrw5nuC1Kpxy9ymbRzmVGzWQ
      
    6. The destination user (or yourself) can visit that invitation URL and follow the instructions to register the account and download a compatible client.

    If the user already has a compatible XMPP client installed, there is no need to install the JavaScript libraries and set up a landing page. In that case, when generating an invitation you will get only the XMPP URI; when the user opens that URI in a web browser, it will automatically open the XMPP client and the corresponding registration window.

    You probably don't want to expose the port directly; in that case, set up Nginx or Apache to act as a reverse proxy and change your landing_page option accordingly, for example to just https://@HOST@/invites/{{ invite.token }}

    Notice that the landing page can be fully translated using the existing ejabberd localization feature.


    SQL table for mod_invites

    There is a new table invite_token in the SQL schemas, used by the new mod_invites. If you want to use this module, there are two methods to update the SQL schema of your existing database:

    If using MySQL or PostgreSQL, you can enable the option update_sql_schema and ejabberd will take care of updating the SQL schema when needed: add to your ejabberd configuration file the line update_sql_schema: true
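
    That is, at the top level of your ejabberd.yml:

    update_sql_schema: true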

    Notice that support for MSSQL in mod_invites has not yet been implemented or tested.

    If you are using another database, or prefer to update the SQL schema manually:

    • MySQL singlehost schema:

      CREATE TABLE invite_token (
          token text NOT NULL,
          username text NOT NULL,
          invitee text NOT NULL DEFAULT (''),
          created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          expires timestamp NOT NULL,
          type character(1) NOT NULL,
          account_name text NOT NULL,
          PRIMARY KEY (token(191))
      ) ENGINE=InnoDB CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
      
      CREATE INDEX i_invite_token_username USING BTREE ON invite_token(username(191));
      
    • MySQL multihost schema:

      CREATE TABLE invite_token (
          token text NOT NULL,
          username text NOT NULL,
          server_host varchar(191) NOT NULL,
          invitee text NOT NULL DEFAULT (''),
          created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          expires timestamp NOT NULL,
          type character(1) NOT NULL,
          account_name text NOT NULL,
          PRIMARY KEY (token(191))
      ) ENGINE=InnoDB CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
      
      CREATE INDEX i_invite_token_username USING BTREE ON invite_token(username(191));
      
    • PostgreSQL singlehost schema:

      CREATE TABLE invite_token (
          token text NOT NULL,
          username text NOT NULL,
          invitee text NOT NULL DEFAULT '',
          created_at timestamp NOT NULL DEFAULT now(),
          expires timestamp NOT NULL,
          "type" character(1) NOT NULL,
          account_name text NOT NULL,
          PRIMARY KEY (token)
      );
      CREATE INDEX i_invite_token_username ON invite_token USING btree (username);
      
    • PostgreSQL multihost schema:

      CREATE TABLE invite_token (
          token text NOT NULL,
          username text NOT NULL,
          server_host text NOT NULL,
          invitee text NOT NULL DEFAULT '',
          created_at timestamp NOT NULL DEFAULT now(),
          expires timestamp NOT NULL,
          "type" character(1) NOT NULL,
          account_name text NOT NULL,
          PRIMARY KEY (token)
      );
      
      CREATE INDEX i_invite_token_username_server_host ON invite_token USING btree (username, server_host);
      
    • SQLite singlehost schema:

      CREATE TABLE invite_token (
          token text NOT NULL,
          username text NOT NULL,
          invitee text NOT NULL DEFAULT '',
          created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          expires timestamp NOT NULL,
          type character(1) NOT NULL,
          account_name text NOT NULL,
          PRIMARY KEY (token)
      );
      
      CREATE INDEX i_invite_token_username ON invite_token(username);
      
    • SQLite multihost schema:

      CREATE TABLE invite_token (
          token text NOT NULL,
          username text NOT NULL,
          server_host text NOT NULL,
          invitee text NOT NULL DEFAULT '',
          created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          expires timestamp NOT NULL,
          type character(1) NOT NULL,
          account_name text NOT NULL,
          PRIMARY KEY (token)
      );
      
      CREATE INDEX i_invite_token_username_server_host ON invite_token(username, server_host);
      

    New replaced_connection_timeout toplevel option

    The new replaced_connection_timeout toplevel option makes a new session wait for the termination of the session it replaces.

    This should mitigate problems where the old session's unavailable presence was sometimes delivered after the new session had sent its available presence.
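
    A minimal sketch of enabling it, assuming the option takes a number of seconds like similar ejabberd timeout options (check the option documentation for the exact accepted format):

    replaced_connection_timeout: 5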

    Improved mod_http_fileserver docroot option

    mod_http_fileserver is a module to serve files from the local disk over HTTP, useful if you just want to serve some HTML or binary files that don't need PHP, and don't want to set up or configure a separate full-blown web server.

    Now the docroot option may be a map of paths to serve, allowing this module to serve several paths with different directories for different purposes.

    For example, let's serve some public content from /var/service/www, the shared JavaScript libraries from /usr/share/javascript, and for the base URL let's serve from /var/www :

    listen:
      -
        port: 5280
        module: ejabberd_http
        request_handlers:
          /pub/content: mod_http_fileserver
          /share: mod_http_fileserver
          /: mod_http_fileserver
    modules:
      mod_http_fileserver:
        docroot:
          /pub/content: /var/service/www
          /share: /usr/share/javascript
          /: /var/www
    

    Notice that ejabberd includes many modules that serve web services, which may save you from setting up a full-blown web server; see ejabberd_http .
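
    Coming back to the docroot map above, a quick way to check it from the command line (the hostname, port and file path follow the earlier examples and are assumptions about your setup):

    curl http://localhost:5280/share/jquery/jquery.min.js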

    Supported XEP versions

    ejabberd supports close to 90 XMPP extension protocols (XEPs) , either directly implemented in ejabberd source code, in the libraries that implement many of the internal features, or in other ejabberd modules available in other repositories like ejabberd-contrib .

    Those XEPs are updated regularly to bring major improvements, minor changes, fixing typos, editorial or cosmetic changes... and this requires a regular review of those XEP updates in the software implementations to keep them up to date, or at least to document what exact version of the protocols are implemented.

    In this sense, we have reviewed all the XEPs for which ejabberd and the Erlang xmpp library were not up to date, and have identified which ones are already up to date in their DOAP files. Now the pages at XMPP.org describe more accurately the supported protocol versions, see ejabberd at XMPP.org and erlang-xmpp at XMPP.org .

    Erlang, Elixir and Container

    ejabberd can be compiled with Erlang/OTP from 25.0 up to the latest 28.3.1. Regarding Elixir support, ejabberd supports from Elixir 1.14.0 up to the latest 1.19.5.

    Right now ejabberd compiles correctly with Erlang/OTP from git development branch and Elixir 1.20.0-RC1, so hopefully compatibility with the upcoming Erlang/OTP 29 and Elixir 1.20 will be easily achievable.

    Binary installers and the ejabberd container now include Erlang/OTP 28.3.1 instead of 27.3, and Elixir 1.19.5 instead of 1.18.4.

    Speaking of the container images , both ejabberd and ecs bring other minor changes: they expose more ports: 5478 UDP (STUN service), 7777 (SOCKS5 file transfer proxy) and 50000-50099 UDP (TURN service).
    The container images in their ejabberd.yml file use new macros PORT_TURN_MIN , PORT_TURN_MAX , and STARTTLS_REQUIRED that you can set up easily without modifying ejabberd.yml , see macros in environment .
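
    For instance, here is a sketch of running the container with those macros set, assuming macros can be passed as environment variables with the EJABBERD_MACRO_ prefix (check "macros in environment" in the documentation for the exact syntax; ports and image tag are illustrative):

    docker run -d --name ejabberd \
      -p 5222:5222 -p 5478:5478/udp -p 50000-50099:50000-50099/udp \
      -e EJABBERD_MACRO_PORT_TURN_MIN=50000 \
      -e EJABBERD_MACRO_PORT_TURN_MAX=50099 \
      ghcr.io/processone/ejabberd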

    Improved ERL_DIST_PORT

    ejabberd uses Erlang Distribution in the ejabberdctl shell script and also for building a cluster of ejabberd nodes.

    That Erlang Distribution has historically used the epmd program to assign listening ports and discover ports of other nodes to connect. The problems of using epmd are that:

    • epmd must be running all the time in order to resolve name queries
    • epmd listens in port 4369
    • the ejabberd node listens in a random port number

    How can you avoid epmd (possible since Erlang/OTP 23.1 and ejabberd 22.10)? Simply set the ERL_DIST_PORT environment variable in the ejabberdctl.cfg file:

    ERL_DIST_PORT=5210
    

    From then on, ejabberdctl passes arguments to Erlang to listen for Erlang distribution connections on TCP port 5210, and to establish Erlang distribution connections to remote port 5210. That way you know exactly, in advance, what port number to open in the firewall or container.
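
    For example, a cluster firewall then only needs that single TCP port open between nodes; a hypothetical iptables rule (the source network is an assumption):

    iptables -A INPUT -p tcp --dport 5210 -s 192.0.2.0/24 -j ACCEPT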

    This ejabberd release introduces some small improvements that facilitate using the ERL_DIST_PORT environment variable:

    • ejabberdctl prints an informative message when it detects that there may be a port number collision, which may happen if you start several ejabberd nodes on the same machine listening on the same ERL_DIST_PORT and same INET_DIST_INTERFACE .

    • When ejabberd is starting, now it prints the address and port number where it listens for erlang distribution:

      [info] ejabberd 26.01 is started in the node ejabberd@localhost in 1.90s
      [info] Elixir 1.19.4 (compiled with Erlang/OTP 28)
      [info] Erlang/OTP 29 [DEVELOPMENT] [erts-16.2] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [jit:ns]
      
      [info] Start accepting TCP connections at 0.0.0.0:5210 for erlang distribution
      [info] Start accepting TCP connections at [::]:5222 for ejabberd_c2s
      [info] Start accepting TCP connections at [::]:5269 for ejabberd_s2s_in
      ...
      
    • A new make relivectl target is introduced, which uses the ejabberdctl script to start ejabberd (unlike make relive ), and starts ejabberd immediately without building a release (unlike make dev ).

    • The ejabberd Documentation site has improved and updated pages about Security , Erlang Distribution , and Clustering .

    New make relivectl

    make relive was introduced in ejabberd 22.05 . It allows starting ejabberd in the path _build/relive/ without installing it, by using rebar3 shell and mix run . It automatically compiles code at start and recompiles changed code at runtime. But it doesn't use the ejabberdctl script and doesn't read ejabberdctl.cfg , which means that, depending on what you are testing, it may not be useful and you had to switch to make dev .

    A new make target is now introduced: make relivectl . It is similar, as it starts ejabberd without requiring installation, using the path _build/relivectl/ to store the configuration, database and log files. The benefit over relive is that relivectl uses ejabberdctl and reads ejabberdctl.cfg . The drawback is that it doesn't recompile code automatically. The benefit over make dev is that relivectl doesn't build an OTP release, so it's faster to start.

    Let's summarize all the make targets related to installation to determine their usage differences:

    make ...              | install                 | install-rel   | prod                         | dev         | relivectl         | relive
    ----------------------|-------------------------|---------------|------------------------------|-------------|-------------------|---------------
    Writes files in path  | /                       | /             | _build/prod/                 | _build/dev/ | _build/relivectl/ | _build/relive/
    Installs              | ✅                      | ✅            | manually uncompress *.tar.gz | -           | -                 | -
    Uninstall with        | uninstall ⚠️ incomplete | uninstall-rel | manual remove                | -           | -                 | -
    Start tool            | ejabberdctl             | ejabberdctl   | ejabberdctl                  | ejabberdctl | ejabberdctl       | rebar3/mix
    Reads ejabberdctl.cfg | ✅                      | ✅            | ✅                           | ✅          | ✅                | -
    Recompiles            | ✅                      | ✅            | ✅                           | ✅          | -                 | ✅
    Starts ejabberd       | -                       | -             | -                            | -           | ✅                | ✅
    Recompiles at runtime | -                       | -             | -                            | -           | -                 | ✅
    Execution time (s.)   | 13                      | 40            | 57                           | 35          | 4                 | 9

    As seen in the table, make install / uninstall seem unnecessary nowadays, because install-rel / uninstall-rel allow installing and uninstalling all the files correctly. Maybe in the future the implementation of make install / uninstall could get replaced with make install-rel / uninstall-rel ...

    In the same sense, make dev now apparently falls into an awkward spot between make prod (which is absolutely indispensable to build a full OTP release) and make relive / relivectl (useful, featureful, and fast for development and testing purposes).

    WebAdmin Menu Links

    Do you remember that ejabberd 25.07 introduced a link in WebAdmin to the local Converse.js page in the left menu when the corresponding mod_conversejs is enabled?

    Technically speaking, until now the WebAdmin menu included a link to the first request_handler with mod_conversejs that the admin configured in ejabberd.yml . That link included the authentication credentials hashed as URI arguments if using HTTPS. Then process/2 extracted those arguments and passed them as autologin options to Converse.

    From now on, mod_conversejs automatically adds a request_handler nested in the WebAdmin subpath. The WebAdmin menu links to that Converse URI; this allows access to the HTTP auth credentials, so there is no need to pass them explicitly. process/2 extracts this HTTP auth and passes autologin options to Converse. Scram password storage is now supported too.

    In practice, what does all that mean? Just enable the mod_conversejs module, and WebAdmin will have a private Converse URL for the admin, linked in the WebAdmin menu, with autologin. No need to set up a public request_handler !

    For example, let's configure conversejs only for admin usage:

    listen:
      -
        port: 5443
        module: ejabberd_http
        tls: true
        request_handlers:
          /admin: ejabberd_web_admin
          /ws: ejabberd_http_ws
    
    modules:
      mod_conversejs:
        conversejs_resources: "/home/conversejs/12.0.0/dist"
    

    Of course, you can setup a public request_handler and tell your users to access the corresponding URL:

    listen:
      -
        port: 5443
        module: ejabberd_http
        tls: true
        request_handlers:
          /admin: ejabberd_web_admin
          /public/web-client: mod_conversejs
          /ws: ejabberd_http_ws
    
    modules:
      mod_conversejs:
        conversejs_resources: "/home/conversejs/12.0.0/dist"
    

    With that configuration, what is the corresponding URL?

    Log in to the WebAdmin and look at the left menu: now it displays links to all the configured HTTP services: ejabberd_captcha , ejabberd_oauth , mod_conversejs , mod_http_fileserver , mod_register_web . Also mod_muc_log_http and mod_webpresence from ejabberd-contrib show links in the WebAdmin left menu. Additionally, the menu shows a lock if the page is served over HTTPS, and an ! if it is not.

    Acknowledgments

    We would like to thank the contributions to the source code, documentation, and translation provided for this release by:

    And also to all the people contributing in the ejabberd chatroom, issue tracker...

    Improvements in ejabberd Business Edition

    Customers of the ejabberd Business Edition , in addition to all those improvements and bugfixes, also get the following changes:

    • New module mod_push_gate that can act as target service for mod_push , and which can deliver pushes using configured mod_gcm or mod_applepush instances.
    • New module mod_push_templates that can be used to produce different push notifications for message classes matching configured data patterns.
    • New database conversion routines targeting p1db backends

    ChangeLog

    This is a more complete list of changes in this ejabberd release:

    Compile and Start

    • Remove dependencies, macros and code for Erlang/OTP older than 25
    • Require Elixir 1.14 or higher, that's the lowest we can test automatically
    • ejabberdctl : Support NetBSD and OpenBSD su ( #4320 )
    • ejabberdctl.template : Show meaningful error when ERL_DIST_PORT is in use
    • ejabberd_app : Print address and port where listens for erlang node connections
    • Makefile.in : Add make relivectl similar to relive but using ejabberdctl

    Databases

    • Add db_serialize support in mnesia modules
    • Add db serialization to mod_muc_sql
    • New database export/import infrastructure
    • Add commands for new database export/import
    • Apply timestamp pass in ?SQL_INSERT queries
    • Update p1_mysql to bring fix for timestamp decoding
    • Extend timestamp type handling in sql macros
    • Revert changes to conversion of pgsql int types

    Installer and Container

    • make-binaries : Bump Erlang/OTP 28.3.1 and Elixir 1.19.5
    • Dockerfile : Bump Erlang/OTP 28.3.1 and Elixir 1.19.5
    • Dockerfile : Expose also port 7777 for SOCKS5
    • Dockerfile : Configure TURN ports and expose 5478 50000-50099
    • Dockerfile : Try to fix error with recent freetds Alpine package
    • Container: Setup new macro STARTTLS_REQUIRED to allow easy disabling

    MUC

    • Add muc_online_rooms_count API command
    • Set enable_hats room option true by default
    • Allow vcard queries even when IQ queries are disabled ( #4489 )
    • Announce stable-id feature from XEP-0045 1.31, supported since long ago
    • Fix preload_rooms in case of SQL database ( #4476 )
    • Run new hooks: registering_nickmuc and registered_nickmuc ( #4478 )
    • When deleting account, unregister account's nicks in all MUC hosts ( #4478 )
    • mod_muc_log : Crash in terminate/2 when stopping module ( #4486 )
    • mod_muc_occupantid : Keep salt per MUC service, not individual rooms
    • mod_muc_room : Rewrite hats code that gets xdata values
    • mod_muc_room : Handle hats without definition ( #4503 )
    • mod_muc_room : When user has no hats, don't store in hats_users

    WebAdmin

    • ejabberd_http : Run new http_request_handlers_init fold hook
    • ejabberd_http : Add helper get_auto_urls/2 that returns all URLs and TLS
    • ejabberd_web_admin : Add helper functions make_menu_system
    • ejabberd_web_admin : Show menu system only when can view vhosts
    • ejabberd_web_admin : Pass Level in webadmin_menu_system_post and inside hooks
    • mod_conversejs : Improve link to conversejs in WebAdmin ( #4495 )
    • When epmd isn't running, show an explanation in the Clustering WebAdmin page
    • Use improved WebAdmin menu system in more modules
    • When building WebAdmin menu system, {URLPATH} in link text is substituted

    Web Services

    • rest : Use separate httpc profile
    • ejabberd_captcha : Use mod_host_meta:get_auto_url/2
    • ejabberd_http : Support repeated module in request_handlers
    • ejabberd_http : Get back handling when BOSH or WS are disabled
    • mod_host_meta : Move get_url functions from mod_host_meta to ejabberd_http
    • mod_host_meta : Allow calling get_url/2 for other modules, not only WebSocket
    • mod_host_meta : Cosmetic rename Module to Handler
    • mod_http_upload : New content_type option similar to mod_http_fileserver ( #4488 )
    • mod_http_upload : Pass ServerHost, not Host which may be "upload.HOST"
    • mod_http_upload : Amend the fix for #4450 to support IDNA correctly ( #3519 )
    • mod_http_fileserver : Support map of paths in docroot option
    • mod_conversejs : Add new Conversejs Paths and ContentTypes ( #4511 )
    • mod_conversejs : Use ContentType functions from mod_http_fileserver ( #4511 )
    • Use /websocket URL in default configuration like mod_conversejs , it's more meaningful

    Core and Modules

    • Add replaced_connection_timeout toplevel option
    • Fix nasty SSL warnings ( #4475 )
    • ejabberd_commands : Show meaningful error message when there is a problem executing a command ( #4506 )
    • ejabberd_logger : Append "color clean" only in console template, not file
    • ejabberd_oauth : Log error if oauth_list_tokens executed with unsupported DB ( #4506 )
    • misc : Get back functions and mark them as deprecated
    • mod_adhoc_api : Show nice command name, as WebAdmin already does
    • mod_pubsub : Deliver pubsub notifications to remote servers for nodes with presence based delivery
    • mod_scram_update : Don't hard-code iteration count
    • Bump many XEPs versions that are already supported
    • Improve documentation of install_contrib_modules ( #4487 )

    Full Changelog

    https://github.com/processone/ejabberd/compare/25.10...26.01

    ejabberd 26.01 download & feedback

    As usual, the release is tagged in the Git source code repository on GitHub .

    The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity .

    For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags .

    The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs . The alternative ejabberd container image is available in ghcr.io/processone/ejabberd .

    If you consider that you've found a bug, please search or file a bug report on GitHub Issues .

    • Ignite Realtime Blog: IgniteRealtime Heads to Brussels: XSF Summit & FOSDEM 2026

      news.movim.eu / PlanetJabber • 12:05 • 1 minute

    I am excited to share that members of the IgniteRealtime community will be heading to Brussels, Belgium, from January 29th to February 1st to take part in two important events for the open-source and real-time communications ecosystem: the annual XSF Summit and FOSDEM.

    XSF Summit: Connecting the XMPP community

    The XSF Summit , organized by the XMPP Standards Foundation, brings together developers, maintainers, and contributors from across the XMPP ecosystem. IgniteRealtime community members will be there to collaborate with peers, discuss the current state and future direction of XMPP, and share insights from ongoing IgniteRealtime projects.

    These face-to-face discussions are invaluable for aligning on standards, exchanging implementation experience, and strengthening the relationships that keep open, decentralized communication thriving.

    FOSDEM: Celebrating open source at scale

    Following the XSF Summit, our community members will also attend FOSDEM (Free and Open source Software Developers’ European Meeting), one of the largest and most influential open-source conferences in the world.

    FOSDEM is a unique opportunity to:

    • Learn about the latest developments across a wide range of open-source technologies
    • Meet contributors and users from diverse projects and communities
    • Represent IgniteRealtime and real-time communication technologies within the broader open-source ecosystem

    You’ll likely find us at the XMPP & Realtime Lounge (building AW, level 1).

    Meet us in Brussels

    If you’re attending the XSF Summit, FOSDEM, or will be in Brussels during this time, we’d love to connect. These events are a great chance to meet fellow IgniteRealtime users and contributors, exchange ideas, and talk about where we’re headed next.

    We look forward to productive discussions, new connections, and bringing fresh energy and ideas back to the IgniteRealtime community. See you in Brussels! 🇧🇪

    For other release announcements and news follow us on Mastodon or X


    • Erlang Solutions: MongooseIM 6.5: Open for Integration

      news.movim.eu / PlanetJabber • Yesterday - 12:04 • 6 minutes

    MongooseIM 6.5: Open for Integration is now available. This release focuses on easier integration with your applications while continuing to deliver a scalable and reliable XMPP-based messaging server.

    For such integration, you can use the Admin or User GraphQL API. Another example is the interaction of MongooseIM with a database. A relational DB such as PostgreSQL, MySQL or CockroachDB can be used to store user accounts, message archives, multi-user chat rooms, publish-subscribe nodes, cluster nodes and much more. Other supported databases and storage services include Cassandra or Elasticsearch for the message archive, Redis for user sessions and LDAP for authentication.

    You can see the complete list in our documentation.
    There is another important way to let MongooseIM interact with your distributed system: event pushing . The core of this functionality is the event pusher module, which can push selected events to services like Amazon SNS , RabbitMQ or MongoosePush . In the most recent release 6.5.0, we have improved the event pushing mechanism, focusing on better integration with RabbitMQ.

    Event pushing in MongooseIM 6.5.0

    The diagram below explains the event pushing architecture in the latest version of MongooseIM:

    (Diagram: event pushing architecture in MongooseIM 6.5)


    Events utilize the concept of hooks and handlers . Currently, events are triggered for a few selected hooks, e.g. when a user logs in/out or sends/receives a message. There are almost 140 different hooks defined in MongooseIM though, so there is much room for adding custom events. A module called mod_event_pusher_hook_translator handles the selected hooks, creating events and passing them to the main event pusher API function, mod_event_pusher:push_event/2 .

    Then, a hook called push_event is triggered, leveraging the hooks-and-handlers mechanism for the second time. This hook is handled by configurable service-specific backend modules, such as:

    • sns – for sending notifications to Amazon SNS,
    • rabbit – for sending notifications to RabbitMQ or CloudAMQP,
    • http – for sending notifications to a custom service,
    • push – for sending push notifications to mobile devices with the help of MongoosePush.

    These modules act as interfaces to the corresponding external services. A key feature of this design is its extensibility. For example, one could easily add a kafka backend module that would deliver the events to Apache Kafka, as shown in the diagram above.

    Push notifications

    Let’s focus on the push module as it is a bit more complex. In general, it uses hooks and handlers once again – the push_notifications hook in particular. In order to actually deliver push notifications to a mobile device, a module called mod_push_service_mongoosepush handles that hook by sending the notifications to a separate service called MongoosePush . Finally, MongoosePush will dispatch the notification request and send the notifications to configured services such as APNs or Google FCM .


    Each stage of this process is configurable and extensible. Firstly, we have assumed that a virtual pubsub host is used, but there is also an option to use a separate pubsub node for push notifications as described in XEP-0357: Push Notifications , adding another step before running the push_notifications hook. Speaking of that hook, you can implement your own module handling it by calling your own service instead of MongoosePush. Additionally, if you need to extend the notification logic with additional filtering and processing, you can implement a plugin module for mod_event_pusher_push .

    Pushing events to RabbitMQ

    As the RabbitMQ backend module was improved and made production-ready in version 6.5.0, let’s use it as an example of how you can easily configure MongooseIM to push events to RabbitMQ. The simplest way to demonstrate this is with Docker containers. Firstly, we need a container for RabbitMQ:

    docker network create example
    docker run --rm -d --network example --name rabbit rabbitmq:management
    

    The management tag is used because we will need the rabbitmqadmin command. The default port 5672 will be used by MongooseIM. The connection is unencrypted for simplicity, but it is recommended to use TLS in a production environment.


    As the next step, let’s obtain the default mongooseim.toml configuration file from the MongooseIM 6.5.0 image:

    docker create --name mongooseim erlangsolutions/mongooseim:6.5.0
    docker cp mongooseim:/usr/lib/mongooseim/etc/mongooseim.toml .
    docker rm mongooseim
    

    This default configuration file uses the Mnesia internal database. Although we recommend using CETS and a relational database instead of Mnesia, in this example we will try to keep it as simple as possible – so you only need to add the following to mongooseim.toml :

    [outgoing_pools.rabbit.event_pusher]
      connection.host = "rabbit"
    
    [modules.mod_event_pusher.rabbit]
      chat_msg_exchange = {}
    
    

    The first section configures a pool of connections to RabbitMQ with the default settings , while the second one enables the event pusher module with the rabbit backend, and includes chat_msg_exchange in order to enable the exchange for one-to-one chat messages (called chat_msg by default – see all options ). If you are wondering where to add these lines to mongooseim.toml – you can put them anywhere, but it is recommended to group them by section (e.g. outgoing_pools, modules etc.) for readability.

    Having the configuration file ready, we can start the MongooseIM container:

    docker run --rm -d --network example -e JOIN_CLUSTER=false \
      -v ./mongooseim.toml:/usr/lib/mongooseim/etc/mongooseim.toml \
      --name mongooseim erlangsolutions/mongooseim:6.5.0
    

    After it starts, the chat_msg exchange should be automatically created in RabbitMQ. In order to see its messages, we need to bind a queue to it.

    docker exec -it rabbit rabbitmqadmin declare queue --name recv
    docker exec -it rabbit rabbitmqadmin declare binding --source chat_msg --destination recv --destination-type queue --routing-key "*.chat_msg_recv"
    
    

    The routing key *.chat_msg_recv means that only the events for received messages will be routed – in this example we skip all events for sent messages. Now let’s create two user accounts in MongooseIM:



    docker exec -it mongooseim mongooseimctl account registerUser --username alice --domain localhost --password secret
    docker exec -it mongooseim mongooseimctl account registerUser --username bob --domain localhost --password secret
    

    You should get the “User … successfully registered” message for each of them. Next, send the message from alice to bob – we could do that with an XMPP client , but the simplest way here is to use the admin CLI:

    docker exec -it mongooseim mongooseimctl stanza sendMessage --from alice@localhost --to bob@localhost --body hello
    

    Now we can check if the message event was sent to our RabbitMQ queue:


    docker exec -it rabbit rabbitmqadmin get messages --queue recv
    

    The payload should be:

    {"to_user_id": "bob@localhost", "from_user_id": "alice@localhost", "message": "hello"}
    

    This simple example shows how easy it is to extend the functionality of MongooseIM beyond XMPP – as soon as you have the events in a message queue like RabbitMQ, you can perform various actions on them such as calculating statistics for billing purposes, performing analytics or storing them in an external database separate from MongooseIM.

    Summary

    The most important improvement in MongooseIM 6.5.0 is the production-ready integration with RabbitMQ, allowing external services to process the events from the server. It is worth noting that the mechanism is highly extensible – you can craft such extensions yourself, but of course don’t hesitate to contact us if you are considering using it in your business – we will be happy to help you install, configure, maintain, and – if necessary – customise it to fit your particular needs. For more information about the latest improvements, see the release notes .


    • Erlang Solutions: The Always-On Economy: Fintech as Critical Infrastructure

      news.movim.eu / PlanetJabber • 13 January • 5 minutes

    We are living in an economy that rarely sleeps. Payments clear late at night. Payroll runs in the background. Businesses expect every digital touchpoint to work when they need it.

    Most people do not think about this shift. They assume the systems behind it will hold.

    That assumption carries weight.

    Fintechs, including small and mid-sized ones, now sit inside the basic infrastructure of how money moves. Their uptime affects cash flow. Their stability affects trust. When something breaks, real businesses feel it immediately.

    Expectations changed faster than most systems did. This follows on from our earlier piece, From Prototype to Production , which explored how early technical shortcuts surface as systems scale. Here, we look at what happens next, when those systems become part of the infrastructure businesses rely on every day.

    When “tech downtime” becomes infrastructure failure

    Outages no longer feel contained. A single failure can affect services that millions of people and businesses depend on.

    Large providers experience this as much as small ones. When a shared platform falters, the impact spreads quickly. More than 60 percent of outages now come from third-party providers rather than internal systems, highlighting how tightly connected the ecosystem has become.

    The true cost is trust

    For fintechs, the impact is immediate because money is involved.

    A payment delay blocks cash flow. A failed identity check stops onboarding. A stalled platform damages credibility.

    For an SME, this can play out over the course of a single day. Payroll does not process in the morning. Supplier payments stall in the afternoon. Customer support queues fill up while teams wait for systems to recover. Even short interruptions create knock-on effects that last far longer than the outage itself.

    And the numbers reflect that risk:

    • £25,000 is the average cost of downtime for SMEs.
    • 40% of customers consider switching providers after a single outage.
    • Fintech revenue losses account for approximately US$37 million of downtime-related costs each year.

    At this level of dependency, downtime stops being a technical issue. It becomes an infrastructure failure.

    Fintechs have become infrastructure whether they intended to or not

    Fintech services have moved beyond convenience. They now underpin everyday economic activity for businesses that depend on constant access to money, credit, and financial data.

    This shift shows up in uptime expectations. Platforms that handle financial activity are measured against standards once reserved for mission-critical systems. Even brief disruption can have outsized consequences when services are expected to remain available throughout the day.

    Customers do not adjust expectations based on company size or stage. If money flows through a service, users expect it to be available around the clock. When it is not, the failure feels systemic rather than technical. What breaks is not just functionality, but confidence.

    That is the environment fintech leaders are operating in now.

    Where resilience typically breaks down

    Most fintech systems do not fail because the idea was weak. They fail because early decisions prioritised speed over durability.

    Teams optimise for launch. They prove demand. They ship quickly. That works early on, but systems designed for experimentation often struggle once demand becomes constant rather than occasional.

    Shortcuts that felt harmless early on start to surface under pressure. Shared components become bottlenecks. Manual processes turn into operational risk. Integrations that worked at low volume become fragile at scale. Recent analysis shows average API uptime has fallen year over year, adding more than 18 hours of downtime annually for systems dependent on third-party APIs.

    Common pressure points include:

    • Shared components that act as single points of failure
    • Manual operational work that cannot keep up with growth
    • Third-party dependencies with limited visibility or control
    • Architecture built for bursts of usage instead of continuous demand

    These are not accidental outcomes. They are the result of trade-offs made under pressure. Funding milestones, launch timelines, and growth targets shape architecture as much as technical skill does.

    When outages happen, they rarely trace back to a single bug. They trace back to earlier choices about what mattered and what could wait.

    From product thinking to infrastructure grade systems

    Product thinking is about features and speed. Infrastructure thinking is about continuity.

    Infrastructure-grade systems assume failure will happen. They are built to contain it, recover quickly, and keep the wider platform running. The goal is not perfection. The goal is staying available.


    Continuous availability is now expected in financial services. Systems are updated and maintained without noticeable downtime because users do not tolerate interruptions when money is involved.

    This approach does not slow teams down. It reduces risk. Deployments feel routine instead of stressful. Engineering effort shifts away from incident response and toward steady improvement.

    Over time, this changes how organisations operate. Teams plan differently. Roadmaps become more realistic. Reliability becomes part of delivery rather than a separate concern.

    Elixir and the always-on economy

    The Elixir programming language is designed for systems that are expected to stay available. It runs on the BEAM virtual machine, which was built in environments where downtime carried real consequences.

    That background shows up in how Elixir applications are structured. Systems are composed of many small, isolated processes rather than large, tightly coupled components. When something fails, it fails locally. Recovery is expected. The wider system continues to operate.

    Elixir in Fintech

    This matters in fintech, where failure is inevitable and interruptions are costly. External services misbehave. Load changes without warning. Elixir applications are built to absorb those conditions and recover quickly without cascading outages.

    Teams working in Elixir tend to spend less time managing fragile behaviour and more time improving core functionality. Systems evolve instead of being replaced. Reliability becomes part of the foundation rather than a promise teams struggle to maintain.

    For fintechs operating in an always-on economy, that approach aligns with the expectations already placed on them.

    Reliability as a competitive advantage

    Reliable systems can completely change how a fintech operates day to day.

    For growing fintechs, uptime supports trust and regulatory confidence. Customers stay because the platform behaves predictably. Growth becomes steadier because teams are not constantly reacting to incidents. Downtime costs make this real, especially for small businesses that lose revenue when systems are unavailable.

    For larger providers, reliability reduces operational strain. Fewer incidents mean fewer emergency fixes and fewer difficult conversations with partners and regulators. Teams spend more time improving core services and less time managing fallout.

    Reliability also shapes perception. Platforms that stay up become easier to trust with deeper integrations and higher volumes. Over time, that trust compounds and turns stability into a real advantage, even if it is rarely visible from the outside.

    To conclude

    The always-on economy creates real opportunity for fintechs, but it also raises expectations that many platforms were not originally built to meet.

    The question is whether your system has the resilience to operate as infrastructure day after day. If you are a fintech and want to build with reliability in mind, get in touch .

    Designing for resilience early makes it far easier to scale without introducing fragility later on.



    • ProcessOne: XMPP integration with n8n

      news.movim.eu / PlanetJabber • 12 January • 7 minutes

    n8n is a platform for workflow automation. Let's see how to interface n8n workflows with an XMPP network. As an example of how to use XMPP to receive data from n8n, let's build a simple workflow that publishes some information about each commit pushed to a GitHub repository into a MUC room.

    Simple workflow to display Git commits in a MUC room

    You can browse and download this workflow here: https://share-n8n.net/shared/l6uiZZ54bPC3


    GitHub Trigger

    To receive the information about each Git push, we use the GitHub Trigger node . You need to set up some GitHub credentials with access to the repository you want to monitor. Then, set the Repository Owner and Repository Name accordingly. The Events you want to receive are of type Push . At this point, you can execute the workflow (just containing this unique node), do a couple of commits on your repository, and push them to test that the push is received by the GitHub Trigger node. When it works, pin the output data of the node so you don't have to push more commits while you test the rest of the workflow. Don't forget to unpin it once you want to use the workflow for real.

    Information Formatting

    As you can see in the output data of the GitHub Trigger node, the list of commits in the push is in body.commits . We use a Split Out node on this field to process each one of these commits.

    We then use an Edit Fields (Set) node to format the information we will display about each commit in the final message posted in the MUC room, using a Manual Mapping . Just check the input data coming from the previous Split Out node to choose what data you want to send to the MUC room. We name the output field Commit (type: String ). Here is an example of such a template:

    "{{ $json.message.split(&apos\n&apos)[0] }}" by {{ $json.author.username }} ({{ $json.author.name }} <{{ $json.author.email }}>), {{ $json.modified.length }} modified file{{ $json.modified.length >1 ? "s" : "" }}
    

    The split at the beginning of this example is used to only keep the first line of the commit message.

    After that, an Aggregate node will combine the commits info (one per item) into a single item, with this setting:

    As a result, we'll have one item containing the list of commits in a Commit field.

    ejabberd credentials

    We can now send the formatted data to the XMPP server using an HTTP Request node . For that, we use the send_message command from the ejabberd API.

    We use an OAuth token to securely call the ejabberd API using Bearer Authentication .

    Fluux

    If you are using a Fluux instance , you can get an OAuth token from your Fluux console, in the API Tokens section.

    Self-hosted ejabberd

    To create the credentials on your own ejabberd server, you'll need to use the oauth_issue_token command to get an OAuth token. Be sure you have an access rule that allows you to create such a token.

    For the scope parameter of oauth_issue_token command, use the one defined in your api_permissions acl ( ejabberd:admin in the example below) and be sure the user you issue the token for has the right to call the send_message command.

    Extract of an example ejabberd.yml file that allows calling send_message using OAuth:

    hosts:
      - "process-one.net"
    
    listen:
      -
        port: 5281
        ip: "::"
        tls: true
        module: ejabberd_http
        request_handlers:
          "/api": mod_http_api
    
    acl:
      admin:
        user:
          - "admin@process-one.net"
    
    access_rules:
      muc:
        - allow: all
      muc_create:
        - allow: local
      muc_admin:
        - allow: admin
      oauth_access:
        - allow: admin
    
    api_permissions:
      "console commands":
        from:
          - ejabberd_ctl
        who: all
        what: "*"
      "admin access":
        who:
          - oauth:
            - scope: "ejabberd:admin"
            - access:
              - allow:
                - acl: admin
        what:
          - send_message
    
    modules:
      mod_http_api: {}
      mod_muc:
        access: muc
        access_create: muc_create
        access_persistent: muc_create
        access_admin: muc_admin
    

    Example of a command to create a token with the above config:

    ejabberdctl oauth_issue_token admin@process-one.net 360000 ejabberd:admin
    

    Sending the XMPP message using send_message

    Now you can create your credentials in the HTTP Request node:

    For the other fields in the HTTP Request node:

    • Method : POST
    • URL : Full URL of the send_message command as defined by your ejabberd_api listener, for example https://process-one.net:5281/api/send_message . If you are using Fluux , the URL should be https://NAME.m.in-app.io:5281/api/send_message where NAME is your Fluux instance name.
    • Send Body : enabled
    • Body Content Type : JSON
    • Specify Body : Using Fields Below
    • Body Parameters : set the parameters for the send_message command :
      • type : groupchat
      • from : bare JID of the bot that will post the message in the MUC room
      • to : bare JID of the MUC room
      • subject : empty (not relevant for a MUC message)
      • body : template to format the commit information. For example, to display the name of the repository followed by the list of commits, one per line:
    {{ $('Github Push').item.json.body.repository.full_name }} repository:
    {{ $json.Commit.join("\n") }}
    

    The n8n workflow should now be working: it will publish all commits pushed to the repository defined in the first node into the MUC room defined in the last node (don't forget to unpin everything).

    Same workflow with a custom stanza

    You can browse and download this workflow here: https://share-n8n.net/shared/nxmUMbdNM0gH


    This is a variant of the above workflow that illustrates how to create and send a custom stanza instead of the simple message that the send_message API permits. This is needed, for example, if you want to add an element in a custom namespace to your message stanza.

    The minimal stanza for a MUC message is:

    <message type="groupchat">
      <body>
        List of commits
      </body>
    </message>
    

    We don't need to set the from , to or id attributes; they will be handled by the API call done in the last node. Let's say we want to add a sub-element containing the timestamp of the moment the message was sent by n8n. The stanza we want to create looks like this:

    <message type="groupchat">
      <body>
        List of commits
      </body>
      <n8n xmlns="p1:custom:timestamp">2025-12-17T11:16:13.071-05:00</n8n>
    </message>
    

    Create a custom stanza

    We want to create that stanza just before the last one, between the Aggregate node and the HTTP Request node. To ensure that all strings from the git commit are correctly encoded for XML, we use the JSON to XML node . This means we have to construct the stanza in JSON first.

    This is done in the Make Stanza in JSON node of type Edit Fields (Set) . We use the Manual Mapping to ensure all special characters from the git commit are correctly escaped.

    • We set a $ field (which is the default Attribute Key ) to set the type attribute on the root element. The name of the root element, message , will be set in the next node, JSON to XML .
    • We create a body field of type String that will contain the list of commits, one per line. We can use the same template used in the last node of the previous workflow:
    {{ $('Github Push').item.json.body.repository.full_name }} repository:
    {{ $json.Commit.join("\n") }}
    
    • We add an n8n field for the custom tag, with type Object , to be able to set an xmlns attribute on it:
    {
     "_": "{{ $now }}",
     "$": {"xmlns": "p1:custom:timestamp"}
    }
    

    Convert the stanza in XML

    The stanza is converted to XML in the Convert stanza in XML node, of type XML :

    • The Mode is JSON to XML
    • Property Name is the name of the field in the output data; let's name it stanza .
    • We have to enable the Headless option to prevent the XML header from being added before the message.
    • The Root Name is set to message (the root element of the stanza) as the JSON received from the previous node is just the content of the XML document.

    Sending the XMPP message using send_stanza

    The configuration of the HTTP Request node is the same as for the previous workflow, except that the URL must call send_stanza instead of send_message . The body parameter is replaced by the stanza one, in which we just have to set the stanza field from the previous node:

    {{ $json.stanza }}
    

    Be sure to adapt your ejabberd configuration accordingly, specifically to allow send_stanza to be called.
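
    For instance, following the api_permissions example shown earlier, extend the what list so the token may also call send_stanza :

    api_permissions:
      "admin access":
        who:
          - oauth:
            - scope: "ejabberd:admin"
            - access:
              - allow:
                - acl: admin
        what:
          - send_message
          - send_stanza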

    • Mathieu Pasquet: XMPP and metadata

      news.movim.eu / PlanetJabber • 6 January • 14 minutes

    I had the pleasure of giving a talk on "XMPP and Metadata" during the last Chaos Communication Congress, in the Critical Decentralization Cluster area. It was my first public presentation in a very long while (and in English), so the talk went okay-ish at best. The end of the year was also hectic, and I did not manage to prepare or rehearse as much as I would have liked to.

    This blog post will be a longer, more complete version of the talk. You can nonetheless find the talk slides on the CDC pretalx . Thanks a lot to the people who proofread the blog post to fix stuff or suggest additional content.

    The talk was about metadata, but also about data retention and what the server sees more generally.

    Metadata

    First I want to define the basics: metadata is data about the data. In a messaging system, data is usually considered to be the message contents (what is being exchanged), and metadata is about the envelope:

    • who (sender and receiver)
    • when (timestamp)
    • hints about the what (e.g. payload size)

    Outside of messages, you can additionally have data/metadata about the users themselves, like online status or location, which can be either sent in the clear to the server, or deduced from user activity.

    Obvious message workflow

    This might be too obvious for most people, but for clarity’s sake, I want to assert that to send a message to another entity, you need:

    • a sender
    • a message
    • a receiver

    This is not technical, this is baked into the concept of sending a message. Those elements will always be present somewhere in the workflow. Assuming a working encryption system, the message data itself will not be considered.

    There are, however, some technical tricks that can hide a lot of things from the infrastructure layer.

    XMPP

    I cannot really make this an introduction to XMPP but to summarize, XMPP is an extensible federated protocol for messaging and presence. It uses XML for the most part, but nobody should care (except trolls, I guess). It started in 1999 as Jabber, and grew to be an IETF standard under the name XMPP after Jabber got bought by Cisco (we can still use the Jabber name in many ways).

    The protocol started server-heavy with light clients - and in fact, you will read as much in the "XMPP, the definitive guide" book -, but the trend reversed in the last decade due to the rise of mobile clients, which can be updated very often, among other circumstances.

    There are clients and servers, and it is therefore a protocol made of client-to-server interactions and server-to-server interactions, each with their own privacy implications.

    The key elements to remember in order to assess threats in the XMPP network fabric would be:

    • Your server is the only entity sending data to other servers. Every single bit of XML your XMPP client sends goes through your server.
    • Other servers will only see your interactions with their own users.

    Those two points are true in most non-P2P models. Centralized models can be thought of as a specialization of this model, with a single server.

    That is why it is essential to rely on a server you can trust, either operated by people you trust, or at least by operators with some accountability in place, for example the services listed on providers.xmpp.net.

    Threats

    I can roughly point out four types of "passive" threats on metadata for XMPP:

    • A server compromise (present)
      • Correlation of data streams in real time
    • A server compromise (future)
      • Exploitation of the static data available on the server
    • An attacker present on the server network
      • Can see what the server does (both with clients and servers)
    • An attacker on the client (your) network
      • Can see what your client does

    Network metadata threats

    Client network attacker

    Only a bit of metadata is visible on the client network:

    • Timing and size of every network call
    • HTTP(s) interactions that cross the XMPP boundary, like file upload
    • Jingle interactions
    • Other services (STUN, TURN services, etc)

    On XMPP, one client - with only one account - will only ever connect to the XMPP server the account is living on. Some services might go through HTTP, which then crosses the boundary and creates network calls outside of the XMPP TLS stream, like HTTP Upload (XEP-0363). Likewise, applications like audio/video calls or direct file transfers will also cross that boundary.

    STUN and TURN servers are used to bypass NAT restrictions, which are everywhere nowadays. STUN can be described as a way for clients to find out their reachable address from behind the NAT and then use it to establish a peer-to-peer connection; TURN is used in more restricted networks, where a relay is necessary to be able to communicate. In the TURN case, this creates yet another server overseeing the (encrypted) communication, and in most modern XMPP setups the server operator also operates a TURN server; a network attacker will also see the encrypted stream from the client to the TURN server. In the STUN case, clients will do a short network call to find out their external address, then establish a direct session with the other user, making that metadata available to an attacker as well.

    A diagram showing the client network interactions in XMPP. All data goes through a TLS stream to the server, except HTTP interactions that are handled separately.

    I do not know of any work related to fingerprinting encrypted client-to-server XMPP traffic, but I would guess that it is fairly easy if no precautions are taken.

    Some examples:

    • Ping (XEP-0199) as connectivity check: periodic request/answer to your server, with a fixed size (see the sketch after this list)
    • Room join ( XEP-0045 ): send disco, receive disco info, send join request, receive plenty of presences in a short period of time
    • Sending messages: composing chat state (somewhat fixed size), sometimes followed by other chat states (e.g. paused), followed by a longer payload, the message, whose size can often be predicted from the time elapsed since the chat state.
    • etc…
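
    For illustration, a XEP-0199 ping is a tiny request/response pair with an essentially fixed size on the wire (addresses and the id are made up for the sketch):

    <iq from='someone@example.net/client' to='example.net' id='ping1' type='get'>
      <ping xmlns='urn:xmpp:ping'/>
    </iq>
    <iq from='example.net' to='someone@example.net/client' id='ping1' type='result'/>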

    This metadata will depend on the XMPP client used and how much it is customized, because while protocol workflows are mostly the same for all clients, some might do different or extra steps in common cases.

    Server network attacker

    A diagram showing the server network interactions in XMPP. An attacker can see both what data comes from/to client, and from/to servers.

    If an attacker can eavesdrop on the server network, it gets access to a lot more info than on the client network:

    • It has a view on both open client and server connections
    • It can then quite easily correlate the destination of stanzas sent by the client (since it will see something go from the client to the server, then immediately something from the server to another server)
    • If the server has lots of active users, correlation becomes harder, but the metadata of single-user servers is a very easy target!


    If there is plenty of activity on the server, it will be much harder to find out what the source of the activity is:

    A simple diagram showing the inside of an XMPP TLS stream, with various types of stanzas from different JIDs going to a specific server.

    All stanzas going to a given destination server share a single stream, which makes it difficult for an observer to find out which payload is going where, provided there is enough activity; luckily, XMPP is a somewhat chatty protocol.

    Server compromise

    On-disk data and metadata

    In the event that a server gets compromised later (or even a backup gets compromised), it is important to know what kind of data and metadata will be visible to the attacker.

    There is a lot of data available on servers with normal XMPP usage, and a few categories stand out.

    User accounts

    Accounts are tied to a specific server; they can in some ways be migrated to another server but the process is currently not fun.

    For a normal user, one of the important troves of metadata is the roster (contact list): it contains all the people you have added to the list, which allows you to put them in groups, as well as letting the server broadcast your online presence when you get online.

    Another important piece of information is your bookmarks list: this allows you to sync the chatrooms you are in between all your clients, but incidentally that information stays in cleartext on the server as well.

    The most identifying piece of information about the user on the server can be the vcard, as it is literally a collection of personal data (including the user’s avatar).

    And the server will usually store some useful information it can know without asking the user: date of last connection, last seen IP address, etc… Some of this data can be very useful to assist manual account recovery!

    Groupchats

    Groupchats in XMPP are hosted on a specific server, which does not have to be (and often isn’t) the user’s server. The room identifiers are in the form room@a-groupchat-server.net , where a-groupchat-server.net gives the information of where the room is hosted.

    The server hosting a persistent room needs to keep, on-disk:

    • A message archive - not mandatory but very nice to have -
    • A members/admins/owners/ban list; only the owner list is mandatory, as long as everyone else is fine with being a "normal" participant
    • Data related to the room, like configuration, access control (which can be tied to membership), topic, etc

    There are two types of rooms:

    • Semi-anonymous rooms, where only room admins (and their servers, as well as the room server) can see the real addresses of the users exchanging messages, creating a pseudonymous situation.
    • Non-anonymous rooms, where everyone sees everyone’s addresses; here all participants and their servers can know who is in the room and who authors messages (see the sketch after this list).
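
    For illustration, in a non-anonymous room the occupant presence broadcast by the room carries the real address of each participant, so every receiving server learns it too (a sketch following XEP-0045, with made-up addresses):

    <presence from='room@a-groupchat-server.net/SomeNick' to='observer@example.net/client'>
      <x xmlns='http://jabber.org/protocol/muc#user'>
        <item affiliation='member' role='participant' jid='somebody@example.org/phone'/>
      </x>
    </presence>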

    In terms of live data, due to how XMPP is designed, the servers of all the active participants will be able to see who is sending messages:

    A diagram showing how a message goes from a client to all the other participants clients, through the user’s server, then the groupchat server, then their servers, and also back to the original sender.

    But in semi-anonymous rooms, they only get a nickname, and not the real address.

    Live stream interception

    As stated earlier, choosing a server you trust is the very first step, and if you do not trust your server (and operators) at all, why are you there?

    If the server itself is compromised, the point-to-point TLS encryption between client and servers, and between servers, becomes very much useless, which means "server network attacker" scenarios can now be matched with 100% precision, and more:

    • Correlation between sender and receiver is exact
    • The user’s XMPP address can be mapped to an IP and port
    • The stanza type is exposed: whether message, presence, or iq stanzas are sent, everything is known
    • Activity patterns can get a lot more detailed
    • E2EE still works to protect data - but can get disrupted -

    Some solutions for XMPP

    Remediation of network metadata leaks

    I do not see magic bullets here, and everything has a cost. The idea would be to add noise that is not easy to filter out when parsing network activity at scale.

    • Inserting random padding in server-to-server and client-to-server communications would help, of course, but network data is not always cheap, and processing XML also has a cost.
    • Another solution would be "garbage" stanzas sent in various manners to other servers, but again there is a cost here, and since very little literature exists on such attacks, it remains to be seen whether there is an upside to this.
    • Another one would be to allow stanzas to be grouped on the wire when possible (e.g. presence updates or groupchat messages)
    • The last one I can see would be to induce random delays in stanza delivery when processing them on a server. This has, of course, a usability cost, but for large servers, adding random delays below 0.3s would probably be enough to make correlation quite hard; ordering must be preserved, of course.

    Remediation of roster issues

    Presence-based messaging has been going downhill for a while thanks to a smartphone-centric population and apps, which means the roster is not as useful as it once was.

    It still provides useful information in my opinion, and is also very useful for abuse prevention, since you do not receive abuse from your contacts - hopefully -.

    The roster could be stored on devices instead of servers, and we could find a way to share it encrypted and cross-signed with the user’s other devices. This does not change the fact that your server will see who you are chatting with or sending presence to.

    The bookmarks could likely be shared in the same way, and removed from the server as well.

    Fixing groupchat metadata

    As mentioned earlier, if we phase out presences in many cases, being passively inside a room would at least prevent being exposed to servers looking at the traffic, as long as nobody requests the participants list.

    I do not see a good way out of this for metadata.

    Fixing many types of metadata issues

    The most deployed end-to-end encryption solution on the network is OMEMO, but not in its latest version, which allows for Stanza Content Encryption. The version most people use only encrypts the message’s body tag, which is the text sent by the user. By extension, uploads are also covered, because they use OMEMO Media sharing, which is really just AES-GCM encryption with a key and IV shared inside the URL sent through the OMEMO session.
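
    Such a link looks roughly like this (a sketch; the host and path are made up, and the URL fragment carries the IV and key, hex-encoded):

    aesgcm://upload.example.net/a1b2c3/photo.jpg#<IV and key, hex-encoded>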

    This means people wanting to use nice features like composing chat states and the like have the choice to either generate this metadata in the clear, or disable the feature, which is not ideal.

    Other interesting solutions

    Serverless messaging

    XEP-0174 describes how to operate XMPP inside a trusted network, which bypasses the need for a server by using mDNS on this network and advertising your address with your local IP.

    I liked it a lot and had fun at the time, but it was from a blissful era when encryption was not seen as the foundation upon which everything should be built. It means that everything transits unencrypted on the network: both metadata and data are unprotected. Data could be encrypted using OTR, but some required bricks for the modern XMPP experience, like the Personal Eventing Protocol, are unavailable, making OMEMO in its current form a no-go.

    Implementations are scarce nowadays, as an incomplete XMPP layer inside a normal client is usually pretty convoluted to maintain, and after a while it was removed from most clients where it existed.

    XTLS

    XTLS describes a way to create a direct and encrypted TLS stream between two entities using jingle, which could be then used to exchange stanzas in a secure manner, without going through the server.

    That means that the metadata threats shift from server-based to client-based, which may or may not be an upside. The layer used for the channel is also quite important: it could be In-Band Bytestreams (which means the data goes through the usual client-server-server-client route), which would then provide additional E2EE cloaking of all data exchanged between clients, while still going through all the entities routing the data.

    Other services/protocols

    Signal

    Signal is a pioneer in encryption systems at scale, and keeps pushing the boundaries of what is possible to do securely. Nonetheless, their messenger is centralized, with systems running on AWS and Azure (as far as I can tell), which makes them very dependent on the US political wasteland as well as the tech landlords’ whims.

    They do a lot of things to ensure the system is as secure as it can be within the constraints they imposed on themselves, and as such, while I trust them for now, their servers certainly have a lot of opportunities to collect and store a ton of metadata, simply because this is a centralized system. Their cryptography work is class-leading, which keeps my data secure (as long as someone does not bust the secure enclave which protects my "recovery code", I guess), but keeping my metadata volatile and secure there is only a leap of faith on my part, as I can have no guarantees.

    A diagram showing every client connecting to a single entity, because Signal is centralized.

    Matrix

    Matrix is a federated protocol which has many of the same flaws as XMPP with regards to metadata.

    One notable difference is that Matrix is more like a distributed database with built-in conflict resolution, which leads to every participant’s server replicating the state (data and metadata) of the rooms they are in. This creates a more difficult situation for metadata than in XMPP, because while XMPP servers can see what goes through them, Matrix servers are required to store this information.

    Matrix also has two different sets of APIs for client-to-server and server-to-server communication, which should allow it to batch messages when appropriate.

    SimpleX

    SimpleX is a protocol with a lot of cryptography baked in, and has interesting properties such as the absence of user accounts and therefore identifiers (which means very little data on the servers can be compromised).

    One of its more interesting properties is that it has 2-stage onion routing baked in, which allows it to sidestep many of the metadata issues that arise from connecting to servers.

    Diagram showing how a message can be encapsulated several times using onion routing, so that only the final recipient knows of the message. (credit: Wikipedia )

    The whitepaper stresses that it is still important to choose your servers well, but that is still less critical than in XMPP since you can easily switch at very little cost.

    (P.S.: calling your protocol "SMP" is not nice if it is not based on the Socialist Millionaire Protocol; I haven’t checked, but skimming did not reveal any mention of it)

    Conclusion

    As Daniel noted on mastodon, there is some low-hanging fruit for improving the metadata issues around XMPP (and some higher-hanging fruit as well). I agree that this is not going to be even a blip on XMPP adoption, but we should nonetheless do what we can to improve the situation, and with it the standard and the ecosystem. That said, I can perfectly understand that, since a lot of the work is purely volunteer-driven and our time and energy are limited, it can appear to be a waste to dedicate them to removing bits of metadata here and there.

    • chevron_right

      ProcessOne: Enabling Fluux.io WebPush on a PWA on iOS

      news.movim.eu / PlanetJabber • 6 January • 1 minute

    Enabling Fluux.io WebPush on a PWA on iOS

    As we previously described here, your Fluux.io service can send push notifications to your XMPP users' browsers when they are offline. This includes Safari on iOS devices. But before enabling it, your users need to add your Progressive Web App (PWA) to their home screen.

    You can test the whole process with our "Test Client" from your fluux.io console. First, sign in to the fluux.io console and go to the WebPush App list. As in the previous blog post, open the test client.


    This page has a link tag:

    <link rel="manifest" href="/manifest.json">

    linking to a WebApp manifest file such as this one: https://fluux.io/manifest.json

    We reproduce the main fields here:

    {
      "name": "FluuxIO",
      "icons": [
        {
          "src": "/logo.png",
          "type": "image/png",
          "sizes": "200x200"
        }
      ],
      "display": "standalone"
    }
    

    This file provides a name, an icon and so on to the PWA. It also sets the display mode to "standalone", which is required to enable push notifications in a Progressive Web App on iOS.


    With such a configuration, the end user has to tap the share button.


    Then the iOS user has to scroll and tap "Add to Home Screen".


    This opens a form prefilled with data from manifest.json.


    Once the registration form is submitted, a new icon will appear on the iOS home screen and act as a new app.


    Launch it, sign in to your fluux console, then go back to the test client and connect using a test user. You can now tap "Enable webpush".


    You will be prompted to allow your device to receive push notifications from your fluux platform (through Safari / Apple's push servers).
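
    Under the hood, "Enable webpush" relies on the standard Web Push API. The sketch below shows what such a button typically does; the /sw.js service worker path and the vapidPublicKey parameter are assumptions for the example, not Fluux specifics:

    async function enableWebPush(vapidPublicKey) {
      // Register the service worker that will receive push events.
      const registration = await navigator.serviceWorker.register('/sw.js');
      // Subscribing triggers the permission prompt mentioned above.
      const subscription = await registration.pushManager.subscribe({
        userVisibleOnly: true, // mandatory: each push must display a notification
        applicationServerKey: vapidPublicKey,
      });
      // The subscription (endpoint and keys) is then sent to the push backend.
      return subscription;
    }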


    As with other push workflows, a notification for new messages will be sent to the XMPP user's device when offline.
