      ProcessOne: ejabberd 24.12

      news.movim.eu / PlanetJabber • 19 December • 10 minutes

    ejabberd 24.12

    Here comes ejabberd 24.12, including a few improvements and bug fixes. This release comes a month and a half after 24.10, with around 60 commits to the core repository, alongside a few updates in dependencies.

    Release Highlights:

    Among them, the evacuate_kindly command is the new tool that gave this release its funny codename. It lets you stop and restart ejabberd without letting users reconnect, so you can perform your maintenance tasks peacefully. So, this is not an emergency exit from ejabberd, but rather testimony that this release is paving the way for a lot of new cool stuff in 2025.

    Other contents:

    If you are upgrading from a previous version, there are no required changes in the SQL schemas, configuration or hooks. There are, however, some changes in the Commands API; see Commands API v3 below.

    Below is a detailed breakdown of the improvements and enhancements:

    XEP-0484: Fast Authentication Streamlining Tokens

    We added support for XEP-0484: Fast Authentication Streamlining Tokens . This allows clients to request time-limited tokens from servers, which can later be used for faster authentication requiring fewer round trips. To enable this feature, add the mod_auth_fast module to the modules section.
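For reference, enabling it can be as simple as the following ejabberd.yml sketch (the module is enabled with default options here; any tuning options are left out):

```yaml
modules:
  mod_auth_fast: {}
```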

    Deprecation schedule for Erlang/OTP older than 25.0

    It is expected that around April 2025, GitHub Actions will remove Ubuntu 20, and it will no longer be possible to automatically run dynamic tests for ejabberd using Erlang/OTP older than 25.0.

    For that reason, the planned schedule is:

    • ejabberd 24.12

      • Usage of Erlang/OTP older than 25.0 is still supported, but discouraged
      • Anybody still using Erlang 24.3 down to 20.0 is encouraged to upgrade to a newer version. Erlang/OTP 25.0 and higher are supported. For instance, Erlang/OTP 26.3 is used for the binary installers and container images.
    • ejabberd 25.01 (or later)

      • Support for Erlang/OTP older than 25.0 is deprecated
      • Erlang requirement softly increased in configure.ac
      • Announce: no guarantee that ejabberd can compile, start or pass the Common Tests suite using Erlang/OTP older than 25.0
      • Provide instructions for anybody to manually re-enable it and run the tests
    • ejabberd 25.01+1 (or later)

      • Support for Erlang/OTP older than 25.0 is removed completely in the source code

    Commands API v3

    This ejabberd 24.12 release introduces ejabberd Commands API v3 because some commands have changed arguments and result formatting. You can continue using API v2; or you can update your API client to use API v3. Check the API Versions History .

    Some commands that accepted accounts or rooms as arguments, or returned JIDs, have changed their arguments and results names and format to be consistent with the other commands:

    • Arguments that refer to a user account are now named user and host
    • Arguments that refer to a MUC room are now named room and service
    • As seen, each argument is now only the local part or the server part, not the full JID
    • On the other hand, results that refer to user account or MUC room are now the JID

    In practice, the commands that change in API v3 are:

    If you want to update ejabberd to 24.12, but prefer to continue using an old API version with mod_http_api , you can set this new option:

    modules:
      mod_http_api:
        default_version: 2
    

    Improvements in commands

    There are a few improvements in some commands:

    • create_rooms_file : Improved, now it supports vhosts with different config
    • evacuate_kindly : New command to kick users and prevent login ( #4309 )
    • join_cluster : Improved explanation: this returns immediately (since 5a34020, 24.06)
    • mod_muc_admin : Renamed arguments name to room for consistency, with backwards support (no need to update API clients)

    Use non-standard STUN port

    STUN via UDP can easily be abused for reflection/amplification DDoS attacks. ejabberd.yml.example now suggests a non-standard port, to make it harder for attackers to discover the service.

    Modern XMPP clients discover the port via XEP-0215, so there's no advantage in sticking to the standard port.
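A sketch of such a listener in ejabberd.yml (the port value is an arbitrary illustration, not the value suggested in ejabberd.yml.example; the standard STUN port is 3478):

```yaml
listen:
  -
    # Any non-standard port works; 5478 is just an example
    port: 5478
    transport: udp
    module: ejabberd_stun
```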

    Disable the systemd watchdog by default

    Some users reported ejabberd being restarted by systemd due to missing watchdog pings, despite the actual service operating just fine. So far, we weren't able to track down the issue, so we'll no longer enable the watchdog in our example service unit.

    Define macro as environment variable

    ejabberd has allowed you to define macros in the configuration file since version 13.10. This lets you define a value once at the beginning of the configuration file, and use that macro to set option values several times throughout the file.

    Now it is possible to define the macro value as an environment variable. The environment variable name is EJABBERD_MACRO_ followed by the macro name.

    For example, if you configured in ejabberd.yml :

    define_macro:
      LOGLEVEL: 4
    
    loglevel: LOGLEVEL
    

    Now you can define (and overwrite) that macro definition when starting ejabberd. For example, if starting ejabberd in interactive mode:

    EJABBERD_MACRO_LOGLEVEL=5 make relive
    

    This is especially useful when using containers with slightly different values (different host, different port numbers...): instead of having a different configuration file for each container, you can use a macro in your custom configuration file and define a different macro value as an environment variable when starting each container. See some example usages in CONTAINER's composer examples.
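For instance, a compose-style sketch (the service names are illustrative assumptions; the image is the one mentioned in the download section below) where two containers share one configuration file but get different log levels:

```yaml
services:
  ejabberd1:
    image: ghcr.io/processone/ejabberd
    environment:
      - EJABBERD_MACRO_LOGLEVEL=4
  ejabberd2:
    image: ghcr.io/processone/ejabberd
    environment:
      - EJABBERD_MACRO_LOGLEVEL=5
```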

    Elixir modules for authentication

    ejabberd modules can be written in the Elixir programming language since ejabberd 15.02. And now, ejabberd authentication methods can also be written in Elixir!

    This means you can write a custom authentication method in Erlang or in Elixir, or write an external authentication script in any language you want.

    There's an example authentication method in the lib/ directory. Place your custom authentication method in that directory, compile ejabberd, and configure it in ejabberd.yml :

    auth_method: 'Ejabberd.Auth.Example'
    

    For consistency with that file naming scheme, the old mod_presence_demo.ex has been renamed to mod_example.ex . Other minor changes were done on the Elixir example code.

    Redis now supports Unix Domain Socket

    Support for Unix Domain Socket was added to the listener's port option in ejabberd 20.07. More recently, ejabberd 24.06 added support in sql_server when using MySQL or PostgreSQL.
    That feature is useful to improve performance and security when those programs run on the same machine as ejabberd.

    Now the redis_server option also supports Unix Domain Socket.

    The syntax is similar to the other options: simply set unix: followed by the full path to the socket file. For example:

    redis_server: "unix:/var/run/redis/redis.socket"
    

    Additionally, we took the opportunity to switch from the wooga/eredis Erlang library, which hasn't been updated in the last six years, to the Nordix/eredis fork, which is actively maintained.

    New evacuate_kindly command

    ejabberd nowadays has around 180 commands to perform many administrative tasks. Let's review some of their use cases:

    • Did you modify the configuration file? Reload the configuration file and apply its changes

    • Did you apply some patch to ejabberd source code? Compile and install it, and then update the module binary in memory

    • Did you update ejabberd-contrib specs, or improved your custom module in .ejabberd-module ? Call module_upgrade to compile and upgrade it into memory

    • Did you upgrade ejabberd, and the upgrade includes many changes? Compile and install it, then restart ejabberd completely

    • Do you need to stop a production ejabberd which has users connected? stop_kindly the server, informing users and rooms

    • Do you want to stop ejabberd gracefully? Then simply stop it

    • Do you need to stop ejabberd immediately, without worrying about the users? You can halt ejabberd abruptly

    Now there is a new command, evacuate_kindly , useful when you need ejabberd running to perform some administrative tasks, but you don't want users connected while you perform them.

    It stops the port listeners to prevent new client or server connections, informs users and rooms, waits a few seconds or minutes, then restarts ejabberd. When ejabberd starts again, the port listeners remain stopped: this lets you perform administrative tasks, for example in the database, without having to worry about users.

    For example, assume ejabberd is running and has users connected. First, let's evacuate all the users:

    ejabberdctl evacuate_kindly 60 "The server will stop in one minute."
    

    Wait one minute, then ejabberd gets restarted with connections disabled.
    Now you can perform any administrative tasks that you need.
    Once everything is ready to accept user connections again, simply restart ejabberd:

    ejabberdctl restart
    

    Acknowledgments

    We would like to thank everyone who contributed source code, documentation and translations for this release:

    And also to all the people contributing in the ejabberd chatroom, issue tracker...

    Improvements in ejabberd Business Edition

    Customers of the ejabberd Business Edition , in addition to all those improvements and bugfixes, also get support for Prometheus.

    Prometheus support

    Prometheus can now be used as a backend for mod_mon in addition to statsd, influxdb, influxdb2, datadog and dogstatsd.

    You can expose all mod_mon metrics to Prometheus by adding an HTTP listener pointing to mod_prometheus , for example:

      -
        port: 5280
        module: ejabberd_http
        request_handlers:
          "/metrics": mod_prometheus
    

    You can then add a scrape config to Prometheus for ejabberd:

    scrape_configs:
      - job_name: "ejabberd"
        static_configs:
          - targets:
              - "ejabberd.domain.com:5280"
    

    You can also limit the metrics to a specific virtual host by adding its name to the path:

    scrape_configs:
      - job_name: "ejabberd"
        static_configs:
          - targets:
              - "ejabberd.domain.com:5280"
        metrics_path: /metrics/myvhost.domain.com
    

    Fix

    • PubSub: fix issue on get_item_name with p1db storage backend.

    ChangeLog

    This is a more detailed list of changes in this ejabberd release:

    Miscellanea

    • Elixir: support loading Elixir modules for auth ( #4315 )
    • Environment variables EJABBERD_MACRO to define macros
    • Fix problem starting ejabberd when first host uses SQL, other one mnesia
    • HTTP Websocket: Enable allow_unencrypted_sasl2 on websockets ( #4323 )
    • Relax checks for channels bindings for connections using external encryption
    • Redis: Add support for unix domain socket ( #4318 )
    • Redis: Use eredis 1.7.1 from Nordix when using mix/rebar3 and Erlang 21+
    • mod_auth_fast : New module with support for XEP-0484: Fast Authentication Streamlining Tokens
    • mod_http_api : Fix crash when module not enabled (for example, in CT tests)
    • mod_http_api : New option default_version
    • mod_muc : Make RSM handling in disco items correctly count skipped rooms
    • mod_offline : Only delete offline msgs when user has MAM enabled ( #4287 )
    • mod_privilege : Handle roster IQ properly
    • mod_pubsub : Send notifications on PEP item retract
    • mod_s2s_bidi : Catch extra case in check for s2s bidi element
    • mod_scram_upgrade : Don't abort the upgrade
    • mod_shared_roster : The name of a new group is lowercased
    • mod_shared_roster : Get back support for groupid@vhost in displayed

    Commands API

    • Change arguments and result to consistent names (API v3)
    • create_rooms_file : Improve to support vhosts with different config
    • evacuate_kindly : New command to kick users and prevent login ( #4309 )
    • join_cluster : Explain that this returns immediately (since 5a34020, 24.06)
    • mod_muc_admin : Rename argument name to room for consistency

    Documentation

    • Fix some documentation syntax, add links to toplevel, modules and API
    • CONTAINER.md : Add kubernetes yaml examples to use with podman
    • SECURITY.md : Add security policy and reporting guidelines
    • ejabberd.service : Disable the systemd watchdog by default
    • ejabberd.yml.example : Use non-standard STUN port

    WebAdmin

    • Shared group names are case sensitive, use original case instead of lowercase
    • Use lowercase username and server authentication credentials
    • Fix calculation of node's uptime days
    • Fix link to displayed group when it is from another vhost

    Full Changelog

    https://github.com/processone/ejabberd/compare/24.10...24.12

    ejabberd 24.12 download & feedback

    As usual, the release is tagged in the Git source code repository on GitHub .

    The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity .

    For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags .

    The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs . The alternative ejabberd container image is available in ghcr.io/processone/ejabberd .

    If you think you've found a bug, please search or file a bug report on GitHub Issues .

      JMP: Newsletter: JMP at SeaGL, Cheogram now on Amazon

      news.movim.eu / PlanetJabber • 18 December • 2 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    JMP at SeaGL

    The Seattle GNU/Linux Conference (SeaGL) is happening next week and JMP will be there!  We’re going to have a booth with some of our employees, and will have JMP eSIM Adapters and USB card readers for purchase (if you prefer to save on shipping, or like to pay cash or otherwise), along with stickers and good conversations. :)  The exhibition area is open all day on Friday and Saturday, November 8 and 9, so be sure to stop by and say hi if you happen to be in the area.  We look forward to seeing you!

    Cheogram Android in Amazon Appstore

    We have just added Cheogram Android to the Amazon Appstore !  And we also added Cheogram Android to Aptoide earlier this month.  While F-Droid remains our preferred official source, we understand many people prefer to use stores that they’re used to, or that come with their device.  We also realize that many people have been waiting for Cheogram Android to return to the Play Store, and we wanted to provide this other option to pay for Cheogram Android while Google works out the approval process issues on their end to get us back in there.  We know a lot of you use and recommend app store purchases to support us, so let your friends know about this new Amazon Appstore option for Cheogram Android if they’re interested!

    New features in Cheogram Android

    As usual, we’ve added a bunch of new features to Cheogram Android over the past month or so.  Be sure to update to the latest version ( 2.17.2-1 ) to check them out!  (Note that Amazon doesn’t have this version quite yet, but it should be there shortly.)  Here are the notable changes since our last newsletter: privacy-respecting link previews (generated by sender), more familiar reactions, filtering of conversation list by account, nicer autocomplete for mentions and emoji, and fixes for Android 15, among many others.

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    This post is public: blog.jmp.chat/b/october-newsletter-2024

      JMP: Newsletter: Year in Review, Google Play Update

      news.movim.eu / PlanetJabber • 18 December • 3 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    As we approach the close of 2024, we want to take a moment to reflect on a year full of growth, innovation, and connection. Thanks to your support and engagement, JMP has continued to thrive as a service that empowers you to stay connected with the world using open standards and flexible technology. Here’s a look back at some of the highlights that made this year so special:

    Cheogram Android

    Cheogram Android, which we sponsor, experienced significant developments this year. Besides the preferred distribution channel of F-Droid , the app is also available on other platforms like Aptoide and the Amazon Appstore . It was removed from the Google Play Store in September for unknown reasons, and after a long negotiation has been restored to Google Play without modification.

    Cheogram Android saw several exciting feature updates this year, including:

    • Major visual refresh
    • Animated custom emoji
    • Better Reactions UI (including custom emoji reactions)
    • Widgets powered by WebXDC for interactive chats and app extensions
    • Initial support for link previews
    • The addition of a navigation drawer to show chats from only one account or tag
    • Allowing edits to any message you have sent

    This month also saw the release of 2.17.2-3 including:

    • Fix direct shares on Android 12+
    • Option to hide media from gallery
    • Do not re-notify dismissed notifications
    • Experimental extensions support based on WebXDC
    • Experimental XEP-0227 export support

    Of course nothing in Cheogram Android would be possible without the hard work of the upstream project, Conversations , so thanks go out to the devs there as well.

    eSIM Adapter Launch

    This year, we introduced the JMP eSIM Adapter —a device that bridges the gap for devices without native eSIM support, and adds flexibility for devices with eSIM support. Whether you’re travelling, upgrading your device, or simply exploring new options, the eSIM Adapter makes it seamless to transfer eSIMs across your devices.

    Engaging with the Community

    This year, we hosted booths at SeaGL, FOSSY, and HOPE, connecting with all of you in person. These booths provided opportunities to learn about our services, pay for subscriptions, or purchase eSIM Adapters face-to-face.

    Addressing Challenges

    In 2024, we also tackled some pressing industry issues, such as SMS censorship . To help users avoid censorship and gain access to bigger MMS group chats, we’ve added new routes that you can request from our support team .

    As part of this, we also rolled out the ability for JMP customers to receive calls directly over SIP .

    Holiday Support Schedule

    We want to inform you that JMP support will be reduced from our usual response level from December 23 until January 6 . During this period, response times will be significantly longer than usual as our support staff take time with their families. We appreciate your understanding and patience.

    Looking Ahead

    As we move into 2025, we’re excited to keep building on this momentum. Expect even more features, improved services, and expanded opportunities to connect with the JMP community. Your feedback has been, and will always be, instrumental in shaping the future of JMP.

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    This post is public: blog.jmp.chat/b/december-newsletter-2024

      Erlang Solutions: Meet the team: Joanna Wrona

      news.movim.eu / PlanetJabber • 10 December • 3 minutes

    In this edition of our “Meet the Team” series, we’d like to introduce you to Joanna Wrona, Business Unit Leader for the Kraków office at Erlang Solutions.

    She discusses her role at ESL and her passion for empowering her team. She also gives us a glimpse into life in beautiful Kraków and what makes her journey so fulfilling.

    Joanna Wrona ESL

    About Joanna

    What is your role at Erlang Solutions?

    I am the Business Unit Leader at Erlang Solutions, based in Kraków, Poland. I oversee the business unit’s day-to-day operations.

    My role focuses on supporting and guiding our great team of talented and passionate Erlang and Elixir developers. I’m dedicated to creating an environment where they can thrive while ensuring we achieve our goals and stay true to our values: teamwork, knowledge sharing, sustainability, passion and fun.

    Equally important is our partnership with our customers – we work closely with them, valuing their input and collaboration as we strive to deliver the best possible outcomes.

    I also identify new market opportunities to help us grow and strengthen our regional presence. My leadership is rooted in serving our team and customers so everyone succeeds.

    What have been some highlights of your career so far?

    For over a decade, I have worked with passionate people who excel at what they do and genuinely love the technologies we use. Watching them grow, develop, and contribute to our success is incredibly gratifying.

    I’m also very proud of our low employee turnover. Many team members have been with us for over a decade, which reflects the positive work environment we’ve built and the deep fulfilment that comes from being part of such a dedicated business.

    It feels good to work here.

    What are some of the most important lessons you’ve learned about leadership?

    I’ve learned that leadership is fundamentally about service and empathy. Effective leaders prioritise the needs of their teams and create environments where people feel valued, supported, empowered, and listened to. I draw inspiration from thought leaders like Simon Sinek, Brené Brown, Robert Greenleaf and Kim Scott. Their work has helped shape my approach to servant-style leadership.

    While it’s not always easy, it’s incredibly rewarding.

    Having mentors and leadership workshops with our executive management team has also been invaluable in my journey. I’ve been fortunate to have great mentors and colleagues, like Michał Ślaski, who first brought me to Erlang Solutions and Erik Schön, a fellow Business Unit Leader in Sweden, whose insights and books on leadership continue to inspire me.

    Are there any exciting projects you and the team are working on now?

    Yes, we have recently launched the MongooseIM Autoscaler, a solution for businesses looking to increase or decrease the capacity of their MongooseIM cluster. It is also useful for businesses seeking a chat solution, for existing chat applications, and for businesses with variable loads.

    Tell us about what life is like in Poland. What does a typical day look like for you?

    My most important responsibility is as a mother to two teenagers. My schedule is often dictated by school calendars and family events!

    I’m fortunate to live in Kraków. I enjoy biking to work, which allows me to pass by iconic landmarks like Wawel Castle, Main Square, Zakrzówek and along the Vistula River.

    It’s a great way to start my day and appreciate the city’s beauty.

    Tell us one thing you can’t live without.

    Well, it is not a thing, but it’s people around me (my partner, kids, family, friends, colleagues at work). To me, life is a journey that is best when you can share it with others.

    Final thoughts

    We love connecting with people—whether it’s over coffee at our offices or conferences around the world. If you’d like to connect with Joanna, catch her at the GOTO EDA Day in Warsaw on 19th September.

    If you would like to learn more about what we do, drop the team a line .

    The post Meet the team: Joanna Wrona appeared first on Erlang Solutions .

      Erlang Solutions: Advent of Code 2024

      news.movim.eu / PlanetJabber • 4 December • 3 minutes

    Welcome to Advent of Code 2024!

    Like every year, I start the challenge with the best attitude and love of being an Elixir programmer. Although I know that at some point, I will go to the “what is this? I hate it” phase, unlike other years, this time, I am committed to finishing Advent of Code and, more importantly, sharing it with you.

    I hope you enjoy this series of December posts, where we will discuss the approach to each exercise. But remember that it is not the only one; the idea of this initiative is to have a great time and share knowledge, so don’t forget to post your solutions and comments and tag us to continue the conversation.

    Let’s go for it!

    Day 1: Historian Hysteria

    Before starting any exercise, I suggest spending some time defining the structure that best fits the problem’s needs. If the structure is adequate, it will be easy to reuse it for the second part without further complications.

    In this case, the exercise itself describes lists as the input, so we can skip that step and instead consider which functions of the Enum or List modules can be helpful.

    We have this example input:

    3   4
    4   3
    2   5
    1   3
    3   9
    3   3

    The goal is to transform it into two separate lists and apply sorting, comparison, etc.

    List 1: [3, 4, 2, 1, 3, 3]

    List 2: [4, 3, 5, 3, 9, 3]

    Let’s define a function that reads a file with the input. Each line will initially be represented by a string, so use String.split to separate it at each line break.

     def get_input(path) do
       path
       |> File.read!()
       |> String.split("\n", trim: true)
     end
    
    
    ["3   4", "4   3", "2   5", "1   3", "3   9", "3   3"]
    

    We will still have each row represented by a string, but we can now modify this using the functions in the Enum module. Notice that the whitespace between characters is constant, and the pattern is that the first element should go into list one and the second element into list two. Use Enum.reduce to map the elements to the corresponding list and get the following output:


    %{
     first_list: [3, 3, 1, 2, 4, 3],
     second_list: [3, 9, 3, 5, 3, 4]
    }
    
    

    I’m using a map so that we can identify the lists and everything is clear. The function that creates them is as follows:

     @doc """
     This function takes a list where the elements are strings with two
     components separated by whitespace.
    
    
     Example: "3   4"
    
    
     It assigns the first element to list one and the second to list two,
     assuming both are numbers.
     """
     def define_separated_lists(input) do
       Enum.reduce(input, %{first_list: [], second_list: []}, fn row, map_with_lists ->
         [elem_first_list, elem_second_list] = String.split(row, "   ")
    
    
         %{
           first_list: [String.to_integer(elem_first_list) | map_with_lists.first_list],
           second_list: [String.to_integer(elem_second_list) | map_with_lists.second_list]
         }
       end)
     end
    

    Once we have this format, we can move on to the first part of the exercise.

    Part 1

    Use Enum.sort to sort both lists in ascending order and pass them to Enum.zip_with , which calculates the distance between corresponding elements. Note that we are using abs to avoid negative values, and finally Enum.reduce to sum all the distances.

    first_sorted_list = Enum.sort(first_list)
    second_sorted_list = Enum.sort(second_list)

    first_sorted_list
    |> Enum.zip_with(second_sorted_list, fn x, y -> abs(x - y) end)
    |> Enum.reduce(0, fn distance, acc -> distance + acc end)
    

    Part 2

    For the second part, you don’t need to sort the lists; use Enum.frequencies and Enum.reduce to get the multiplication of the elements.

    frequencies_second_list = Enum.frequencies(second_list)

    Enum.reduce(first_list, 0, fn elem, acc ->
      elem * Map.get(frequencies_second_list, elem, 0) + acc
    end)
    

    That’s it. As you can see, once we have a good structure, the corresponding module, in this case Enum , makes the operations more straightforward, so it’s worth spending some time choosing the input structure that will make our life easier.
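As a quick sanity check, here is a sketch that runs both parts over the example lists built above; the results match the puzzle's expected answers for the sample input:

```elixir
first_list = [3, 4, 2, 1, 3, 3]
second_list = [4, 3, 5, 3, 9, 3]

# Part 1: sum of distances between sorted pairs
distance =
  Enum.sort(first_list)
  |> Enum.zip_with(Enum.sort(second_list), fn x, y -> abs(x - y) end)
  |> Enum.sum()
# distance == 11

# Part 2: similarity score using frequencies of the second list
freqs = Enum.frequencies(second_list)

similarity =
  Enum.reduce(first_list, 0, fn elem, acc ->
    elem * Map.get(freqs, elem, 0) + acc
  end)
# similarity == 31
```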

    You can see the full version of the exercise here .

    The post Advent of Code 2024 appeared first on Erlang Solutions .

      Erlang Solutions: Optimising for Concurrency: Comparing and contrasting the BEAM and JVM virtual machines

      news.movim.eu / PlanetJabber • 29 November • 17 minutes

    The success of any programming language in the Erlang ecosystem can be apportioned into three tightly coupled components. They are the semantics of the Erlang programming language (on top of which other languages are implemented), the OTP libraries and middleware (used to architect scalable and resilient concurrent systems), and the BEAM virtual machine, tightly coupled to the language semantics and OTP.

    Take any of these components on their own, and you have a runner-up. But put the three together, and you have the uncontested winner for scalable, resilient, soft real-time systems. To quote Joe Armstrong, “You can copy the Erlang libraries, but if it does not run on BEAM, you can’t emulate the semantics”. This is enforced by Robert Virding’s First Rule of Programming, which states that “Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.”

    In this post, we want to explore the BEAM VM internals. We will compare and contrast them with the JVM where applicable, highlighting why you should pay attention to them and care. For too long, this component has been treated as a black box and taken for granted, without understanding the reasons or implications. It is time to change that!

    Highlights of the BEAM

    Erlang and the BEAM VM were invented to be the right tools to solve a specific problem. Ericsson developed them to help implement telecom infrastructure, handling both mobile and fixed networks. This infrastructure is highly concurrent and scalable in nature. It has to display soft real-time properties and may never fail. We don’t want our phone calls dropped or our online gaming experience affected by system upgrades, high user load or software, hardware and network outages. The BEAM VM solves these challenges using a state-of-the-art concurrent programming model. It features lightweight BEAM processes which don’t share memory and are managed by the BEAM’s schedulers, which can handle millions of them across multiple cores, with garbage collection running on a per-process basis, highly optimised to reduce any impact on other processes. The BEAM is also the only VM widely used at scale with a built-in distribution model, which allows a program to run on multiple machines transparently.

    The BEAM VM supports zero-downtime upgrades with hot code replacement , a way to modify application code at runtime. It is probably the most cited unique feature of the BEAM. Hot code loading means that the application logic can be updated by changing the runnable code in the system whilst retaining the internal process state. This is achieved by replacing the loaded BEAM files and instructing the VM to replace the references of the code in the running processes.

    It is a crucial feature for zero-downtime code upgrades of telecom infrastructure, where redundant hardware was put to use to handle spikes. Nowadays, in the era of containerisation, other techniques are also used for production updates. Those who have never used it dismiss it as a less important feature, but it is nonetheless useful in the development workflow. Developers can iterate faster by replacing part of their code without having to restart the system to test it. Even if the application is not designed to be upgradable in production, this can reduce the time needed for recompilation and redeployments.

    Highlights of the JVM

    The Java Virtual Machine (JVM) was invented by Sun Microsystems with the intent to provide a platform for ‘write once’ code that runs everywhere. They created an object-oriented language similar to C++ , but memory-safe because its runtime error detection checks array bounds and pointer dereferences. The JVM ecosystem became extremely popular in the Internet era, making it the de facto standard for enterprise server applications. The wide range of applicability was enabled by a virtual machine that caters for many use cases, and an impressive set of libraries supporting enterprise development.

    The JVM was designed with efficiency in mind. Most of its concepts are abstractions of features found in popular operating systems such as the threading model which maps the VM threads to operating system threads. The JVM is highly customisable, including the garbage collector (GC) and class loaders. Some state-of-the-art GC implementations provide highly tunable features catering for a programming model based on shared memory. And, the JIT (Just-in-time) compiler automatically compiles bytecode to native machine code with the intent to speed up parts of the application.

    The JVM allows you to change the code while the program is running. It is a very useful feature for debugging purposes, but production use of this feature is not recommended due to serious limitations .

    Concurrency and Parallelism

    We talk about parallel code execution if parts of the code are run at the same time on multiple cores, processors or computers, while concurrent programming refers to handling events arriving at the system independently. Concurrent execution can be simulated on single-core hardware, while parallel execution cannot. Although this distinction may seem pedantic, the difference results in some very different problems to solve. Think of many cooks making a plate of carbonara pasta. In the parallel approach, the tasks are split across the number of cooks available, and a single portion would be completed as quickly as it took these cooks to complete their specific tasks. In a concurrent world, you would get a portion for every cook, where each cook does all of the tasks. You use parallelism for speed and concurrency for scale.

    Parallel execution tries to decompose the problem into parts that are independent of each other. Boil the water, get the pasta, mix the egg, fry the guanciale ham, and grate the pecorino cheese 1 . The shared data (or in our example, the serving dish) is handled by locks, mutexes and various other techniques to guarantee correctness. Another way to look at this is that the data (or ingredients) are present, and we want to utilise as many parallel CPU resources as possible to finish the job as quickly as possible.

    Concurrent programming, on the other hand, deals with many events that arrive at the system at different times and tries to process all of them within a reasonable timeframe. On multi-core or distributed architectures, some of the processing may run in parallel. Another way to look at it is that the same cook boils the water, gets the pasta, mixes the eggs and so on, following a sequential algorithm which is always the same. What changes across processes (or cooks) is the data (or ingredients) to work on, which exist in multiple instances.

    In summary, concurrency and parallelism are two intrinsically different problems, requiring different solutions.

    Concurrency the Java way

    In Java, concurrent execution is implemented using VM threads. Before the latest developments, only one threading model, called Platform Threads, existed. As it is a thin abstraction layer above operating system threads, Platform Threads are scheduled in a rather simple, priority-based way, leaving most of the work to the underlying operating system. With Java 21, a new threading model was introduced: Virtual Threads. This new model is very similar to BEAM processes, since virtual threads are scheduled by the JVM, providing better performance in applications where thread contention is not negligible. Scheduling works by mounting a virtual thread onto a carrier (OS) thread, unmounting it when it becomes blocked, and replacing it with another virtual thread from the pool.

    Since Java promotes the use of shared data structures, both threading models suffer from performance bottlenecks caused by synchronisation-related issues like frequent CPU cache invalidation and locking errors. Also, programming with concurrency primitives is a difficult task because of the challenges created by the shared memory model. To overcome these difficulties, there are attempts to simplify and unify the concurrent programming models, with the most successful attempt being the Akka framework . Unfortunately, it is not widely used, limiting its usefulness as a unified concurrency model, even for enterprise-grade applications. While Akka does a great job at replicating the higher-level constructs, it is somewhat limited by the lack of JVM primitives that would allow it to be highly optimised for concurrency. While the primitives of the JVM enable a wider range of use cases, they make programming distributed systems harder, as they have no built-in primitives for communication and are often based on a shared memory model. For example, where in a distributed system do you place your shared memory? And what is the cost of accessing it?

    Garbage Collection

    Garbage collection is a critical task for most applications, but applications may have very different performance requirements. Since the JVM is designed to be a ubiquitous platform, it is evident that there is no one-size-fits-all solution. There are garbage collectors designed for resource-limited environments such as embedded devices, and also for resource-intensive, highly concurrent or even real-time applications. The JVM GC interface makes it possible to use 3rd party collectors as well.

    Due to the Java Memory Model , concurrent garbage collection is a hard task. The JVM needs to keep track of the memory areas that are shared between multiple threads, the access patterns to the shared memory, thread states, locks and so on. Because of shared memory, collections affect multiple threads simultaneously, making it difficult to predict the performance impact of GC operations. So difficult, that there is an entire industry built to provide tools and expertise for GC optimisation.

    The BEAM and Concurrency

    Some say that the JVM is built for parallelism, the BEAM for concurrency. While this might be an oversimplification, its concurrency model makes the BEAM more performant in cases where thousands or even millions of concurrent tasks should be processed in a reasonable timeframe.

    The BEAM provides lightweight processes to give context to the running code. BEAM processes are different from operating system processes, but they share many concepts. BEAM processes, also called actors, don’t share memory, but communicate through message passing, copying data from one process to another. Message passing is a feature that the virtual machine implements through mailboxes owned by individual processes. It is a non-blocking operation, which means that sending a message to another process is almost instantaneous and the execution of the sender is not blocked during the operation. The messages sent are in the form of immutable data, copied from the stack of the sending process to the mailbox of the receiving one. There are no shared data structures, so this can be achieved without the need for locks and mutexes among the communicating processes, only a lock on the mailbox in case multiple processes send a message to the same recipient in parallel.

    Immutable data and message passing together enable the programmer to write processes which work independently of each other and focus on functionality instead of the low-level management of the memory and scheduling of tasks. It turns out that this simple design is effective on both a single thread and multiple threads on a local machine running in the same VM and, using the inter-VM communication facilities of the BEAM, across the network and machines running the BEAM. Because messages are immutable, processes can be scheduled to run on another OS thread (or machine) without locking, providing almost linear scaling on distributed, multi-core architectures. Processes are handled in the same way on a local VM as in a cluster of VMs; message sending works transparently regardless of the location of the receiving process.
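The mechanics described above fit in a few lines of Erlang. In the shell session below, a process is spawned, a message is copied into its mailbox by a non-blocking send, and it replies to the sender; the `Echo`, `ping` and `pong` names are purely illustrative:

```erlang
1> Echo = spawn(fun() ->
..     receive {ping, From} -> From ! pong end
.. end).
2> Echo ! {ping, self()}.   %% non-blocking; the tuple is copied into Echo's mailbox
3> receive pong -> got_pong after 1000 -> timeout end.
got_pong
```

Note that even if the send on line 2 happens before `Echo` reaches its `receive`, nothing is lost: the message simply waits in the mailbox.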

    Processes do not share memory, allowing data replication for resilience and distribution for scale. With two instances of the same process running on one machine or on separate machines, state updates can be shared between them. If one of the processes or machines fails, the other has an up-to-date copy of the data and can continue handling requests without interruption, making the system fault-tolerant. If more than one machine is operational, all the processes can handle requests, giving you scalability. The BEAM provides highly optimised primitives for all of this to work seamlessly, while OTP (the “standard library”) provides the higher level constructs to make the life of the programmers easy.

    Scheduler

    We mentioned that one of the strongest features of the BEAM is the ability to run concurrent tasks in lightweight processes. Managing these processes is the task of the scheduler.

    The scheduler starts, by default, an OS thread for every core and optimises the workload between them. Each process consists of code to be executed and a state which changes over time. The scheduler picks the first process in the run queue that is ready to run, and gives it a certain amount of reductions to execute, where each reduction is the rough equivalent of a BEAM command. Once the process has run out of reductions, is blocked by I/O, is waiting for a message, or has completed executing its code, the scheduler picks the next process from the run queue and dispatches it. This scheduling technique is called pre-emptive.
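Both facts are easy to observe from an Erlang shell: `erlang:system_info/1` reports the number of scheduler threads, and `erlang:process_info/2` exposes the reduction counter of a process (the numbers shown are illustrative and vary per machine and run):

```erlang
%% One scheduler thread per core, by default:
1> erlang:system_info(schedulers_online).
8
%% How many reductions the shell process has executed so far:
2> erlang:process_info(self(), reductions).
{reductions,4523}
```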

    We have mentioned the Akka framework many times. Its biggest drawback is the need to annotate the code with scheduling points, as scheduling is not done at the JVM level. By taking this control out of the programmer’s hands, the BEAM preserves and guarantees soft real-time properties, as there is no risk of accidentally causing process starvation.

    The processes can be spread around the available scheduler threads to maximise CPU utilisation. There are many ways to tweak the scheduler but it is rarely needed, only for edge cases, as the default configuration covers most usage patterns.

    There is a sensitive topic that frequently pops up regarding schedulers: how to handle Native Implemented Functions (NIFs) . A NIF is a code snippet written in C, compiled as a library and run in the same memory space as the BEAM for speed. The problem with NIFs is that they are not pre-emptive, and can affect the schedulers. In recent BEAM versions, a new feature, dirty schedulers, was added to give better control for NIFs. Dirty schedulers are separate schedulers that run in different threads to minimise the interruption a NIF can cause in a system. The word dirty refers to the nature of the code that is run by these schedulers.

    Garbage Collector

    Modern, high-level programming languages today mostly use a garbage collector for memory management. The BEAM languages are no exception. Trusting the virtual machine to handle the resources and manage the memory is very handy when you want to write high-level concurrent code, as it simplifies the task. The underlying implementation of the garbage collector is fairly straightforward and efficient, thanks to the memory model based on an immutable state. Data is copied, not mutated and the fact that processes do not share memory removes any process inter-dependencies, which, as a result, do not need to be managed.

    Another feature of the BEAM is that garbage collection is run only when needed, on a per-process basis, without affecting other processes waiting in the run queue. As a result, garbage collection in Erlang does not ‘stop the world’. It prevents processing latency spikes because the VM is never stopped as a whole – only specific processes are, and never all of them at the same time. In practice, it is just part of what a process does and is treated as another reduction. Collecting garbage suspends the process for a very short interval, often measured in microseconds. As a result, there will be many small bursts, triggered only when the process needs more memory. A single process usually doesn’t allocate large amounts of memory, and is often short-lived, further reducing the impact by immediately freeing up all its allocated memory on termination.
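The per-process nature of collection can be observed directly: each process carries its own heap and GC counters, and a collection can be forced for a single process without touching any other (the field values shown are illustrative):

```erlang
1> erlang:process_info(self(), [memory, garbage_collection]).
[{memory,13832},
 {garbage_collection,[{minor_gcs,7},{fullsweep_after,65535}|_]}]
2> erlang:garbage_collect(self()).   %% collects only this one process
true
```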

    More about features

    The features of the garbage collector are discussed in an excellent blog post by Lukas Larsson . There are many intricate details, but it is optimised to handle immutable data in an efficient way, dividing the data between the stack and the heap for each process. The best approach is to do the majority of the work in short-lived processes.

    A question that often comes up on this topic is how much memory the BEAM uses. Under the hood, the VM allocates big chunks of memory and uses custom allocators to store the data efficiently and minimise the overhead of system calls.

    This has two visible effects: the used memory decreases gradually after the space is no longer needed, and reallocating huge amounts of data might mean doubling the current working memory. The first effect can, if necessary, be mitigated by tweaking the allocator strategies. The second one is easy to monitor and plan for if you have visibility of the different types of memory usage. (One monitoring tool that provides such system metrics out of the box is WombatOAM ).

    JVM vs BEAM concurrency

    As mentioned before, the JVM and the BEAM handle concurrent tasks very differently. Under high load, shared resources become bottlenecks. In a Java application, we usually can’t avoid that. That’s why the BEAM is superior in these kinds of applications. While memory copy has a certain cost, the performance impact caused by the synchronised access to shared resources is much higher. We performed many tests to measure this impact.

    JVM and the BEAM

    This chart nicely displays the large differences in performance between the JVM and the BEAM. In this test, the applications were implemented in Elixir and Java. The Elixir code compiles to the BEAM, while the Java code, evidently, compiles to the JVM.

    When not to use the BEAM

    It is very much about the right tool for the job. Do you need a system to be extremely fast, but are not concerned about concurrency? Handling a few events in parallel, and having to handle them fast? Need to crunch numbers for graphics, AI or analytics? Go down the C++, Python or Java route. Telecom infrastructure does not need fast operations on floats, so speed was never a priority. Add dynamic typing, which has to do all type checks at runtime, and compile-time optimisations are not as trivial. So number crunching is best left to the JVM, Go or other languages which compile to native code. It is no surprise that floating point operations on Erjang, the version of Erlang running on the JVM, were 5000% faster than on the BEAM. But where we’ve seen the BEAM shine is in using its concurrency to orchestrate number crunching, outsourcing the analytics to C, Julia, Python or Rust. You do the map outside the BEAM and the reduction within the BEAM.

    The mantra has always been fast enough. It takes a few hundred milliseconds for humans to perceive a stimulus (an event) and process it in their brains, meaning that micro or nano-second response time is not necessary for many applications. Nor would you use the BEAM for microcontrollers; it is too resource-hungry. But for embedded systems with a bit more processing power, where multi-core is becoming the norm and you need concurrency, the BEAM shines. Back in the 90s, we were implementing telephony switches handling tens of thousands of subscribers running in embedded boards with 16 MB of memory. How much memory does a Raspberry Pi have these days? And finally, hard real-time systems. You would probably not want the BEAM to manage your airbag control system. You need hard guarantees, something only a hard real-time OS and a language with no garbage collection or exceptions can provide. An implementation of an Erlang VM running on bare metal such as GRiSP will give you similar guarantees.

    Conclusion

    Use the right tool for the job. If you are writing a soft real-time system which has to scale out of the box, should never fail, and do so without the hassle of having to reinvent the wheel, the BEAM is the battle-proven technology you are looking for.

    For many, it works as a black box. Not knowing how it works would be analogous to driving a Ferrari without being capable of achieving optimal performance or understanding which part of the motor that strange sound is coming from. This is why you should learn more about the BEAM, understand its internals and be ready to fine-tune and fix it.

    For those who have used Erlang and Elixir in anger, we have launched a one-day instructor-led course which will demystify and explain a lot of what you saw whilst preparing you to handle massive concurrency at scale. The course is available through our new instructor-led remote training, learn more here . We also recommend The BEAM book by Erik Stenman and the BEAM Wisdoms , a collection of articles by Dmytro Lytovchenko.

    If you’d like to speak to a member of the team, feel free to drop us a message .

    The post Optimising for Concurrency: Comparing and contrasting the BEAM and JVM virtual machines appeared first on Erlang Solutions .

    • wifi_tethering open_in_new

      This post is public

      www.erlang-solutions.com /blog/optimising-for-concurrency-comparing-and-contrasting-the-beam-and-jvm-virtual-machines/

    • chevron_right

      Ignite Realtime Blog: Florian, Dan and Dave Elected in the XSF!

      news.movim.eu / PlanetJabber • 21 November • 1 minute

    In an annual vote, not one, not two, but three Ignite Realtime community members have been selected into leadership positions of the XMPP Standards Foundation! :partying_face:

    The XMPP Standards Foundation is an independent, nonprofit standards development organisation whose primary mission is to define open protocols for presence, instant messaging, and real-time communication and collaboration on top of the IETF’s Extensible Messaging and Presence Protocol (XMPP). Most of the projects that we’re maintaining in the Ignite Realtime community have a strong dependency on XMPP.

    The XSF Board of Directors, in which both @Flow and @dwd are elected, oversees the business affairs of the organisation. They are now in a position to make key decisions on the direction of XMPP technology and standards development, manage resources and partnerships to further the growth of the XMPP ecosystem and promote XMPP in the larger open-source and communications community, advocating for its adoption and use in various applications.

    The XMPP Council, to which @danc has been re-elected, is the technical steering group that approves XMPP Extension Protocols. The Council is responsible for standards development and process management. With that, Dan is now at the forefront of new developments within the XMPP community!

    Congrats to you all, Dan, Dave and Florian!

    For other release announcements and news follow us on Mastodon or X

    1 post - 1 participant

    Read full topic

    • chevron_right

      Erlang Solutions: MongooseIM 6.3: Prometheus, CockroachDB and more

      news.movim.eu / PlanetJabber • 14 November • 9 minutes

    MongooseIM is a scalable, efficient, high-performance instant messaging server using the proven, open, and extensible XMPP protocol. With each new version, we introduce new features and improvements. For example, version 6.2.0 introduced our new CETS in-memory storage, making setup and autoscaling in cloud environments easier than before (see the blog post for details). The latest release 6.3.0 is no exception. The main highlight is the complete instrumentation rework, allowing seamless integration with modern monitoring solutions like Prometheus.

    Additionally, we have added CockroachDB to the list of supported databases, so you can now let this highly scalable database grow with your applications while avoiding being locked into your cloud provider.

    Observability and instrumentation

    In software engineering, observability is the ability to gather data from a running system to figure out what is going on inside: is it working as expected? Does it have any issues? How much load is it handling, and could it do more? There are many ways to improve the observability of a system, and one of the most important is instrumentation . Just like adding extra measuring equipment to a physical system, this means adding additional code to the software. It allows the system administrator to observe the internal state of the system. This comes with a price. There is more work for the developers, increased complexity, and potential performance degradation caused by the collection and processing of additional data.

    However, the benefits usually outweigh the costs, and the ability to inspect the system is often a critical requirement. It is also worth noting that the metrics and events gathered by instrumentation can be used for further automation, e.g. for autoscaling or sending alarms to the administrator.

    Instrumentation in MongooseIM

    Even before our latest release of MongooseIM, there have been multiple means to observe its behaviour:

    Metrics provide numerical values of measured system properties. The values change over time, and a metric can present the current value, a sum from a sliding window, or a statistic (histogram) of values from a given time period. Prior to version 6.3, MongooseIM used to store such metrics with the help of the exometer library. To view the metrics, one had to configure an Exometer exporter, which would periodically send the metrics to an external service using the Graphite protocol. Because of the protocol, the metrics would be exported to Graphite or InfluxDB version 1 . One could also query a limited subset of metrics using our GraphQL API (or the legacy REST API) or with the command line interface. Alternatively, metrics could be retrieved from the Erlang shell of a running MongooseIM node.

    Logs are another type of instrumentation present in the code. They inform about events occurring in the system, and since version 4 they have an extensible map-like structure and can be formatted e.g. as plain text or JSON. Subsequently, they can be shown in the console or stored in files. You can also set up a log management system like the Elastic (ELK) Stack or Splunk – see the documentation for more details.

    The diagram below shows how these two types of instrumentation can work together:

    The first observation is that the instrumented code needs to call the log and metric APIs separately. Updating a metric and logging an event requires two distinct function calls. Moreover, if there are multiple metrics (e.g. execution time and total number of calls), multiple function calls are required. There is potential for inconsistency between metrics, or between metrics and logs, because an error could happen between the function calls. The main issue with this solution, however, is the hardcoding of Exometer as the metric library and the limitation of the Graphite protocol used to push the metrics to external services.

    Instrumentation rework in MongooseIM 6.3

    The lack of support for the modern and widespread Prometheus protocol was one of the main reasons for the complete rework of instrumentation in version 6.3. Let’s see the updated diagram of MongooseIM instrumentation:

    The most noticeable difference is that in the instrumented code, there is just one event emitted. Such an event is identified by its name and a key-value map of labels and contains measurements (with optional metadata) organised in a key-value map. Each event has to be registered before its instances are emitted with particular measurements. The point of this preliminary step is not only to ensure that all events are handled but also to provide additional information about the event, e.g. the measurement keys that will be used to update metrics. Emitted events are then handled by configurable handlers . Currently, there are three such handlers. Exometer and Logger work similarly as before, but there is a new Prometheus handler as well, which stores the metrics internally in a format compatible with Prometheus and exposes them over an HTTP API. This means that any external service can now scrape the metrics using the Prometheus protocol. The primary case would be to use Prometheus for metrics collection, and a graphical tool like Grafana for display. If you however prefer InfluxDB version 2, you can easily configure a scraper , which would periodically put new data into InfluxDB.

    As you can see in the diagram, logs can also be emitted directly, bypassing the instrumentation API. This is the case for multiple logs in the system, because often there is no need for any metrics, and a log message is enough. In the future though, we might decide to fully replace logs with instrumentation events, because they are more extensible.

    Apart from supporting the Prometheus protocol, additional benefits of the new solution include easier configuration, extensibility, and the ability to add more handlers in the future. You can also have multiple handlers enabled simultaneously, allowing you to gradually change your metric backend from Exometer to Prometheus. Conversely, you can also disable all instrumentation, which was not possible prior to version 6.3. Although it might make little sense at first glance, because it can render the system a black box, it can be useful to gain extra performance in some cases, e.g. if the external metrics like CPU usage are enough, in case of an isolated embedded system, or if resources are very limited.

    The table below compares the legacy metrics solution with the new instrumentation framework:

    | Solution | Legacy: mongoose_metrics | New: mongoose_instrument |
    |---|---|---|
    | Intended use | Metrics | Metrics, logs, distributed tracing, alarms, … |
    | Coupling with handlers | Tight: hardcoded Exometer logic, one metric update per function call | Loose: events separated from configurable handlers |
    | Supported handlers | Exometer is hardcoded | Exometer, Prometheus, Log |
    | Events identified by | Exometer metric name (a list) | Event name and labels (key-value map) |
    | Event value | Single-dimensional numerical value | Multi-dimensional measurements with metadata |
    | Consistency checks | None – it is up to the implementer to verify that the correct metric is created and updated | Events have to be registered before they can be emitted |
    | API | GraphQL / CLI and REST | Prometheus HTTP endpoint, legacy GraphQL / CLI / REST for Exometer |

    There are about 140 events in total, and some of them have multiple dimensions. You can find an overview in the documentation . In terms of dashboards for tools like Grafana, we believe that each use case of MongooseIM deserves its own. If you are interested in getting one tailored to your needs, don’t hesitate to contact us .

    Using the instrumentation

    Let’s see the new instrumentation in action now. Starting with configuration, let’s examine the new additions to the default configuration file :

    [[listen.http]]
      port = 9090
      transport.num_acceptors = 10
    
      [[listen.http.handlers.mongoose_prometheus_handler]]
        host = "_"
        path = "/metrics"
    
    (...)
    
    [instrumentation.prometheus]
    
    [instrumentation.log]
    
    

    The first section, [[listen.http]] , specifies the Prometheus HTTP endpoint. The following [instrumentation.*] sections enable the Prometheus and Log handlers with the default settings – in general, instrumentation events are logged on the DEBUG level, but you can change it. This configuration is all you need to see the metrics at http://localhost:9090/metrics when you start MongooseIM.
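On the Prometheus side, a minimal scrape configuration pointing at the HTTP listener shown above could look like this (the job name and scrape interval are arbitrary choices; the target port is the one from the [[listen.http]] section):

```yaml
scrape_configs:
  - job_name: "mongooseim"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9090"]
```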

    As a second example, let’s say that you want only the Graphite protocol integration. In this case, you might configure MongooseIM to use only the Exometer handler, which would push the metrics prefixed with mim to the influxdb1 host every 60 seconds:

    [[instrumentation.exometer.report.graphite]]
      interval = 60_000
      prefix = "mim"
      host = "influxdb1"
    


    There are more options possible, and you can find them in the documentation .

    Tracing – ad-hoc instrumentation

    There is one more type of observability available in Erlang systems, which is tracing . It enables a user to have a more in-depth look into the Erlang processes, including the functions being called and the internal messages being exchanged. It is meant to be used by Erlang developers, and should not be used in production environments because of the impact it can have on a running system. It is good to know about, however, because it could be helpful to diagnose unusual issues. To make tracing more user-friendly, MongooseIM now includes erlang_doctor with some MongooseIM-specific utilities (see the tr_util module). This tool provides low-level ad-hoc instrumentation, allowing you to instrument functions in a running system and gather the resulting data in an in-memory table, which can then be queried, processed, and – if needed – exported to a file. Think of it as a backup solution, which could help you diagnose hidden issues, should you ever experience one.
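As a sketch of the workflow, assuming the `tr` module from erlang_doctor is available on the node (function names follow its documented API, but check the version you are running; `mongoose_c2s` is just an example module to trace):

```erlang
1> tr:start().                  %% start collecting traces into an in-memory table
2> tr:trace([mongoose_c2s]).    %% trace calls to a module of interest
   %% ... reproduce the problematic behaviour ...
3> tr:stop_tracing().
4> tr:select().                 %% query the collected trace records
```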

    CockroachDB – a database that scales with MongooseIM

    MongooseIM works best when paired with a relational database like PostgreSQL or MySQL, enabling easy cluster node discovery with CETS and persistent storage for users’ accounts, archived messages and other kinds of data. Although such databases are not horizontally scalable out of the box, you can use managed solutions like Amazon Aurora , AlloyDB or Azure Cosmos DB for PostgreSQL . The downsides are the possible vendor lock-in and the fact that you cannot host and manage the DB yourself. With version 6.3 however, the possibilities are extended to CockroachDB . This PostgreSQL-compatible distributed database can be used either as a provider-independent cloud-based solution or as an internally hosted cluster. You can instantly set it up in your local environment and take advantage of the horizontal scalability of both MongooseIM and CockroachDB. If you want to learn how to deploy both MongooseIM and CockroachDB in Kubernetes, see the documentation for CockroachDB and the Helm chart for MongooseIM, together with our recent blog post about setting up an auto-scalable cluster. If you are interested in having an auto-scalable solution deployed for you, please consider our MongooseIM Autoscaler .
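In the MongooseIM configuration, pointing the outgoing RDBMS pool at CockroachDB could look roughly like this; the host, credentials and pool size are placeholders, so check the outgoing pools documentation for the exact options available in your version:

```toml
[outgoing_pools.rdbms.default]
  workers = 5

  [outgoing_pools.rdbms.default.connection]
    driver = "cockroachdb"
    host = "localhost"
    port = 26257            # CockroachDB's default SQL port
    database = "mongooseim"
    username = "mongooseim"
    password = "secret"
```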

    Summary

    MongooseIM 6.3.0 opens new possibilities for observability – the Prometheus protocol is supported instantly with a new reworked instrumentation layer underneath, guaranteeing ease of future extensions. Regarding database integration, you can now use CockroachDB to store all your persistent data. Apart from these changes, the latest version introduces a multitude of improvements and updates – see the release notes for more information. As the next step, we recommend visiting our product page to see the possible options of support and the services we offer. You can also try the server out at trymongoose.im . In any case, should you have any further questions, feel free to contact us .

    The post MongooseIM 6.3: Prometheus, CockroachDB and more appeared first on Erlang Solutions .

    • wifi_tethering open_in_new

      This post is public

      www.erlang-solutions.com/blog/mongooseim-6-3-prometheus-cockroachdb-and-more/

    • chevron_right

      ProcessOne: Docker: set up ejabberd and keep it updated automagically with Watchtower

      news.movim.eu / PlanetJabber • 12 November • 5 minutes

    This blog post will guide you through the process of setting up an ejabberd Community Server using Docker and Docker Compose , and will also introduce Watchtower for automatic updates. This approach ensures that your configuration remains secure and up to date.

    Furthermore, we will examine the potential risks associated with automatic updates and suggest Diun as an alternative tool for notification-based updates.

    1. Prerequisites

    Please ensure that Docker and Docker Compose are installed on your system.
    It would be beneficial to have a basic understanding of Docker concepts, including containers, volumes, and bind-mounts.

    2. Set up ejabberd in a docker container

    Let’s first create a minimal Docker Compose configuration to start an ejabberd instance.

    2.1: Prepare the directories

    For this setup, we will create a directory structure to store the configuration, database, and logs. This will assist in maintaining an organised setup, facilitating data management and backup.

    mkdir ejabberd-setup && cd ejabberd-setup
    touch docker-compose.yml
    mkdir conf
    touch conf/ejabberd.yml
    mkdir database
    mkdir logs
    

    This should give you the following structure:

    ejabberd-setup/
    ├── conf
    │   └── ejabberd.yml
    ├── database
    ├── docker-compose.yml
    └── logs
    

    To verify the structure, use the tree command. It is a very useful tool which we use on a daily basis.

    Set permissions

    Since we'll be using bind mounts in this example, it’s important to ensure that specific directories (like database and logs) have the correct permissions for the ejabberd user inside the container (UID 9000 , GID 9000 ).

    Customize or skip depending on your needs:

    sudo chown -R 9000:9000 database
    sudo chown -R 9000:9000 logs
    

    Based on this Issue .

    2.2: The docker-compose.yml file

    Now, create a docker-compose.yml file inside, containing:

    services:
      ejabberd:
        image: ejabberd/ecs:latest
        container_name: ejabberd
        ports:
          - "5222:5222"  # XMPP Client
          - "5280:5280"  # Web Admin Interface, optional
        volumes:
          - ./database:/home/ejabberd/database
          - ./conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml
          - ./logs:/home/ejabberd/logs
        restart: unless-stopped
    
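    Optionally, you can let Docker monitor the instance itself by adding a healthcheck to the service. This is a sketch that assumes the ejabberd/ecs image layout used above, where ejabberdctl is available at bin/ejabberdctl :

    ```yaml
    services:
      ejabberd:
        # ... same service definition as above ...
        healthcheck:
          test: ["CMD", "bin/ejabberdctl", "status"]   # healthy when the node responds
          interval: 60s
          timeout: 10s
          retries: 3
          start_period: 30s
    ```

    With this in place, docker ps will report the container as healthy or unhealthy instead of just "up".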

    2.3: The ejabberd.yml file

    A basic configuration file for ejabberd will be required. We will name it conf/ejabberd.yml .

    loglevel: 4
    hosts:
    - "localhost"
    
    acl:
      admin:
        user:
          - "admin@localhost"
    
    access_rules:
      local:
        allow: all
    
    listen:
      -
        port: 5222
        module: ejabberd_c2s
    
      -
        port: 5280                       # optional
        module: ejabberd_http            # optional
        request_handlers:                # optional
          "/admin": ejabberd_web_admin   # optional
    

    Did you know? Since 23.10 , ejabberd offers users the option to create or update the relevant MySQL, PostgreSQL or SQLite tables automatically with each update. You can read more about it here .
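    If you later move from the default Mnesia database to SQL, that feature is enabled with a single option. The fragment below is a sketch assuming a PostgreSQL backend; the sql_* values are placeholders for your own database:

    ```yaml
    # ejabberd.yml (fragment) – automatic schema creation/updates, ejabberd >= 23.10
    update_sql_schema: true
    default_db: sql
    sql_type: pgsql
    sql_server: "db.example.org"
    sql_database: "ejabberd"
    sql_username: "ejabberd"
    sql_password: "change_me"
    ```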

    3: Starting ejabberd

    Finally, we're set: you can run the following command to start your stack: docker-compose up -d

    Your ejabberd instance should now be running in a Docker container! Good job! 🎉

    From there, customize ejabberd to your liking! In this example we're keeping ejabberd in its barebones configuration, but this is the stage where we recommend configuring it to suit your needs (domains, SSL, favourite modules, chosen database, admin accounts, etc.)

    Example: You could register your admin account at this stage

    To use the admin interface, you need to create an admin account. You can do so by running the following command:

    $ docker exec -it ejabberd bin/ejabberdctl register admin localhost very_secret_password
    > User admin@localhost successfully registered
    

    Once this step is complete, you will then be able to access the web admin interface at http://localhost:5280/admin .
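    A couple of quick sanity checks can confirm everything is wired up. These commands assume the container from this guide is running and named ejabberd :

    ```shell
    # Check that the ejabberd node is up inside the container
    docker exec ejabberd bin/ejabberdctl status

    # Check that the web admin listener answers on the host
    # (an HTTP status of 200 or 401 means the listener is reachable)
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5280/admin
    ```
    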

    4. Set up automatic updates

    Finally, we come to the most interesting part: how do I keep my containers up to date?

    To keep your ejabberd instance up-to-date, you can use Watchtower , a Docker container that automatically updates other containers when new versions are available.

    Warning: Auto-updates are undoubtedly convenient, but they can occasionally cause issues if an update includes breaking changes. Always test updates in a staging environment and back up your data before enabling auto-updates. Further information can be found at the end of this post.

    If greater control over updates is required (for example, for mission-critical production servers or clusters), we recommend using Diun , which can notify you of available updates and allow you to decide when to apply them.
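    For reference, a minimal Diun service can live in the same docker-compose.yml . This is a sketch using Diun&#39;s Docker provider; the schedule is an assumption and you would still add a DIUN_NOTIF_* configuration for your preferred notification channel (mail, Slack, etc.):

    ```yaml
    services:
      diun:
        image: crazymax/diun:latest
        container_name: diun
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - "DIUN_WATCH_SCHEDULE=0 */6 * * *"   # check for new images every 6 hours
          - DIUN_PROVIDERS_DOCKER=true          # watch running Docker containers
        restart: unless-stopped
    ```

    Unlike Watchtower, Diun only notifies you; applying the update remains a deliberate, manual step.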

    4.1: Add Watchtower to your docker-compose.yml

    To include Watchtower , add it as a service in docker-compose.yml :

    services:
      ejabberd:
        image: ejabberd/ecs:latest
        container_name: ejabberd
        ports:
          - "5222:5222"  # XMPP Client
          - "5280:5280"  # Web Admin Interface, optional
        volumes:
          - ./database:/home/ejabberd/database
          - ./conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml
          - ./logs:/home/ejabberd/logs
        restart: unless-stopped
    
      watchtower:
        image: containrrr/watchtower
        container_name: watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - WATCHTOWER_POLL_INTERVAL=3600 # Sets how often Watchtower checks for updates (in seconds).
          - WATCHTOWER_CLEANUP=true # Ensures old images are cleaned up after updating.
        restart: unless-stopped
    

    Watchtower offers a wide range of additional features, including the ability to set up notifications, exclude specific containers, and more. For further information, please refer to the Watchtower Docs .

    Once the docker-compose.yml has been updated, please bring it up using the following command: docker-compose up -d

    And... there you go, you're all set!

    5. Best Practices & closing words

    Watchtower will now perform periodic checks for updates to your ejabberd container and apply them automatically.

    To be fair, by default Watchtower will also update any other containers running on the same server. This behaviour can be controlled with environment variables (see Container Selection ), which let you exclude specific containers from updates.


    One important thing to understand is that Watchtower only picks up new images published under the tag a container is running – in practice, mutable tags like :latest . A container pinned to a fixed version tag will never see a new image under that tag.

    In an environment with numerous Docker containers, using the latest tag streamlines the process of automatic updates. However, it may introduce unanticipated changes with each new, disruptive update. Ideally, we recommend always setting a specific version like ejabberd/ecs:24.10 and deciding how/when to update it manually (especially if you're into infra-as-code ).
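    If you pin a version while keeping Watchtower around for other containers, you can also explicitly exclude the pinned service with Watchtower&#39;s enable label:

    ```yaml
    services:
      ejabberd:
        image: ejabberd/ecs:24.10          # pinned release instead of :latest
        labels:
          # Tell Watchtower to leave this container alone
          - "com.centurylinklabs.watchtower.enable=false"
    ```

    With this label in place, Watchtower keeps updating the rest of the stack while ejabberd only changes when you edit the tag yourself.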

    However, we recognise that some users may prefer the convenience of automatic updates; personally, that's what I do on my homelab, but I'm not afraid to dig in if something breaks.


    tl;dr: For a small community server/homelab/personal instance, Watchtower will help keep things up to date with minimal effort. However, for bigger production environments, it is advisable to pin specific versions and update them manually, to ensure greater control and resilience.

    With this setup, you now have a fully functioning XMPP server using ejabberd, with automatic updates. You can now start building your chat applications or integrate it with your existing services! 🚀

    • wifi_tethering open_in_new

      This post is public

      www.process-one.net/blog/docker-ejabberd-watchtower/