
      The XMPP Standards Foundation: Scaling up with MongooseIM 6.2.1

      news.movim.eu / PlanetJabber • 28 May, 2024 • 3 minutes

    MongooseIM is a scalable, extensible and efficient real-time messaging server that allows organisations to build cost-effective communication solutions. Built on the XMPP protocol, MongooseIM is specifically designed for businesses facing the challenge of large deployments, where real-time communication and user experience are critical. The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend, which simplifies and enhances its scalability.

    It is difficult to predict how much traffic your XMPP server will need to handle. This is why MongooseIM offers several means of scalability. Firstly, even one machine can handle millions of connected users, provided that it is powerful enough. However, one machine is not recommended for fault tolerance reasons, because every time it needs to be shut down for maintenance, upgrade or because of any issues, your service would experience downtime. As a result, it is recommended to have a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and then restarted before moving to the next node, maintaining fault tolerance. During such an upgrade, you can increase the hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier, because you only need to add new nodes to the already deployed cluster.
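
    As a hedged illustration of what that distribution layer looks like (MongooseIM wraps this in its own CLI, so the node names and cookie below are assumptions for the sketch), two BEAM nodes started with --name and a shared cookie can be clustered from an attached IEx shell:

    iex> Node.connect(:"mongooseim@host2")  # true once the distribution connection is up
    true
    iex> Node.list()                        # the other cluster members, as seen locally
    [:"mongooseim@host2"]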

    Mnesia

    Mnesia is a built-in Erlang Database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey, because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

    1. Consistency issues, which tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which should not be the case for a general-purpose XMPP server.
    2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously impacting the possibilities for automatic scaling.
    3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

    First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia to store in-memory data shared between the cluster nodes. For example, a shared table of user sessions is necessary for message routing between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still needs to keep its schema on disk, and the first two issues listed above would not be eliminated.

    Introducing CETS

    Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent data. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to keep all your persistent data.
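
    To give a flavour of the underlying model, here is a rough two-node sketch in Elixir calling the CETS Erlang modules, shown as one listing for brevity. The shape of the calls follows the CETS README, but treat the exact signatures (:cets.start/2, :cets_join.join/4) and option maps as assumptions and check the project documentation:

    # On each of two connected nodes, start a local copy of the table:
    {:ok, pid_a} = :cets.start(:sessions, %{})   # on node A
    {:ok, pid_b} = :cets.start(:sessions, %{})   # on node B
    # Join the copies into one replicated table (run once, on either node):
    :ok = :cets_join.join(:sessions_lock, %{}, pid_a, pid_b)
    # Writes are replicated to every joined node; reads are local ETS lookups:
    :cets.insert(:sessions, {"alice@localhost", self()})
    :ets.lookup(:sessions, "alice@localhost")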

    Getting rid of Mnesia removes a lot of important obstacles. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVCs), which can be costly, get out of sync, and require additional management. Furthermore, with CETS you can easily set up horizontal autoscaling for your installation.

    See it in action

    If you want to quickly set up a working autoscaled MongooseIM cluster using Helm, see the detailed blog post. For more information, consult the documentation, GitHub or the product page. You can try MongooseIM online as well.

    Read about Erlang Solutions as a sponsor of the XSF.


      This post is public

      xmpp.org /2024/05/scaling-up-with-mongooseim-6.2.1/


      Ignite Realtime Blog: New Openfire plugin: XMPP Web!

      news.movim.eu / PlanetJabber • 26 May, 2024 • 1 minute

    We are excited to be able to announce the immediate availability of a new plugin for Openfire: XMPP Web!

    This new plugin for the real-time communications server provided by the Ignite Realtime community allows you to install the third-party web client named ‘XMPP Web’ in mere seconds! By installing this new plugin, the web client is immediately ready for use.

    This new plugin complements others that similarly allow you to deploy a web client with great ease, like Candy, inVerse and JSXC! With the addition of XMPP Web, the selection of easy-to-install clients for your users becomes even larger!

    The XMPP Web plugin for Openfire is based on release 0.10.2 of the upstream project, which currently is the latest release. It will automatically become available for installation in the admin console of your Openfire server in the next few days. Alternatively, you can download it immediately from its archive page.

    Do you think this is a good addition to the suite of plugins? Do you have any questions or concerns? Do you just want to say hi? Please stop by our community forum or our live groupchat!

    For other release announcements and news, follow us on Mastodon or X.



      Erlang Solutions: Balancing Innovation and Technical Debt

      news.movim.eu / PlanetJabber • 23 May, 2024 • 10 minutes

    Let’s explore the delicate balance between innovation and technical debt.

    We will look into actionable strategies for managing debt effectively while optimising our infrastructure for resilience and agility.

    Balancing acts and trade-offs

    I was having this conversation with a close acquaintance not long ago. He’s setting up his new startup, filling a market gap he’s found, racing before the gap closes. It’s a common starting point for many entrepreneurs. You have an idea you need to implement, and until it is implemented and (hopefully) sold, there is no revenue, all while someone else could close the gap before you do. Time-to-market is key.

    While there’s no revenue, you acquire debt. And while you are reasonably careful to keep it under control, you pay the financial debt off with a different kind of debt: Technical Debt. You choose to make a trade-off here, one that all too often is made without awareness. This trade-off between debts requires careful thinking too: just as financial debt is an obvious risk, so is a technical one.

    Let’s define these debts. Technical debt is the accumulated cost of shortcuts or deferred maintenance in software development and IT infrastructure. Financial debt is the borrowing of funds to finance business operations or investments. They share a common thread: the trade-off between short-term gains and long-term sustainability.

    Just like financial debt can provide immediate capital for growth, it can also drag the business into financial inflexibility and burdensome interest rates. Technical debt expedites product development or reduces time-to-market, at the expense of increased maintenance, reduced scalability, and decreased agility. It is an often overlooked aspect of a technological investment, whose prompt care can have a huge impact on the lifespan of the business. Just as an enterprise must manage its financial leverage to maintain solvency and liquidity, it must also manage its technical debt to ensure the reliability, scalability, and maintainability of its systems and software.

    The Economics of Technical Debt

    Consider the example of a rapidly growing e-commerce platform: appeal attracts demand, demand requires resources, and resources mean increased vulnerability. The growing user data and resources attract threats aiming to disrupt services, steal sensitive data, or cause reputational harm. In this environment, the platform’s success is determined by its ability to strike a delicate balance between serving legitimate customers and thwarting malicious actors, both of which make up ever-increasing proportions of its traffic.

    Early on, the platform prioritised rapid development and deployment of new features; however, in their haste to innovate, the technical team accumulated debt by taking shortcuts and deferring critical maintenance tasks. What results from this is a platform that is increasingly fragile and inflexible, leaving it vulnerable to disruptive attacks and more agile competitors. Meanwhile, reasonably, the platform’s financial team kept allocating capital to funding marketing campaigns, product launches, and strategic acquisitions, under pressure to maximise profitability and shareholder value; however, they neglected to allocate sufficient resources towards cybersecurity initiatives, viewing them as discretionary expenses rather than critical investments in risk mitigation and resilience.

    Technical currencies

    If we’re talking about debt, and drawing a parallel with financial terms, let’s complete the parallel. By establishing the concept of currencies, we can build quantifiable metrics of value that reflect the health and resilience of digital assets. Code coverage, for instance, measures the proportion of the codebase exercised by automated tests, providing insights into the potential presence of untested or under-tested code paths. In this vein, tests and documentation are the two assets that pay down the most technical debt.

    See for example how coverage for MongooseIM has been continuously trending higher.
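
    As an aside, recent Elixir versions (1.13+) let you turn such a currency into a hard budget directly in the build tool; a minimal sketch with a hypothetical app name:

    # mix.exs (excerpt): `mix test --cover` fails when total coverage drops below 80%
    def project do
      [
        app: :my_app,                             # hypothetical application name
        version: "0.1.0",
        test_coverage: [summary: [threshold: 80]]
      ]
    end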

    Similarly, Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the process of integrating code changes, running automated tests, verifying engineering work, and deploying applications to diverse environments, enabling teams to deliver software updates frequently and with confidence. By streamlining the development workflow and reducing manual intervention, CI/CD pipelines enhance productivity, accelerate time-to-market, and minimise the risk of human error. Humans have bad days and sleepless nights; well-developed automation doesn’t.

    Additionally, valuations on code quality that are diligently tracked on the organisation’s ticketing system provide valuable insights into the evolution of software assets and the effectiveness of ongoing efforts to address technical debt and improve code maintainability. These valuations enable organisations to prioritise repayment efforts, allocating resources effectively.

    Repaying Technical Debt

    The longer any debt remains unpaid, the greater its impact on the organisation — (technical) debt accrues “interest” over time. But, much like in finances, a debt is paid with available capital, and choosing a payment strategy can make a difference in whether capital is wasted or successfully (re)invested:

    1. Priorities and Plans: Identify and prioritise areas of technical debt based on their impact on the system’s performance, stability, and maintainability. Develop a plan that outlines the steps needed to address each aspect of technical debt systematically.
    2. Refactoring: Allocate time and resources to refactor code and systems to improve their structure, readability, and maintainability. Break down large, complex components into smaller, more manageable units, and eliminate duplicate or unnecessary code (see the sketch after this list). See for example how we battled technical debt in MongooseIM.
    3. Automated Testing: Invest in automated testing frameworks and practices to increase test coverage and identify regression issues early in the development process. Implement continuous integration and continuous deployment (CI/CD) pipelines to automate the testing and deployment of code changes. Establishing this pipeline is always the first step in any new project we join, and we have become familiar with diverse CI technologies like GitHub Actions, CircleCI, GitlabCI, or Jenkins.
    4. Documentation: Enhance documentation efforts to improve understanding and reduce ambiguity in the codebase. Document design decisions, architectural patterns, and coding conventions to facilitate collaboration and knowledge sharing among team members. Choose technologies that facilitate and enhance documentation work.
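
    To make point 2 concrete, here is a tiny, self-contained Elixir sketch (hypothetical module and data shape) of paying down debt by breaking one tangled function into small, individually testable units:

    defmodule Billing do
      # Before: one function mixing parsing, filtering and formatting.
      def report_legacy(csv) do
        csv
        |> String.split("\n", trim: true)
        |> Enum.map(&String.split(&1, ","))
        |> Enum.filter(fn [_name, amount] -> String.to_integer(amount) > 0 end)
        |> Enum.map(fn [name, amount] -> "#{name}: #{amount}" end)
      end

      # After: the same behaviour as a pipeline of small, named steps.
      def report(csv), do: csv |> parse() |> Enum.filter(&positive?/1) |> Enum.map(&format/1)

      defp parse(csv), do: csv |> String.split("\n", trim: true) |> Enum.map(&String.split(&1, ","))
      defp positive?([_name, amount]), do: String.to_integer(amount) > 0
      defp format([name, amount]), do: "#{name}: #{amount}"
    end

    Billing.report("alice,30\nbob,0")  # => ["alice: 30"]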

    Repayment assets

    Repayment assets are resources or strategies that can be leveraged to make debt repayment financially viable. Here are some key repayment assets to consider:

    1. Training and Education: Provide training and education opportunities for developers to enhance their skills and knowledge in areas such as software design principles, coding best practices, and emerging technologies. Encourage continuous learning and professional development to empower developers to make informed decisions and implement effective solutions.
    2. Technical Debt Reviews: Conduct regular technical debt reviews to assess the current state of the codebase, identify areas of concern, and track progress in addressing technical debt over time. Use metrics and KPIs to measure the impact of technical debt reduction efforts and inform decision-making.
    3. Collaboration and Communication: Foster a culture of collaboration and communication among development teams, stakeholders, and other relevant parties. Encourage open discussions about technical debt, its implications, and potential strategies for repayment, and involve stakeholders in decision-making processes.
    4. Incremental Improvement: Break down technical debt repayment efforts into smaller, manageable tasks and tackle them incrementally. Focus on making gradual improvements over time rather than attempting to address all technical debt issues at once, prioritising high-impact and low-effort tasks to maximise efficiency and effectiveness.

    Don’t acquire more debt than you have to

    While debt is a quintessential aspect of entrepreneurship, acquiring it unwisely is obviously shooting yourself in the foot. You’ll have to make many decisions and choose between many trade-offs, so you had better be well-informed before putting your finger on the red buttons.

    Your service will require infrastructure

    Whether you choose one vendor over another or decide to go self-hosted, use containerised technologies, so that future changes to better infrastructures are possible. Containers also provide a consistent environment for development, testing and production. Choose technologies that are good citizens in containerised environments.

    Your service will require hardware resources

    Whether you choose one hardware architecture or another, or any amount of memory, use runtimes that can efficiently use and adapt to any given hardware, so that future changes to better hardware are fruitful. For example, Erlang’s concurrency model is famous for automatically taking advantage of any number of cores, and with technologies like Elixir’s Nx you can take advantage of esoteric GPU and TPU hardware for your machine learning tasks.
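
    For instance, a sketch of how the BEAM spreads CPU-bound work across whatever cores it finds, with no scheduler configuration in the code:

    # One scheduler thread per core is started automatically:
    IO.inspect(System.schedulers_online(), label: "schedulers")

    # CPU-bound work fans out across all cores with a one-liner:
    1..1_000
    |> Task.async_stream(fn n -> Enum.sum(1..n) end,
         max_concurrency: System.schedulers_online())
    |> Enum.to_list()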

    Your service will require agility

    The market will push your offerings to their limit, in a never-ending stream of requests for new functionality and changes to your service. Your code will need to change, and respond to changes. From Elixir’s metaprogramming and language extensibility to Gleam’s strong type-safety, prioritise tools that likewise help your developers change things safely and powerfully.

    Your service will require resiliency

    There are two philosophies in the culture of error handling: either it is mathematically proven that errors cannot happen – Haskell’s approach – or it is assumed that they cannot always be avoided and we need to learn to handle them – Erlang’s approach. Wise technologies take one as an a priori foundation and deal with the other end a posteriori. Choose your point on the scale wisely, and be wary of technologies that don’t take a safe stance. Errors can happen: electricity goes down, cables are cut, and attackers attack. Programmers have sleepless nights or get sick. Take a stance before errors bite your service.

    Your service will require availability

    No fancy unique idea will sell if it can’t be bought, and no service will be used if it is not there to begin with. Unavailability takes an exponential toll on your revenue, so prioritise availability. Choose technologies that can handle not just failure, but even upgrades (!), without downtime. And to have real availability, you always need at least two computers, in case one dies: choose technologies that make many independent computers cooperate easily and take over one another’s work transparently.

    A Case Study: A Balancing Act in Traffic Management

    A chat system, like many web services, handles a countably infinite number of independent users. It is a heavily network-based application that needs to respond to requests that are independent of each other in a timely and fair manner. It is an embarrassingly parallel problem – messages can be processed independently of each other – but it is also a challenge with soft real-time properties, where messages should be processed soon enough for a human to have a good user experience. It also faces the challenge of bad actors, which makes request blacklisting and throttling necessary.

    MongooseIM is one such system. It is written in Erlang, and in its architecture, every user is handled by one actor.

    It is containerised, and easily uses all available resources efficiently and smoothly, adapting to any change of hardware, from small embedded systems to massive mainframes. Its architecture uses the Publish-Subscribe programming pattern heavily, and because Erlang is a functional language, functions are first-class citizens; handlers are therefore installed for all sorts of events, because we never know what new functionality we will need to implement in the future.

    One important event is a new session starting: mechanisms for blacklisting are plenty – whether based on specific identifiers, IP regions, or even modern AI-based behaviour analysis – and we can’t predict the future, so we simply publish the “session opened” event and leave it to our future selves to install the right handler when it is needed.
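
    A hypothetical Elixir sketch of that pattern (not MongooseIM’s actual hook API): handlers are plain functions registered under an event name, and publishing simply walks whatever handlers happen to be installed:

    {:ok, _} = Registry.start_link(keys: :duplicate, name: :events)

    # Future us installs a handler for the "session opened" event:
    Registry.register(:events, :session_opened, fn user ->
      IO.puts("session opened: #{user}")
    end)

    # Publishing runs every handler currently registered for the event:
    Registry.dispatch(:events, :session_opened, fn entries ->
      for {_pid, handler} <- entries, do: handler.("alice@localhost")
    end)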

    Another important event is that of a simple message being sent. What if bad actors have successfully opened sessions and start flooding the system, consuming the CPU and database unnecessarily? Again, changing requirements might dictate that the system handle some users with preferential treatment. One default option is to slow down all message processing within some reasonable rate, for which we use a traffic shaping mechanism called the Token Bucket algorithm, implemented in our library Opuntia – named that way because if you touch it too fast, it stings you.
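
    The algorithm itself is compact; here is a conceptual Elixir sketch (not Opuntia’s actual API): a bucket refills at a fixed rate up to a capacity, and each message spends one token or gets delayed:

    defmodule TokenBucket do
      defstruct [:rate, :capacity, :tokens, :last]

      def new(rate, capacity),
        do: %__MODULE__{rate: rate, capacity: capacity, tokens: capacity,
                        last: System.monotonic_time(:millisecond)}

      # Returns {allowed?, updated_bucket}; callers delay the message when false.
      def allow?(%__MODULE__{} = b) do
        now = System.monotonic_time(:millisecond)
        tokens = min(b.capacity, b.tokens + (now - b.last) / 1000 * b.rate)
        if tokens >= 1,
          do: {true, %{b | tokens: tokens - 1, last: now}},
          else: {false, %{b | tokens: tokens, last: now}}
      end
    end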

    You can read more about how scalable MongooseIM is in this article, where we pushed it to its limit. And while we continuously load-test our server, we haven’t done another round of limit-pushing since then – stay tuned for a future blog post when we do just that!

    Lessons Learned

    Technical Debt has an inherent value akin to Financial Debt. Choosing the right tool for the job means acquiring the right Technical Debt when needed – leveraging strategies, partnerships, and solutions that prioritise resilience, agility, and long-term sustainability.

    The post Balancing Innovation and Technical Debt appeared first on Erlang Solutions.


      This post is public

      www.erlang-solutions.com /blog/balancing-innovation-and-technical-debt/


      JMP: Newsletter: SMS Routes, RCS, and more!

      news.movim.eu / PlanetJabber • 21 May, 2024 • 3 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    SMS Censorship, New Routes

    We have written before about the increasing levels of censorship across the SMS network. When we published that article, we had no idea just how bad things were about to get. At the beginning of April, our main SMS route decided to begin censoring all messages, in both directions, containing many common profanities. There was quite some back and forth about this, but in the end this carrier has declared that the SMS network is not meant for person-to-person communication and that they don’t believe in allowing any profanity to cross their network.

    This obviously caused us to dramatically step up the priority of integrating with other SMS routes, work which is now nearing completion. We expect very soon to be offering long-term customers new options which will not only dramatically reduce the censorship issue, but also in some cases remove the max-10 group text limit, dramatically improve acceptance by online services, and more.

    RCS

    We often receive requests asking when JMP will add support for RCS, to complement our existing SMS and MMS offerings. We are happy to announce that we have RCS access in internal testing now. The currently-possible access is better suited to business use than personal use, though a mix of both is certainly possible. We are assured that better access is coming later in the year, and will keep you all posted on how that progresses. For now if you are interested in testing this, especially if you are a business user, please do let us know and we’ll let you know when we are ready to start some testing.

    One thing to note is that “RCS” means different things to different people. The main RCS features we currently have access to are typing notifications, displayed/read notifications, and higher-quality media transmission.

    Cheogram Android

    Cheogram Android 2.15.3-1 was released this month, with bug fixes and new features including:

    • Major visual refresh, including optional Material You
    • Better audio routing for calls
    • More customizable colour theme
    • Conversation read-status sync with other supporting apps
    • Don’t compress animated images
    • Do not default to the network country when there is no SIM (for phone number format)
    • Delayed-send messages
    • Message loading performance improvements

    New GeoApp Experiment

    We love OpenStreetMap, but some of us have found existing geocoder/search options lacking when it comes to searching by business name, street address, etc. As an experimental way to temporarily bridge that gap, we have produced a prototype Android app (source code) that searches Google Maps and allows you to open search results in any mapping app you have installed. If people like this, we may also extend it with a server-side component that hides all PII, including IP addresses, from Google, for a small monthly fee. For now, the prototype is free to test and will install as “Maps+” in your launcher until we come up with a better name (suggestions welcome!).

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!


      This post is public

      blog.jmp.chat /b/may-newsletter-2024


      Erlang Solutions: Instant Scalability with MongooseIM and CETS

      news.movim.eu / PlanetJabber • 17 May, 2024 • 6 minutes

    The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend, which makes it much easier to scale up.

    It is difficult to predict how much traffic your XMPP server will need to handle. Are you going to have thousands or millions of connected users? Will you need to deliver hundreds of millions of messages per minute? Answering such questions is almost impossible if you are just starting up. This is why MongooseIM offers several means of scalability.

    Clustering

    Even one machine running MongooseIM can handle millions of connected users, provided that it is powerful enough. However, one machine is not recommended for fault tolerance reasons, because every time it needs to be shut down for maintenance, upgrade or because of any issues, your service would experience downtime. This is why we recommend using a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and then restarted before moving to the next node, maintaining fault tolerance and eliminating unnecessary downtime. During such an upgrade procedure, you can increase the hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier because you only need to add new nodes to the already deployed cluster.

    Mnesia

    Mnesia is a built-in Erlang Database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

    1. Consistency issues, which tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which should not be the case for a general-purpose XMPP server.
    2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously impacting the possibilities for automatic scaling.
    3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

    After trying to mitigate such issues for a couple of years, we have concluded that it is best not to use Mnesia at all. First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia to store in-memory data. For example, a shared table of user sessions is necessary for message routing between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still needs to keep its schema on disk, and the first two issues listed above would not be eliminated.

    Introducing CETS

    Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent data. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to store all your persistent data. Getting rid of Mnesia removes the last obstacle on your way to easy and simple management of MongooseIM. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVCs), which can be costly, get out of sync, and require additional management. Furthermore, with CETS you can easily set up automatic scaling of your installation.

    Installing with Helm

    As an example, let’s quickly set up a cluster of three MongooseIM nodes. You will need to have Helm and Kubernetes installed. The examples were tested with Docker Desktop, but they should work with any Kubernetes setup. As the first step, let’s install and initialise a PostgreSQL database with Helm:

    $ curl -O https://raw.githubusercontent.com/esl/MongooseIM/6.2.1/priv/pg.sql
    $ helm install db oci://registry-1.docker.io/bitnamicharts/postgresql \
       --set auth.database=mongooseim --set auth.username=mongooseim --set auth.password=mongooseim_secret \
       --set-file 'primary.initdb.scripts.pg\.sql'=pg.sql
    

    It is useful to monitor all Kubernetes resources in another shell window:

    $ watch kubectl get pod,sts,pvc,pv,svc,hpa

    As soon as pod/db-postgresql-0 is shown as ready, you can check that the DB is running:

    $ kubectl exec -it db-postgresql-0 -- \
      env PGPASSWORD=mongooseim_secret psql -U mongooseim -c 'SELECT * from users'

    As a result, you should get an empty list of MongooseIM users. Next, let’s create a three-node MongooseIM cluster using the Helm Chart:

    $ helm repo add mongoose https://esl.github.io/MongooseHelm/
    $ helm install mim mongoose/mongooseim --set replicaCount=3 --set volatileDatabase=cets \
       --set persistentDatabase=rdbms --set rdbms.tls.required=false --set rdbms.host=db-postgresql \
       --set resources.requests.cpu=200m

    By setting persistentDatabase to RDBMS and volatileDatabase to CETS, we are eliminating the need for Mnesia, so no PVCs are created. To connect to PostgreSQL, we specify db-postgresql as the database host. The requested CPU resources are 0.2 of a core per pod, and they will be useful for autoscaling. You can monitor the shell window where watch kubectl … is running to make sure that all MongooseIM nodes are ready. It is useful to verify the logs as well, e.g. kubectl logs mongooseim-0 should display logs from the first node. To see how easy it is to scale up horizontally, let’s increase the number of MongooseIM nodes (which correspond to Kubernetes pods) from 3 to 6:

    $ kubectl scale --replicas=6 sts/mongooseim

    You can use kubectl logs -f mongooseim-0 to see the log messages about each newly added node of the CETS cluster. With helm upgrade, you can do rolling upgrades and scaling as well. The main difference is that the changes done with helm are permanent.

    Autoscaling

    Should you need automatic scaling, you can set up the Horizontal Pod Autoscaler. Please ensure that you have the Metrics Server installed. There are separate instructions for installing it in Docker Desktop. We have already set the requested CPU resources to 0.2 of a core per pod, so let’s start the autoscaler now:

    $ kubectl autoscale sts mongooseim --cpu-percent=50 --min=1 --max=8

    It is going to keep the CPU usage at 0.1 of a core per pod (which is 50% of 0.2). The threshold is set so low to make it easy to trigger scaling up; in any real application, it should be much higher. You should see the cluster getting scaled down until it has just one node because there is no CPU load yet. See the reported targets in the window where you have the watch kubectl … command running. To trigger scaling up, we need to put some load on the server. We could just fire up random HTTP requests, but let’s instead use the opportunity to explore the MongooseIM CLI and GraphQL API. Firstly, create a new user on the first node with the CLI:

    $ kubectl exec -it mongooseim-0 -- \
      mongooseimctl account registerUser --domain localhost --username alice --password secret

    Next, you can send XMPP messages in a loop with the GraphQL Client API:

    $ LB_HOST=$(kubectl get svc mongooseim-lb \
      --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    $ BASIC_AUTH=$(echo -n 'alice@localhost:secret' | base64)
    $ while true; \
      do curl --get -N -H "Authorization:Basic $BASIC_AUTH" \
        -H "Content-Type: application/json" --data-urlencode \
        'query=mutation {stanza {sendMessage(to: "alice@localhost", body: "Hi") {id}}}' \
        http://$LB_HOST:5561/api/graphql; \
      done

    You should observe new pods being launched as the load increases. If there is not enough load, run the snippet in a few separate shell windows. Stopping the script should bring the cluster size back down.

    Summary

    Thanks to CETS and the Helm Chart, MongooseIM 6.2.1 can be easily installed, maintained and scaled in a cloud environment. What we have shown here are the first steps, and there is much more to explore. To learn more, you can read the documentation for MongooseIM or check out the live demo at trymongoose.im. Should you have any questions, or if you would like us to customise, configure, deploy or maintain MongooseIM for you, feel free to contact us.

    The post Instant Scalability with MongooseIM and CETS appeared first on Erlang Solutions.


      This post is public

      www.erlang-solutions.com /blog/instant-scalability-with-mongooseim-and-cets/


      Erlang Solutions: Comparing Elixir vs Java

      news.movim.eu / PlanetJabber • 16 May, 2024 • 21 minutes

    After many years of active development using various languages, in the past months, I started learning Elixir. I got attracted to the language after I heard and read nice things about it and the BEAM VM, but – to support my decision about investing time to learn a new language – I tried to find a comparison between Elixir and various other languages I already knew.

    What I found was pretty disappointing. In most of these comparisons, Elixir performed much worse than Java, even worse than most of the mainstream languages. With these results in mind, it became a bit hard to justify my decision to learn a new language with such a subpar performance, however fancy its syntax and other features were. After delving into the details of these comparisons, I realised that all of them were based on simplistic test scenarios and specialised use cases, basically a series of microbenchmarks (i.e. small programs created to measure a narrow set of metrics, like execution time and memory usage). It is obvious that the results of these kinds of benchmarks are rarely representative of real-life applications.

    My immediate thought was that a more objective comparison would be useful not only for me but for others as well. But before discussing the details, I’d like to compare several aspects of Elixir and Java that are not easily quantifiable.

    Development

    Learning curve

    Before I started learning Elixir, I used various languages like Java, C, C++, Perl, and Python. Despite the fact that all of them are imperative languages and Elixir is a functional one, I found the language concepts clear and concise and – to tell the truth – much less complex than Java’s. Similarly, Elixir syntax is less verbose and easier to read and see through.

    When comparing language complexities, there is an often forgotten, but critical thing: It’s hard to develop anything more complex than a Hello World application just by using the core language. To build enterprise-grade software, you should use at least the standard library, but in most cases, many other 3rd party libraries. They all contribute to the learning curve.

    In Java, the standard library is part of the JDK and provides basic support for almost every possible use, but it lacked the most important thing, a component framework (like the Spring Framework or OSGi), for about 20 years. During that time, several good component frameworks were developed and became widespread, but they all come with different design principles, configuration and runtime behaviour, so for a novice developer, the aggregated learning curve is pretty steep. On the other hand, Elixir has had OTP from the beginning, a collection of libraries once called the Open Telecom Platform. OTP provides its own component framework, which shares the same concepts and design principles as the core language.

    Documentation

    I was a bit spoiled by the massive amount of tutorials, guides and forum threads in the Java ecosystem, not to mention the really nice Javadoc that comes with the JDK. It’s not that Elixir lacks appropriate documentation: there are really nice tutorials and guides, and most of the libraries are as well documented as their Java counterparts, but it will take time for the ecosystem to reach the same level of quality. There are counterexamples, of course: the Getting Started Guide is a piece of cake; I didn’t need anything else to learn the language and start active development.

    IDE support

    For me as a novice Elixir developer, the most significant roadblock was the immature IDE support. Although I understand that supporting a dynamically typed language is much harder than a statically typed one like Java, I miss the most basic refactoring support in both IntelliJ IDEA and VSCode. I know that Emacs offers more features, but being a hardcore vi user, I have kept some distance from it.

    Fortunately, these shortcomings can be improved easily, and I’m sure there are enough interested developers in the open-source world, but as usual, some coordination would be needed to facilitate the development.

    Programming model

    Comparing entire programming models of two very different languages is too much for a blog entry, so I’d like to focus on the language support for performance and reliability, more precisely several aspects of concurrency, memory management and error handling.

    Concurrency and memory management

    The Java Memory Model is based on POSIX Threads (pthreads). Heap memory is allocated from a global pool and shared between threads. Resource synchronisation is done using locks and monitors. A conventional Java thread (Platform Thread) is a simple wrapper around an OS thread. Since an OS thread comes with its own large stack and is scheduled by the OS, it is not lightweight in any way. Java 21 introduced a new thread type (Virtual Thread) which is more lightweight and scheduled by the JVM, so it can be suspended during a blocking operation, allowing the OS thread to mount and execute another Virtual Thread. Unfortunately, this is only an afterthought. While it can improve the performance of many applications, it makes the already complex concurrency model even more complicated. The same is true for Structured Concurrency. While it can improve reliability, it will also increase complexity, especially if it is mixed with the old model. The same goes for 3rd party libraries: adopting the new features and upgrading old deployments will take time, typically years. Until then, a mixed model will be used, which can introduce additional issues.

    There are several advantages of adopting POSIX Threads, however: it is familiar for developers of languages implementing similar models (e.g. C, C++ etc.), and keeps the VM internals fairly simple and performant. On the other hand, this model makes it hard to effectively schedule tasks and heavily constrains the design of reliable concurrent code. And most importantly, it introduces issues related to concurrent access to shared resources. These issues can materialise in performance bottlenecks and runtime errors that are hard to debug and fix.

    The concurrency model of Elixir is based on different concepts, introduced by Erlang in the 80s. Instead of scheduling tasks as OS threads, it uses a construct called a “process”, which is different from an operating system process. These processes are very lightweight, operate on independently allocated/deallocated memory areas and are scheduled by the BEAM VM. Scheduling is done by multiple schedulers, one for each CPU core. There is no shared memory, no synchronised resource access, and no global garbage collection; inter-process communication is performed using asynchronous signalling. This model eliminates the conventional concurrency-related problems and makes it much easier to write massively concurrent, scalable applications. There is one drawback, however: due to these conceptual differences, the learning curve is a bit steeper for developers experienced only with pthreads-related models.
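
    A quick sketch of how cheap these processes are: spawning a hundred thousand of them, each with its own heap and mailbox, is routine on commodity hardware:

    parent = self()

    for i <- 1..100_000 do
      spawn(fn -> send(parent, {:done, i}) end)   # each is a full BEAM process
    end

    # Aggregate the results via asynchronous message passing:
    for _ <- 1..100_000 do
      receive do
        {:done, _} -> :ok
      end
    end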

    Fault tolerance

    Error recovery and fault tolerance in general are underrated in the world of enterprise software. For some reason, we think that fault tolerance is for mission-critical applications like controlling nuclear power plants, running medical devices or managing aircraft avionics. In reality, almost every business has critical software assets and applications that should be highly available or data, money and consumer trust will be lost. Redundancy may prevent critical downtimes, but no amount of redundancy can mitigate the risk of data corruption or other similar errors, not to mention the cost of duplicated resources.

    Java and Elixir handle errors in very different ways. While Java follows decades-old conventions and treats errors as exceptional situations, Elixir inherited a far more powerful concept from Erlang, originally borrowed from the field of fault-tolerant systems. In Elixir, errors are part of the normal behaviour of the application and are treated as such. Since there are no shared resources between processes, an error during the execution of a process neither affects nor propagates to the others; their states remain consistent, so the application can safely recover from the error. In addition, supervision trees can make sure that failed components are replaced immediately.
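
    A minimal supervision sketch in Elixir (hypothetical worker) showing the recovery behaviour described above: the raise kills only the worker process, and the supervisor immediately starts a fresh one with a clean state:

    defmodule Worker do
      use GenServer

      def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)
      def init(arg), do: {:ok, arg}

      # A simulated fault: raising crashes only this process.
      def handle_cast(:boom, _state), do: raise("simulated fault")
    end

    {:ok, _sup} = Supervisor.start_link([Worker], strategy: :one_for_one)
    old = Process.whereis(Worker)
    GenServer.cast(Worker, :boom)     # the worker crashes (a crash report is logged)
    Process.sleep(100)
    Process.whereis(Worker) != old    # => true: a fresh worker has taken its place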

    This way, the BEAM VM provides guarantees against data loss during error recovery. But this kind of error recovery is possible only if no errors can leave the system in an inconsistent state. Since Java relies on OS threads, and shared memory can’t be protected from incorrectly behaving threads, there are no such safeguards under the JVM. Although there are Java libraries that provide better fault tolerance by implementing different programming models (probably the most noteworthy is Akka, implementing the Actor Model), the number of 3rd party libraries supporting these programming models is very limited.

    Runtime

    Performance

    For CPU or memory-intensive tasks, Java is a good choice, due to several things like a more mature Just-In-Time compiler and tons of runtime optimisations in the JVM, but most importantly because of its memory model. Since memory allocation and thread handling are basically done at OS level, the management overhead is very low.

    On the other hand, this advantage vanishes when concurrent execution is paired with a mixed workload, like blocking operations and data exchange between concurrent tasks. This is the field where Elixir thrives since Erlang and the BEAM VM were originally designed for these kinds of tasks. Due to the well-designed concurrency model, memory and other resources are not shared, requiring no synchronisation. BEAM processes are more lightweight than Java threads, and their scheduling is done at VM level, leading to fewer context switches and better scheduling granularity.

    Concurrent operations also affect memory use. Since a Java thread is not lightweight, the more threads are waiting for execution, the more memory is used. In parallel with the memory allocations related to the increasing number of waiting threads, the overhead caused by garbage collection also grows.

    Today’s enterprise applications are usually network-intensive. We have separate databases, microservices, clients accessing our services via REST APIs etc. Compared to operations on in-memory data, network communication is many orders of magnitude slower, latency is not deterministic, and the probability of erroneous responses, timeouts or infrastructure-related errors is not negligible. In this environment, Elixir and the BEAM VM offer more flexibility and concurrent performance than Java.
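
    This is the kind of mixed, network-bound workload where the lightweight-process model pays off; a small sketch of fanning out independent “remote calls” and aggregating the results:

    # Stand-in for a network round trip (hypothetical services):
    fetch = fn service ->
      Process.sleep(100)
      {service, :ok}
    end

    # Each call runs in its own process; slow calls don't block the rest:
    [:stock, :historical_weather, :forecast]
    |> Task.async_stream(fetch, timeout: 5_000)
    |> Enum.map(fn {:ok, result} -> result end)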

    Scalability

    When we talk about scalability, we should mention both vertical and horizontal scalability. While vertical scalability is about making a single hardware bigger and stronger, horizontal scalability deals with multiple computing nodes.

    Java is a conventional language in the sense that it is built for vertical scaling, but it was designed at a time when vertical scaling meant running on bigger hardware with better single-core performance. It performs reasonably well on multi-core architectures, but its scalability is limited by its concurrency model, since massive concurrency comes with frequent cache invalidations and lock contention on shared resources. Horizontal scaling magnifies these issues due to the increased latency. Moreover, since the JVM was also designed for vertical scaling, there is no simple way to share or distribute workload between multiple nodes; it requires additional libraries/frameworks and, in many cases, different design principles and massive code changes.

    On the other hand, a well-designed Elixir application can scale up seamlessly, without code changes. There are no shared resources that require locking, and asynchronous messaging is perfect for both multi-core and multi-node applications. Of course, Elixir itself does not prevent the developers from introducing features that are hard to scale or require additional work, but the programming model and the OTP make horizontal scaling much easier.

    Energy efficiency

    It is a well-known fact that resource and energy usage are highly correlated metrics. However, there is another, often overlooked factor that contributes significantly to energy usage. The concurrency limit is the number of concurrent tasks an application can execute without having stability issues. Near the concurrency limit, applications begin to use the CPU excessively; therefore, the overhead of context switches begins to matter a lot. Another consequence is the increased memory usage caused by the growing number of tasks waiting for CPU time. Since frequent context switches are also memory intensive, we can safely say that applications become much less energy efficient near the concurrency limit.

    Maintenance

    Tackling concurrency issues is probably the hardest part of any maintenance task. We certainly collect metrics to see what is happening inside the application, but these metrics often fail to provide enough information to identify the root cause of concurrency problems. We have to trace the execution flow to get an idea of what’s going on inside. Profiling or debugging of such issues comes with a certain cost: using these tools may alter the performance behaviour of the system in a way that makes it hard to reproduce the issue or identify the root cause.

    Due to the message-passing concurrency model, the code base of a typical concurrent Elixir application is less complex and free from the resource-sharing-related implementation mistakes that often poison Java code, eliminating the need for this kind of maintenance. Also, the BEAM VM is designed with traceability in mind, leading to a lower performance cost when tracing the execution flow.
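
    For example, the VM’s built-in tracing can be attached to a live system from an IEx shell; a quick sketch using Erlang’s :dbg module:

    :dbg.tracer()                    # start a tracer that prints to the shell
    :dbg.p(:all, :call)              # trace function calls in all processes
    :dbg.tp(Enum, :map, 2, [])       # ...but only calls to Enum.map/2
    Enum.map([1, 2, 3], &(&1 * 2))   # this call is now reported by the tracer
    :dbg.stop()                      # detach and clean up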

    Dependencies

    Most of the enterprise applications heavily depend on 3rd party libraries. In the Java ecosystem, even the component framework comes from a 3rd party, with its own dependencies on other 3rd party libraries. This creates a ripple effect that makes it hard to upgrade just one component of such a system, not to mention the backward incompatible changes potentially introduced by newer 3rd party components. Anyone who has tried to upgrade a fairly large Maven project could tell stories about this dependency nightmare.

    The Elixir world is no different, but the number of required 3rd party libraries can be much smaller since the BEAM VM and the OTP provide a few useful things (like the component platform, asynchronous messaging, seamless horizontal scalability, supervision trees), functionality that is very often used and can only be found in 3rd party libraries for Java.

    Let’s get more technical

    As I mentioned before, I was not satisfied with other language comparisons, as they are usually based on simplistic or artificial test cases, so I wanted to create something that mimics a common but easy-to-understand scenario and then measure the performance and complexity of different implementations. Although real-world performance is rarely just a number – it is a composite of several metrics like CPU and memory usage, I/O and network throughput – I tried to quantify performance as processing time: the time an application needs to finish a task. Another important aspect is code complexity, since the size and complexity of the implementations contribute to development and maintenance costs.

    Test scenario

    Most real-world applications process data in a concurrent way. These data originate from a database or some other kind of backend, a microservice or a 3rd party service. Either way, data is transferred via a network. In the enterprise world, the dominant way of network communication is HTTP, often as part of a REST workflow. That is the reason why I chose to measure how fast and reliable REST clients can be implemented in Elixir and Java, and in addition, how complex each implementation is.

    The workflow starts with reading a configuration from disk and then gathering data according to the configuration using several REST API calls. There are dependencies between workflow steps, so several of them can’t be done concurrently, while the others can be done in parallel. The final step is to process the received data.

    The actual scenario is to evaluate rules, where each rule contains information used to gather data from 3rd party services and predict utility stock prices based on historical weather, stock price and weather forecast data.

    Rule evaluation is done in a concurrent manner. Both the Elixir and Java implementations are configured to evaluate 2 rules concurrently.

    Implementation details

    Elixir

    The Elixir-based REST client is implemented as an OTP application. I tried to minimise the external dependencies, since I’d like to focus on the performance of the language and the BEAM VM, and the more 3rd party libraries the application depends on, the more probable it is that there’ll be some kind of bottleneck.

    The dependencies I use:

    Each concurrent task is implemented as a process, and data aggregation is done using asynchronous messaging. The diagram below shows the rule evaluation workflow.

    (Diagram: rule evaluation workflow of the Elixir implementation.)

    There are altogether 8 concurrent processes in each task: one process is spawned for each rule, and then 3 processes are started to retrieve stock, historical weather and weather prediction data.

    Java

    The Java-based REST client is implemented as a standalone application. Since the Elixir application uses OTP, a fair comparison would require some kind of component framework, like Spring or OSGi, as both are very common in the enterprise world. However, I decided not to use them, as they both would contribute heavily to the complexity of the application, although they wouldn’t change the performance profile much.

    Dependencies:

    There are two implementations of concurrent task processing. The first one uses two platform thread pools, for rule processing and for retrieving weather data. This might seem a bit naive, as this workflow could be optimised better, but please keep in mind that

    1. I wanted to model a generic workflow, and it is quite common that several thread pools are used for concurrent processing of various tasks.
    2. My intent was to find the right balance between implementation complexity and performance.

    The other implementation uses Virtual Threads for rule processing and weather data retrieval.

    The diagram below shows the rule evaluation workflow.

    (Diagram: rule evaluation workflow of the Java implementation.)

    There are altogether 6 concurrent threads in each task: one thread is started for each rule, and then 2 threads are started to retrieve historical weather and weather prediction data.

    Results

    Hardware

    Google Compute Node

    • CPU Information: AMD EPYC 7B12
    • Number of Available Cores: 8
    • Available memory: 31.36 GB


    | Tasks | Elixir | Java (Platform Threads) | Java (Virtual Threads) |
    |-------|--------|-------------------------|------------------------|
    | 320   | 2.52 s | 2.52 s                  | 2.52 s                 |
    | 640   | 2.52 s | 2.52 s                  | 2.52 s                 |
    | 1280  | 2.51 s | 2.52 s, 11% error       | 2.52 s                 |
    | 2560  | 5.01 s |                         | 2.52 s, 7 errors       |
    | 5120  | 5.01 s |                         | High error rate        |
    | 10240 | 5.02 s |                         |                        |
    | 20480 | 7.06 s |                         |                        |

    Detailed results

    Elixir

    • Elixir 1.16.2
    • Erlang 26.2.4
    • JIT enabled: true

    | Concurrent tasks per minute | Average | Median | 99th % | Remarks |
    |-----------------------------|---------|--------|--------|---------|
    | 5     | 2.5 s  | 2.38 s | 3.82 s | |
    | 10    | 2.47 s | 2.38 s | 3.77 s | |
    | 20    | 2.47 s | 2.41 s | 3.77 s | |
    | 40    | 2.5 s  | 2.47 s | 3.79 s | |
    | 80    | 2.52 s | 2.47 s | 3.82 s | |
    | 160   | 2.52 s | 2.49 s | 3.78 s | |
    | 320   | 2.52 s | 2.49 s | 3.77 s | |
    | 640   | 2.52 s | 2.47 s | 3.81 s | |
    | 1280  | 2.51 s | 2.47 s | 3.8 s  | |
    | 2560  | 5.01 s | 5.0 s  | 5.17 s | |
    | 3840  | 5.01 s | 5.0 s  | 5.11 s | |
    | 5120  | 5.01 s | 5.0 s  | 5.11 s | |
    | 10240 | 5.02 s | 5.0 s  | 5.15 s | |
    | 15120 | 5.53 s | 5.56 s | 5.73 s | |
    | 20480 | 7.6 s  | 7.59 s | 8.02 s | |

    Java 21, Platform Threads

    • OpenJDK 64-Bit Server VM, version 21

    | Concurrent tasks per minute | Average | Median | 99th % | Remarks |
    |-----------------------------|---------|--------|--------|---------|
    | 5     | 2.5 s  | 2.36 s | 3.71 s | |
    | 10    | 2.54 s | 2.48 s | 3.69 s | |
    | 20    | 2.5 s  | 2.5 s  | 3.8 s  | |
    | 40    | 2.56 s | 2.45 s | 3.84 s | |
    | 80    | 2.51 s | 2.46 s | 3.8 s  | |
    | 160   | 2.5 s  | 2.5 s  | 3.79 s | |
    | 320   | 2.52 s | 2.46 s | 3.8 s  | |
    | 640   | 2.52 s | 2.48 s | 3.8 s  | |
    | 1280  | 2.52 s | 2.47 s | 3.8 s  | 11% HTTP timeouts |

    Java 21, Virtual Threads

    • OpenJDK 64-Bit Server VM, version 21

    | Concurrent tasks per minute | Average | Median | 99th % | Remarks |
    |-----------------------------|---------|--------|--------|---------|
    | 5     | 2.46 s | 2.49 s | 3.8 s  | |
    | 10    | 2.51 s | 2.52 s | 3.68 s | |
    | 20    | 2.56 s | 2.44 s | 3.79 s | |
    | 40    | 2.53 s | 2.46 s | 3.8 s  | |
    | 80    | 2.52 s | 2.48 s | 3.79 s | |
    | 160   | 2.52 s | 2.49 s | 3.77 s | |
    | 320   | 2.52 s | 2.48 s | 3.8 s  | |
    | 640   | 2.52 s | 2.49 s | 3.8 s  | |
    | 1280  | 2.52 s | 2.48 s | 3.8 s  | |
    | 2560  | 2.52 s | 2.48 s | 3.8 s  | Errors: 7 (HTTP client EofException) |
    | 3840  | N/A    | N/A    | N/A    | Large amount of HTTP timeouts |

    Stability

    Under high load, strange things can happen. Concurrency-related (thread contention, races), operating system or VM-related (resource limits) and hardware-specific (memory, I/O, network etc.) errors may occur at any time. Many of them cannot be handled by the application, but the runtime usually can (or should) deal with them to provide reliable operation even in the presence of faults.

    During the test runs, my impression was that the BEAM VM is superior in this task, in contrast to the JVM which entertained me with various cryptic error messages, like the following one:

    java.util.concurrent.ExecutionException: java.io.IOException
            at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
            at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
            at esl.tech_shootout.RuleProcessor.evaluate(RuleProcessor.java:38)
            at esl.tech_shootout.RuleProcessor.lambda$evaluateAll$0(RuleProcessor.java:29)
            at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
            at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
            at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
            at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
            at java.base/java.lang.Thread.run(Thread.java:840)
    Caused by: java.io.IOException
            at java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:586)
            at java.net.http/jdk.internal.net.http.HttpClientFacade.send(HttpClientFacade.java:123)
            at esl.tech_shootout.RestUtils.callRestApi(RestUtils.java:21)
            at esl.tech_shootout.StockService.stockSymbol(StockService.java:23)
            at esl.tech_shootout.StockService.stockData(StockService.java:17)
            at esl.tech_shootout.RuleProcessor.lambda$evaluate$3(RuleProcessor.java:37)
            ... 4 more
    Caused by: java.nio.channels.ClosedChannelException
            at java.base/sun.nio.ch.SocketChannelImpl.ensureOpen(SocketChannelImpl.java:195)

    Although in this case I know the cause of the error, the message itself is not very informative. Compare the above stack trace with the error raised by Elixir and the BEAM VM:

    16:29:53.822 [error] Process #PID<0.2373.0> raised an exception
    ** (RuntimeError) Finch was unable to provide a connection within the timeout due to excess queuing for connections. Consider adjusting the pool size, count, timeout or reducing the rate of requests if it is possible that the downstream service is unable to keep up with the current rate.
    
        (nimble_pool 1.0.0) lib/nimble_pool.ex:402: NimblePool.exit!/3
        (finch 0.18.0) lib/finch/http1/pool.ex:52: Finch.HTTP1.Pool.request/6
        (finch 0.18.0) lib/finch.ex:472: anonymous fn/4 in Finch.request/3
        (telemetry 1.2.1) /home/sragli/git/tech_shootout/elixir_demo/deps/telemetry/src/telemetry.erl:321: :telemetry.span/3
        (elixir_demo 0.1.0) lib/elixir_demo/rule_processor.ex:56: ElixirDemo.RuleProcessor.retrieve_weather_data/3
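
    The Elixir message not only explains the failure but also suggests remedies. As a minimal sketch of acting on that advice, the Finch pool could be enlarged where it is started in the application's supervision tree (the MyFinch name and the numbers are illustrative assumptions, not values from the benchmark):

        children = [
          {Finch,
           name: MyFinch,
           pools: %{
             # more connections per pool, and more pools per host
             default: [size: 100, count: 4]
           }}
        ]

        Supervisor.start_link(children, strategy: :one_for_one)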
    

    The following thread dump shows what happens when we mix different concurrency models:

    Thread[#816,HttpClient@6e579b8-816,5,VirtualThreads]
     at java.base@21/jdk.internal.misc.Unsafe.park(Native Method)
     at java.base@21/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:269)
     at java.base@21/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1758)
     at app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:219)
     at app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:1139)
    
    
    

    The Jetty HTTP client is a nice piece of code and very performant, but it uses platform threads internally, while our benchmark code relies on virtual threads.

    The reason I had to switch from the JDK HttpClient to Jetty in the first place was this error:

    Caused by: java.io.IOException: /172.17.0.2:60876: GOAWAY received
           at java.net.http/jdk.internal.net.http.Http2Connection.handleGoAway(Http2Connection.java:1166)
           at java.net.http/jdk.internal.net.http.Http2Connection.handleConnectionFrame(Http2Connection.java:980)
           at java.net.http/jdk.internal.net.http.Http2Connection.processFrame(Http2Connection.java:813)
           at java.net.http/jdk.internal.net.http.frame.FramesDecoder.decode(FramesDecoder.java:155)
           at java.net.http/jdk.internal.net.http.Http2Connection$FramesController.processReceivedData(Http2Connection.java:272)
           at java.net.http/jdk.internal.net.http.Http2Connection.asyncReceive(Http2Connection.java:740)
           at java.net.http/jdk.internal.net.http.Http2Connection$Http2TubeSubscriber.processQueue(Http2Connection.java:1526)
           at java.net.http/jdk.internal.net.http.common.SequentialScheduler$LockingRestartableTask.run(SequentialScheduler.java:182)
           at java.net.http/jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.run(SequentialScheduler.java:149)
           at java.net.http/jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:207)
           at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
           at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
           at java.base/java.lang.Thread.run(Thread.java:1583)

    According to the HTTP/2 standard, an HTTP server can send a GOAWAY frame at any time (typically under high load; in our case, after about 2000 requests/min) to indicate connection shutdown. It is the client’s responsibility to handle this situation. The HttpClient implemented in the JDK fails to do so internally, and it does not surface enough information to make proper error handling possible.

    Concluding remarks

    As I expected, both the Elixir and Java applications performed well in low concurrency settings, but the Java application became less stable as the number of concurrent tasks increased, while Elixir exhibited rock-solid performance with minimal slowdown.

    The BEAM VM was also superior in providing reliable operation under high load, even in the presence of faults. After about 2000 HTTP requests per minute, timeouts were inevitable, but they didn’t impact the stability of the application. On the other hand, the JVM started to behave very erratically after about 1000 (platform threads implementation) or 3000 (virtual threads implementation) concurrent tasks.

    Code complexity

    There are a few widely accepted metrics to quantify code complexity, but I think the most representative ones are Lines of Code and Cyclomatic Complexity.

    Lines of Code, or more precisely Source Lines of Code (SLoC for short), quantifies the total number of lines in the source code of an application. Strictly speaking, it is not very useful as a complexity measure, but it is a good indicator of how much effort is needed to look through a particular codebase. It is measured by counting the total number of lines in all source files, excluding dependencies and configuration files.

    Cyclomatic Complexity (CC for short) is more technical, as it measures the number of independent execution paths through the source code. CC measurement works differently for each language: the Cyclomatic Complexity of the Elixir application is measured using Credo, and the CC of the Java application is quantified using the CodeMetrics plugin of IntelliJ IDEA.
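
    For reference, Credo's cyclomatic complexity check can be tuned per project. A minimal sketch of a .credo.exs fragment (the threshold shown is only illustrative):

        %{
          configs: [
            %{
              name: "default",
              checks: [
                # warn when a function's cyclomatic complexity exceeds the limit
                {Credo.Check.Refactor.CyclomaticComplexity, max_complexity: 9}
              ]
            }
          ]
        }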

    These numbers show that there is a clear difference in complexity even between such small and simple applications. While 9 is not a particularly high score for Cyclomatic Complexity, it indicates that the logical flow is not simple. It might not be concerning, but what’s more problematic is that even the most basic error handling increases the complexity by 3.

    Conclusion

    These results might paint a black-and-white picture, but keep in mind that both Elixir and Java have their advantages and shortcomings. If we are talking about CPU or memory-intensive operations in low concurrency mode, Java is the clear winner thanks to its programming model and the huge amount of optimisation done in the JVM. On the other hand, Elixir is a better choice for highly concurrent, available and scalable applications, not to mention the field of fault-tolerant systems. Elixir also has the advantage of being a more modern language with less syntactic clutter and less need to write boilerplate code.

    The post Comparing Elixir vs Java appeared first on Erlang Solutions.


      Erlang Solutions: A Comprehensive Guide to Ruby v Elixir

      news.movim.eu / PlanetJabber • 9 May, 2024 • 8 minutes

    Deciding which programming language is best for your long-term business strategy is a difficult decision. If you’re weighing up Ruby and Elixir, or considering making a shift from one to the other, you probably have a lot of questions about both languages.

    So let’s compare these widely popular and dynamic languages: Elixir and Ruby. We’ll explore the advantages and disadvantages of each language, as well as their optimal use cases and other key points, providing you with a clearer insight into both.

    Ruby and Elixir: A History

    To gain a better understanding of the frequent Ruby and Elixir comparisons, let’s take it back to the 90s, when Ruby was created by Yukihiro Matsumoto. He combined the best features of Perl, Eiffel, Ada, Lisp and Smalltalk to simplify the tasks of developers. But Ruby’s popularity surged with the release of the open-source framework Ruby on Rails.

    This launch proved to be revolutionary in the world of web development, making code tasks achievable in a matter of days instead of months. As one of the leading figures on the Rails Core Team, Jose Valim recognised the potential for evolution within the Ruby language.

    In 2012, Elixir was born: a functional programming language built on the Erlang virtual machine (VM). The aim of Elixir was to create a language with the friendly syntax of Ruby while boasting fault tolerance, concurrency capabilities and a commitment to developer satisfaction.

    The Elixir community also has Phoenix, an open-source framework created by Chris McCord. Working with Jose Valim and carrying over the core values of Ruby on Rails, McCord produced a highly effective web framework for the Elixir ecosystem.

    So what is Elixir?

    Elixir describes itself as a “dynamic, functional language for building scalable and maintainable applications.” It is a great choice for any situation where scalability, performance and productivity are priorities, particularly within IoT endeavours and web applications.

    Elixir runs on the BEAM virtual machine, originating from Erlang’s virtual machine (VM). It is well known for managing fault-tolerant, low-latency distributed systems. Created in 1986 by the Ericsson company, Erlang was designed to address the growing demands within the telecoms industry.

    It was later released as free and open-source software in 1998 and has since grown in popularity thanks to the demand for concurrent services.

    What is Ruby?

    Ruby stands out as a highly flexible programming language. Developers who code in Ruby can alter the behaviour of the language itself, for example by reopening and modifying core classes. Unlike compiled languages like C or C++, Ruby is an interpreted language, similar to Python.

    But unlike Python, which favours a single, definitive solution for every problem, Ruby projects tend to take on multiple problem-solving approaches. Depending on your project, this approach has pros and cons.

    One hallmark of Ruby is its user-friendly nature. It hides a lot of intricate details from the programmer, making it much easier to use compared to other popular languages. But it also means that finding bugs in code can be harder.

    There is a major convenience factor to coding in Ruby. Any code that you write will run on any major operating system such as macOS, Windows, and Linux, without having to be ported.

    The pros of Elixir

    If you’re considering migrating from Ruby to Elixir, you’ll undoubtedly be looking into its benefits and some key advantages it has over other languages. So let’s jump into some of its most prominent features.

    Built on the BEAM

    As mentioned, Elixir operates on the Erlang virtual machine (BEAM), one of the oldest virtual machines in the industry and still widely used. BEAM is ideal for building and managing systems with many concurrent connections.

    Immutable Data

    A major advantage of Elixir is its support for immutable data, which simplifies code understanding. Elixir ensures that data is unchanged once it has been defined, enhancing code reliability by preventing unexpected changes to variables, and making for easier debugging.
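
    A minimal sketch of what this means in practice (with illustrative data): “updating” a map produces a new value, and the original stays intact:

        user = %{name: "Ada", logins: 1}
        updated = %{user | logins: 2}

        user.logins    # => 1  (the original is unchanged)
        updated.logins # => 2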

    Top Performance

    Elixir offers excellent performance. Phoenix, the most popular web development framework in Elixir, boasts remarkable speed and response times (a matter of milliseconds). While Rails isn’t a slow framework either, Elixir’s performance edges it out, making it a superior choice.

    Parallelism

    Parallel systems often face latency and responsiveness challenges because a single task can monopolise computing power. Elixir addresses this with its preemptive process scheduler, which proactively reallocates control between processes.

    So even under heavy loads, a slow process isn’t able to significantly impact the overall performance of an Elixir application. This capability ensures low latency, a key requirement for modern web applications.
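
    A minimal sketch of this behaviour (hypothetical workload): one deliberately slow task does not prevent the cheap ones from completing promptly, because the scheduler keeps switching between processes:

        1..8
        |> Task.async_stream(
          fn
            4 -> Process.sleep(5_000)  # one slow task...
            n -> n * n                 # ...does not block the others
          end,
          max_concurrency: 8,
          timeout: 10_000
        )
        |> Enum.to_list()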

    Highly Fault-Tolerant

    In most programming languages, a bug in one part of the code can crash the whole application. Elixir handles this differently: it has unmatched fault tolerance.

    A fan-favourite feature: Elixir inherits Erlang’s “let it crash” philosophy, allowing supervisors to restart processes after a critical failure. This eliminates the need for complex recovery strategies.
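
    A minimal sketch of the idea (with a hypothetical worker module): the supervisor restarts the process whenever it crashes, so the worker itself needs no defensive code:

        defmodule Worker do
          use GenServer

          def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)

          @impl true
          def init(arg), do: {:ok, arg}
        end

        # If Worker crashes for any reason, the supervisor starts a fresh one.
        Supervisor.start_link([{Worker, []}], strategy: :one_for_one)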

    Distributed Concurrency

    Elixir supports distributed concurrency, allowing you to run concurrent processes both on a single computer and across multiple machines.
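
    A minimal sketch (assuming a second node named :"b@host" is already running and connected): the same primitive spawns a process locally or on a remote machine:

        # The function runs on the remote node; the calling code looks the same
        # as for a local spawn.
        Node.spawn_link(:"b@host", fn ->
          IO.puts("running on #{Node.self()}")
        end)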

    Scalability

    Elixir gets the most out of a single machine, making it perfect for systems or applications that need to scale or sustain heavy traffic. Thanks to its architecture, there’s no need to continuously add servers to accommodate demand.

    The pros of Ruby

    Now let’s explore the benefits that Ruby has to offer. There are massive advantages for developers, from its expansive library ecosystem to its user-friendly syntax and supportive community.

    Huge ecosystem

    Not only is Ruby a very user-friendly programming language, but it also boasts a vast library ecosystem. Whatever feature you want to implement, there’s likely something available to help you develop swift applications.

    Easy to work with

    Ruby’s creator aimed to make development a breeze and pleasant for users. For this reason, Ruby is straightforward and clean, with an easily understandable syntax. This makes for very easy and productive development, which is why it remains such a popular choice with developers.

    Helpful, vibrant community

    The Ruby community is a vibrant one that thrives on consistently publishing solutions that are readily available to the public. This environment is very advantageous for new developers, who can easily seek assistance and find valuable solutions online.

    Commitment to standards

    Ruby offers strong support for web standards across all aspects of an application, from its user interface to data transfer.

    When building an application with Ruby, developers adhere to already established software design principles such as  “coding by convention,” “don’t repeat yourself,” and the “active record pattern.”

    So why are all of these points considered so advantageous to Ruby?

    Firstly, these conventions simplify the learning curve for beginners and enhance the professional experience. They also lend themselves to better code readability, which is great for collaboration. Finally, they reduce the amount of code needed to implement features.

    Key differences between Ruby and Elixir

    There are some significant differences between the two powerhouse languages. While Elixir and Ruby are both versatile, dynamic languages, Elixir code undergoes ahead-of-time compilation to Erlang VM bytecode, which enhances its single-core performance substantially. Elixir focuses on code readability and expressiveness, and its robust macro system facilitates easy extensibility.

    Elixir and Ruby’s syntax also differ in several ways. For instance, Elixir uses pipes (the |> operator) to pass the outcome of one expression as the first argument to the next function, while Ruby employs “.” for method chaining.
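
    A minimal sketch of the difference (with illustrative data); the rough Ruby equivalent is shown as a comment:

        "  hello world  "
        |> String.trim()
        |> String.upcase()
        |> String.split()
        # => ["HELLO", "WORLD"]

        # Rough Ruby equivalent, using method chaining:
        #   "  hello world  ".strip.upcase.split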

    Also, Elixir provides explicit support for immutable data structures, a feature not directly present in Ruby, and it offers first-class support for typespecs, a capability Ruby lacks.
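
    A minimal sketch of a typespec (for a hypothetical function): tools such as Dialyzer can then check callers against the declared contract:

        defmodule Geometry do
          @spec area(number(), number()) :: number()
          def area(width, height), do: width * height
        end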

    Best use for developers

    Elixir is a great option for developers who want the productivity of Ruby combined with greater scalability. It performs just as well as Ruby for Minimum Viable Products (MVPs) and startups, while demonstrating robust scalability for extensive applications.

    For companies who want swift delivery without sacrificing quality, Elixir works out as a great overall choice.

    Exploring the talent pool for Elixir and Ruby

    Elixir is a newer language than Ruby and therefore has a smaller pool of developers. But let’s not forget it’s also a functional language, and functional programming typically demands a different way of thinking compared to object-oriented programming.

    As a result, Elixir developers tend to have more experience and understanding of programming concepts. And the Elixir community is also rapidly growing. Those who are familiar with Ruby commonly make the switch to Elixir.

    Although Elixir developers might be more difficult to find, once you do find them, they are worth their weight in gold.

    Who is using Ruby and Elixir?

    Let’s take a look at some highly successful companies that have used Ruby and Elixir:

    Ruby

    Airbnb: Uses Ruby on Rails for its web platform, including features like search, booking, and reviews.

    GitHub: Is built primarily using Ruby on Rails.

    Shopify: Relies on Ruby on Rails for its backend infrastructure.

    Basecamp: Built using Ruby on Rails.

    Kickstarter: Uses Ruby on Rails for its website and backend services.

    Elixir

    Discord: Uses Elixir for its real-time messaging infrastructure, benefiting from Elixir’s concurrency and fault tolerance.

    Pinterest: Takes advantage of Elixir’s scalability and fault-tolerance features.

    Bleacher Report: A sports news website that uses Elixir for its backend services, including real-time updates and notifications.

    Moz: Uses Elixir for its backend services, benefiting from its concurrency model and fault tolerance.

    Toyota Connected: Leverages Elixir for building scalable and fault-tolerant backend systems for connected car applications.

    So, what to choose?

    Migrating between programming languages, or simply deciding between them, presents an opportunity to enhance performance, scalability and robustness. But it is a journey, one that requires careful planning and execution to achieve the best long-term results for your business.

    While the Ruby community offers longevity, navigating outdated solutions can be a challenge. Nonetheless, the overlap between the Ruby and Elixir communities fosters a supportive environment for transitioning from one to the other. Elixir has a learning curve that may deter some, but for developers seeking typespecs and the benefits of parallel computing, it is invaluable.

    If you’re already working with existing Ruby infrastructure, incorporating Elixir to address scaling and reliability issues is a viable option. The synergies between the two languages promote a seamless transition.

    Ultimately, while Ruby remains a solid choice, the advantages of Elixir make it a compelling option worth considering for future development and business growth.

    The post A Comprehensive Guide to Ruby v Elixir appeared first on Erlang Solutions.


      The XMPP Standards Foundation: The XMPP Newsletter April 2024

      news.movim.eu / PlanetJabber • 5 May, 2024 • 5 minutes

    XMPP Newsletter Banner

    Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of April 2024.

    XSF Announcements

    If you are interested in joining the XMPP Standards Foundation as a member, please apply by 19th May 2024!

    XMPP and Google Summer of Code 2024

    The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and are in the community bonding phase now:

    XSF and Google Summer of Code 2024

    XSF Fiscal Hosting Projects

    The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

    XMPP Events

    • XMPP Sprint in Berlin : On Friday, 12th to Sunday, 14th of July 2024.
    • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

    Articles

    Software News

    Clients and Applications

    Servers

    Libraries & Tools

    Extensions and specifications

    The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

    Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

    Proposed

    The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

    • No XEP was proposed this month.

    New

    • No new XEPs this month.

    Deferred

    If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

    • No XEPs deferred this month.

    Updated

    • Version 0.7.0 of XEP-0333 (Displayed Markers)
      • Change title to “Displayed Markers”
      • Bring back Service Discovery feature (dg)
    • Version 0.4.1 of XEP-0440 (SASL Channel-Binding Type Capability)
      • Recommend the usage of tls-exporter over tls-server-end-point (fs)
    • Version 0.2.1 of XEP-0444 (Message Reactions)
      • fix grammar and spelling (wb)
    • Version 1.0.1 of XEP-0388 (Extensible SASL Profile)
      • Fixed typos (md)

    Last Call

    Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

    • XEP-0398: User Avatar to vCard-Based Avatars Conversion

    Stable

    • Version 1.0.0 of XEP-0386 (Bind 2)
      • Accept as Stable as per Council Vote from 2024-04-02. (XEP Editor (dg))
    • Version 1.0.0 of XEP-0388 (Extensible SASL Profile)
      • Accept as Stable as per Council Vote from 2024-04-02. (XEP Editor (dg))
    • Version 1.0.0 of XEP-0333 (Displayed Markers)
      • Accept as Stable as per Council Vote from 2024-04-17. (XEP Editor (dg))
    • Version 1.0.0 of XEP-0334 (Message Processing Hints)
      • Accept as Stable as per Council Vote from 2024-04-17 (XEP Editor (dg))

    Deprecated

    • No XEP deprecated this month.

    Rejected

    • XEP-0360: Nonzas (are not Stanzas)

    Spread the news

    Please share the news on other networks:

    Subscribe to the monthly XMPP newsletter.

    Also check out our RSS Feed!

    Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

    Newsletter Contributors & Translations

    This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

    • English (original): xmpp.org
      • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, XSF iTeam
    • French: jabberfr.org and linuxfr.org
      • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
    • Italian: notes.nicfab.eu
      • Translators: nicola

    Help us to build the newsletter

    This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad . At the end of each month, the pad’s content is merged into the XSF Github repository . We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

    Tasks we do on a regular basis:

    • gathering news in the XMPP universe
    • short summaries of news and events
    • summary of the monthly communication on extensions (XEPs)
    • review of the newsletter draft
    • preparation of media images
    • translations
    • communication via media accounts

    License

    This newsletter is published under the CC BY-SA license.

    xmpp.org/2024/05/the-xmpp-newsletter-april-2024/