

      Erlang Solutions: Why do systems fail? Tandem NonStop system and fault tolerance

      news.movim.eu / PlanetJabber • 3 October • 6 minutes

    If you’re an Elixir, Gleam, or Erlang developer, you’ve probably heard about the capabilities of the BEAM virtual machine, such as concurrency, distribution, and fault tolerance. Fault tolerance was one of the biggest concerns of Tandem Computers, who created their NonStop architecture to provide high availability in systems such as ATMs and mainframes.

    In this post, I’ll be sharing the fundamentals of the NonStop architecture design with you. Their approach to achieving high availability in the presence of failures is similar to some implementations in the Erlang Virtual Machine, as both rely on concepts of processes and modularity.

    Systems with High Availability

    Why do systems fail? This question should probably be asked more often, considering all the factors it involves. It was central to the NonStop architecture because achieving high availability depends on understanding system failures.

    For Tandem, any system has critical components that could potentially cause failures. How often do you ask yourself how long your system can operate before a failure? There is a metric known as MTBF (mean time between failures), calculated by dividing the total operating hours of the system by the number of failures. The result represents the average hours of uninterrupted operation.

    Many factors can affect the MTBF, including administration, configuration, maintenance, power outages, hardware failures, and more. So, how can you survive these eventualities and keep your systems virtually always available?
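    As a quick illustration of the MTBF formula described above (the helper name and the numbers are mine, not Tandem’s):

```python
def mtbf(total_operating_hours, failure_count):
    """Mean Time Between Failures: operating hours per failure."""
    if failure_count == 0:
        raise ValueError("no failures recorded; MTBF is undefined")
    return total_operating_hours / failure_count

# A system that ran 8,760 hours (one year) and failed 4 times:
print(mtbf(8760, 4))  # 2190.0 hours of uninterrupted operation, on average
```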

    Tandem NonStop critical components

    High availability in hardware has taught us important insights about continuous operation. Some hardware implementations rely on decomposing the system into modules, allowing for modularity to contain failures and maintain operation through backup modules instead of breaking the whole system and needing to restart it. The main concept, from this point of view, is to use modules as units of failure and replacement.

    Tandem NonStop system in modules

    High Availability for Software Systems

    But what about high availability in software? Just as with hardware, we can draw important lessons from operating system designers who decompose systems into modules as units of service. This approach provides a unit of protection and fault containment.

    To achieve fault tolerance in software, it’s important to address similar insights from the NonStop design:

    • Modularity through processes and messages.
    • Fault containment.
    • Process pairs for fault tolerance.
    • Data integrity.

    Can you recognise some similarities so far?

    The NonStop architecture essentially relies on these concepts. The key to high availability, as I mentioned before, is modularity as a unit of service failure and protection.

    A process should have a fail-fast mechanism, meaning it should be able to detect a failure during its operation, send a failure signal, and then stop. In this way, a system achieves fault detection through fault containment and by sharing no state.
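    As a loose sketch of that idea (a toy in Python, not Tandem’s actual mechanism): a fail-fast unit validates what it handles, and on detecting a fault it signals a monitor and stops rather than limping along with corrupt state:

```python
class FailFastWorker:
    """Toy fail-fast unit: detect a fault, signal it, stop. Names are illustrative."""
    def __init__(self, on_failure):
        self.on_failure = on_failure  # failure signal sent to a monitor/backup
        self.alive = True

    def handle(self, request):
        if not self.alive:
            raise RuntimeError("worker already stopped")
        if not isinstance(request, int):   # detected fault: bad input/state
            self.alive = False             # stop immediately...
            self.on_failure(request)       # ...after signalling the failure
            return None
        return request * 2                 # normal operation

failures = []
w = FailFastWorker(failures.append)
print(w.handle(21))        # 42
w.handle("oops")           # fault detected: worker signals and halts
print(w.alive, failures)   # False ['oops']
```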

    Tandem NonStop primary backup

    Another important consideration for your system is how long it takes to recover from a failure. Jim Gray, a software designer and researcher at Tandem Computers, proposed in his paper “Why Do Computers Stop and What Can Be Done About It?” a failure model involving two kinds of bugs: Bohrbugs, which are deterministic and reliably cause critical failures, and Heisenbugs, which are transient, hard to reproduce, and can lurk in a system for years.

    Implementing Process-Pairs Strategies

    The previous categorisation helps us better understand strategies for implementing a process-pairs design, based on a primary process and a backup process:

    • Lockstep: Primary and backup processes execute the same task, so if the primary fails, the backup continues the execution. This is good for hardware failures, but in the presence of Heisenbugs, both processes will hit the same failure.
    • State checkpointing: A requestor entity is connected to a process-pair. When the primary process stops operating, the requestor switches to the backup process. You need to design the requestor logic yourself.
    • Automatic checkpointing: Similar to the previous strategy, but the kernel manages the checkpointing.
    • Delta checkpointing: Similar to state checkpointing, but using logical rather than physical updates.
    • Persistence: When the primary process fails, the backup process starts its operation without state. The system must implement a way to synchronise all the modules and avoid corrupted interactions.

    Tandem NonStop processes pairs
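    The state-checkpointing variant can be sketched roughly as follows (a toy model with invented names, not the NonStop kernel’s actual machinery): the primary copies its state to the backup after every request, and the requestor fails over when the primary dies:

```python
class Process:
    def __init__(self):
        self.state = 0
        self.alive = True
    def apply(self, delta):
        self.state += delta
        return self.state

class ProcessPair:
    """State checkpointing, sketched: the primary forwards each new state
    to the backup; the requestor switches over when the primary dies."""
    def __init__(self):
        self.primary, self.backup = Process(), Process()
    def request(self, delta):
        if self.primary.alive:
            result = self.primary.apply(delta)
            self.backup.state = self.primary.state   # checkpoint to the backup
            return result
        return self.backup.apply(delta)              # failover: backup takes over

pair = ProcessPair()
pair.request(5)
pair.request(3)             # primary state is 8, checkpointed to the backup
pair.primary.alive = False  # primary crashes
print(pair.request(2))      # 10 -- backup continues from the last checkpoint
```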

    All of these insights are drawn from Jim Gray’s paper, written in 1985 and referenced in Joe Armstrong’s 2003 thesis, “Making Reliable Distributed Systems in the Presence of Software Errors”. Joe emphasised the importance of the Tandem NonStop system design as an inspiration for the OTP design principles.

    Elixir and High Availability

    So if you’re a software developer learning Elixir, you’ll probably be amazed by all the capabilities and great tooling available for building software systems. By leveraging frameworks like Phoenix and toolkits such as Ecto, you can build full-stack systems in Elixir. However, to fully harness the power of the Erlang virtual machine (BEAM), you must understand processes.

    Just as the Tandem computer system relied on transactions, fault containment and a fail-fast mechanism, Erlang achieves high availability through processes. Both systems consider it important to modularise systems into units of service and failure: processes.

    About the process

    A process is the basic unit of abstraction in Erlang, a crucial concept because the Erlang virtual machine (BEAM) operates around this. Elixir and Gleam share the same virtual machine, which is why this concept is important for the entire ecosystem.

    A process:

    • Is a strongly isolated entity.
    • Is lightweight to create and destroy.
    • Can interact with other processes only through message passing.
    • Shares no state with other processes.
    • Does what it is supposed to do, or fails.

    Just remember, these are the fundamentals of Erlang, which is considered a message-oriented language, and its virtual machine (BEAM), on which Elixir runs.
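    BEAM processes are not OS threads, but as a rough analogy (purely illustrative, and far heavier than the real thing), a “process” that owns its state and is reachable only through a mailbox can be mimicked with Python threads and queues:

```python
import queue, threading

def doubler(mailbox, replies):
    """A toy 'process': owns its state, reachable only via its mailbox."""
    while True:
        msg = mailbox.get()
        if msg == "stop":          # a control message ends the process
            break
        replies.put(msg * 2)       # reply by message passing, not shared state

mailbox, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=doubler, args=(mailbox, replies))
t.start()
mailbox.put(21)
print(replies.get())   # 42
mailbox.put("stop")
t.join()
```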

    Tandem NonStop BEAM

    If you want to read more about processes in Elixir I recommend reading this article I wrote: Understanding Processes for Elixir Developers.

    I consider it important to read papers like Jim Gray’s article because they teach us the history behind implementations that attempt to solve problems. I find it interesting to read and share these insights with the community because it’s crucial to understand the context behind the tools we use. Recognising that implementations exist for a reason and have stories behind them is essential.

    You can find many similarities between Tandem and Erlang design principles:

    • Both aim to achieve high availability.
    • Isolation of operations is extremely important to contain failure.
    • Processes that share no state are crucial for building modular systems.
    • Process interactions are key to maintaining operation in the presence of errors. While Tandem computers implemented a process-pairs design, Erlang implemented OTP patterns.

    To conclude

    Take some time to read about the Tandem computer design. It’s interesting because these features share significant similarities with OTP design principles for achieving high availability. Failure is something we need to deal with in any kind of system, and it’s important to be aware of the reasons and know what you can do to manage it and continue your operation. This is crucial for any software developer, but if you’re an Elixir developer, you’ll probably dive deeper into how processes work and how to start designing components with them and OTP.

    Thanks for reading about the Tandem NonStop system. If you like this kind of content, I’d appreciate it if you shared it with your community or teammates. You can visit this public repository on GitHub where I’m adding my graphic recordings and insights related to the Erlang ecosystem or contact the Erlang Solutions team to chat more about Erlang and Elixir.

    Tandem NonStop Joe Armstrong

    Illustrations by Visual Partner-Ship @visual_partner

    Jaguares, ESL Americas Office

    @carlogilmar

    The post Why do systems fail? Tandem NonStop system and fault tolerance appeared first on Erlang Solutions.


      This post is public

      www.erlang-solutions.com/blog/why-do-systems-fail-tandem-nonstop-system-and-fault-tolerance/


      Ignite Realtime Blog: XMPP: The Protocol for Open, Extensible Instant Messaging

      news.movim.eu / PlanetJabber • 2 October • 7 minutes

    Introduction to XMPP

    XMPP, the Extensible Messaging and Presence Protocol, is an Instant Messaging (IM) standard of the Internet Engineering Task Force (IETF) - the same organization that standardized Email (POP/IMAP/SMTP) and the World Wide Web (HTTP) protocols. XMPP evolved out of the early XML streaming technology developed by the XMPP Open Source community and is now the leading protocol for exchanging real-time structured data. XMPP can be used to stream virtually any XML data between individuals or applications, making it a perfect choice for applications such as IM.

    A Brief History

    IM has a long history, existing in various forms almost as soon as computers were attached to networks. Most IM systems were designed in isolation using closed networks and/or proprietary protocols, meaning each system could only exchange messages with users on the same IM network. Users on different IM networks often can’t send or receive messages at all, or do so with drastically reduced features, because the messages must be transported through “gateways” that use a least-common-denominator approach to message translation.

    The problem of isolated, proprietary networks in today’s IM systems is similar to that of email systems in the early days of computer networks. Fortunately for email, the IETF created early standards defining the protocols and data formats to be used for exchanging email. Email software vendors rapidly switched to the IETF standards, providing universal exchange of email among all email users on the Internet.

    In 2004, the IETF published RFC 3920 and RFC 3921 (the “Core” and “Instant Messaging and Presence” specifications), officially adding XMPP, mostly known as Jabber at the time, to the list of Internet standards. A year later, Google introduced Google Talk, a service that uses XMPP as its underlying protocol.

    Google’s endorsement of the XMPP protocol greatly increased the visibility and popularity of XMPP and helped pave the way for XMPP to become the Internet IM standard. Over the years, more and more XMPP-based solutions followed: from WhatsApp, Jitsi, Zoom and Grindr in the IM sphere, via Google Cloud Print, Firebase Cloud Messaging and Logitech’s Harmony Hub in the IoT realm, to Nintendo Switch, Fortnite and League of Legends in the world of gaming.

    XMPP: Open, Extensible, XML Instant Messaging

    The XMPP protocol benefits from three primary features that appeal to administrators, end users and developers: an IETF open standard, XML data format, and simple extensions to the core protocol. These benefits combine to position XMPP as the most compelling IM protocol available for businesses, consumers, and organizations of any size.

    Open Standard Benefits

    The fact that XMPP is an open standard has led to its adoption by numerous software projects that cover a broad range of environments and users. This has helped improve the overall design of the protocol, as well as ensured a “best of breed” market of client applications and libraries that work with all XMPP servers. The vibrant XMPP software marketplace contains 90+ compatible clients that operate on all standard desktop systems and mobile devices, from mobile phones to tablets.

    Wide adoption has provided real-world proof that XMPP-based software from different vendors, deployed by both large and small organizations, can work together seamlessly. For example, XMPP users logged into their personal home server and an employee logged into a corporate IM server can chat, see each other’s presence on their contact lists, and participate in chat rooms hosted on an Openfire XMPP server running at a university.

    XML Data

    XML is one of the most popular, robust data exchange formats in use today and has become a standard part of most software systems. A well-matured protocol, XMPP uses the XML data format to transport data over standard TCP/IP sockets and WebSockets, making the protocol and its data easy to use and understand. Any developer familiar with XML can immediately work with XMPP, as no special data format or other proprietary knowledge is needed. Existing tools for creating, reading, editing, and validating XML data can all be used with XMPP without significant modification. The XML foundation of XMPP greatly simplifies integration with existing environments and eases the movement of data to and from the XMPP network.

    Extending XMPP

    The extensible nature of XML provides much of the extension support built into XMPP. Through the use of XML namespaces, the XMPP protocol can be easily used to transport custom data in addition to standard IM messages and presence information. Software developers and companies interested in the real-time exchange of data are using XMPP as an alternative to custom data transport systems.
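    For illustration only (using Python’s standard XML tooling rather than a real XMPP library, and a made-up `urn:example:sensor` namespace), a standard `<message/>` stanza can carry an application-specific payload in its own namespace:

```python
import xml.etree.ElementTree as ET

# A standard <message/> stanza carrying an app-specific payload in its own
# XML namespace; "urn:example:sensor" is invented for this example.
msg = ET.Element("message", {"to": "logs@acme.com", "type": "normal"})
body = ET.SubElement(msg, "body")
body.text = "temperature report"
payload = ET.SubElement(msg, "{urn:example:sensor}reading")
payload.set("celsius", "21.5")

xml = ET.tostring(msg, encoding="unicode")
print(xml)  # the custom payload is serialized under its own xmlns
```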

    The XMPP community publishes standard extensions called XMPP Extension Protocols (XEPs) through the XMPP Standards Foundation (XSF). The XSF’s volunteer-driven process provides a way for companies creating innovative extensions and enhancements to the XMPP protocol to work together on standard improvements that all XMPP users benefit from. There are well over 400 XEPs today, covering a wide range of functionality, including security enhancements, user experience improvements, and VoIP and video conferencing. XEPs allow the XMPP protocol to rapidly evolve and improve in an open, standards-based way.

    XMPP Networks Explained

    An XMPP network is composed of all the XMPP clients and servers that can reach each other on a single computer network. The biggest XMPP network is available on the Internet and connects public XMPP servers. However, people are free to create private XMPP networks within a single company’s internal LAN, on secure corporate virtual private networks, or even within a private network running in a person’s home. Within each XMPP network, each user is assigned a unique XMPP address.

    Addresses - Just Like Email

    XMPP addresses look exactly the same as email addresses, containing a user name and a domain name. For example, sales@acme.com is a valid XMPP address for a user account named “sales” in the acme.com domain. It is common for an organization to issue the same XMPP address and email address to a user. Within the XMPP server, user accounts are frequently authenticated against the same common user account system used by the email system.

    XMPP addresses are generated and issued in the same way that email addresses are. Each XMPP domain is managed by the domain owner, and the XMPP server for that domain is used to create, edit, and delete user accounts. For example, the acme.com server is used to manage user accounts that end with @acme.com. If a company runs the acme.com server, the company sets its own policies and uses its own software to manage user accounts. If the domain is a hosted account at an Internet Service Provider (ISP), the ISP usually provides a web control panel to manage XMPP user accounts in the same way that email accounts are managed. The flexibility and control that the XMPP network provides is a major benefit of XMPP IM systems over proprietary public IM systems like WhatsApp, Telegram and Signal, where all user accounts are hosted by a third party.
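    A minimal sketch of splitting such an address into its parts (the helper name is mine, and it ignores the normalization rules of RFC 7622 that real servers apply):

```python
def split_jid(jid):
    """Split 'local@domain/resource' into its parts (resource may be absent).
    Real implementations also normalize each part (RFC 7622); this doesn't."""
    rest, _, resource = jid.partition("/")
    local, sep, domain = rest.rpartition("@")
    if not sep:                 # domain-only JIDs (e.g. a server address) are legal
        local, domain = "", rest
    return local, domain, resource or None

print(split_jid("sales@acme.com"))         # ('sales', 'acme.com', None)
print(split_jid("sales@acme.com/laptop"))  # ('sales', 'acme.com', 'laptop')
```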

    Server Federation

    XMPP is designed using a federated, client-server architecture. Server federation is a common means of spreading resource usage and control between Internet services. In a federated architecture, each server is responsible for controlling all activities within its own domain and works cooperatively with servers in other domains as equal peers.

    In XMPP, each client connects to the server that controls its XMPP domain. This server is responsible for authentication, message delivery and maintaining presence information for all users within the domain. If a user needs to send an instant message to a user outside of their own domain, their server contacts the external server that controls the “foreign” XMPP domain and forwards the message to that XMPP server. The foreign XMPP server takes care of delivering the message to the intended recipient within its domain. This same server-to-server model applies to all cross-domain data exchanges, including presence information.

    XMPP server federation is modeled after the design of Internet email, which has shown that the design scales to include the entire Internet and provides the necessary flexibility and control to meet the needs of individual domains. Each XMPP domain can define the level of security, quality of service, and manageability that make sense for their organization.
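    The delivery decision described above can be sketched as: compare the recipient’s domain with our own, deliver locally on a match, otherwise hand the message off to the server for the foreign domain. The callback names below are mine and stand in for real delivery and server-to-server machinery:

```python
def route(our_domain, to_jid, deliver_locally, forward_to_server):
    """Deliver within our domain, or hand off to the foreign domain's server."""
    domain = to_jid.rpartition("@")[2].partition("/")[0]
    if domain == our_domain:
        return deliver_locally(to_jid)
    return forward_to_server(domain, to_jid)

log = []
route("acme.com", "sales@acme.com",
      lambda jid: log.append(("local", jid)),
      lambda d, jid: log.append(("s2s", d, jid)))
route("acme.com", "alice@example.org",
      lambda jid: log.append(("local", jid)),
      lambda d, jid: log.append(("s2s", d, jid)))
print(log)  # [('local', 'sales@acme.com'), ('s2s', 'example.org', 'alice@example.org')]
```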

    Conclusion

    XMPP is open, flexible and extensible, making it the protocol of choice for real-time communications over the Internet. It enables the reliable transport of any structured XML data between individuals or applications. Numerous mission-critical business applications use XMPP, including chat and IM, network management and financial trading. With inherent security features and support for cross-domain server federation, XMPP is more than able to meet the needs of the most demanding environments.

    1 post - 1 participant

    Read full topic


      This post is public

      discourse.igniterealtime.org/t/xmpp-the-protocol-for-open-extensible-instant-messaging/94748


      ProcessOne: Matrix and XMPP: Thoughts on Improving Messaging Protocols – Part 1

      news.movim.eu / PlanetJabber • 2 October • 4 minutes

    For over two decades, ProcessOne has been developing large-scale messaging platforms, powering some of the largest services in the world. Our mission is to build the best messaging back-ends imaginable: an exciting yet complex challenge.

    We began with XMPP (eXtensible Messaging and Presence Protocol), but the need for interoperability and support for a variety of use cases led us to implement additional protocols. Our stack now supports:

    • XMPP (eXtensible Messaging and Presence Protocol): A robust, highly scalable, and flexible protocol for real-time messaging.
    • MQTT (Message Queuing Telemetry Transport): The standard for IoT messaging, ideal for lightweight communication between devices.
    • SIP (Session Initiation Protocol): A widely used standard for voice-over-IP (VoIP) communications.
    • Matrix: A decentralized protocol for secure, real-time communication.

    A Distributed Protocol That Replicates Data Across Federated Servers

    This brings me to the topic of Matrix. Matrix is designed not just to be federated but also distributed. While it uses the term “decentralized,” I find this slightly misleading. A federated protocol is inherently decentralized, as it allows users across different domains to communicate: think email, XMPP, and Matrix itself. What truly sets Matrix apart from XMPP is its data distribution model.

    Matrix is distributed because it aims to ensure that all participating nodes in a conversation have a copy of that conversation, typically in end-to-end encrypted form. This ensures high availability: if the primary node hosting a conversation becomes unavailable, the conversation can continue on another node.

    In Matrix, a conversation is represented as a graph, a replicated document containing all events related to a group discussion. You can think of each conversation as a mini-blockchain, except that instead of forming a chain, the events create a graph.
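    As a toy illustration of conversation-as-graph (not Matrix’s actual event format or its state-resolution algorithm), each event can reference the IDs of its parent events, so concurrent branches created on two servers can later be joined by a common child event:

```python
import hashlib, json

def event(body, parents):
    """An event whose ID covers its content and its parents' IDs,
    like a node in Matrix's event graph (heavily simplified)."""
    payload = json.dumps({"body": body, "parents": sorted(parents)}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:8], body, sorted(parents)

e1 = event("hello", [])
e2 = event("hi (from server A)", [e1[0]])
e3 = event("hi (from server B)", [e1[0]])   # a fork: A and B extend e1 concurrently
e4 = event("merged", [e2[0], e3[0]])        # a later event joins both branches

for eid, body, parents in (e1, e2, e3, e4):
    print(eid, body, "<-", parents)
```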

    Resource Penalty: Computing and Storage

    As with any design decision, there are trade-offs. In the case of Matrix, this comes with a performance penalty. Since conversations are replicated across nodes, the protocol performs merge operations to ensure consistency between the replicated data. The higher the traffic, the greater the cost of these merge operations, which adds CPU load on both the Matrix node and its database. Additionally, there is a significant cost in terms of storage.

    If the Matrix network were to scale massively, with many nodes and conversations, it would encounter the same growth challenges as blockchain protocols. Each node must store a copy of the conversation, and the amount of replication depends on the number of conversations and nodes globally. As these numbers grow, so does the replication factor.

    Comparison with XMPP

    XMPP, on the other hand, is event-based rather than document-based. It processes and distributes events in the order they arrive without attempting to merge conversation histories. This simpler approach avoids the replication of group chat data across federated nodes, but it comes with some limitations.

    Here’s how XMPP mitigates these limitations:

    • One-on-One Conversations: Each user’s messages are archived on their server, keeping the replication factor under control (usually limited to two copies).
    • Group Chats: If a chatroom goes down, the conversation becomes unavailable for both local and remote users. However, XMPP has strategies to reduce the need for data replication. Several servers implement clustering, making it possible to upgrade the service node by node. If one node is taken down (for maintenance, for instance), another node can take over the management of the chatroom.
    • Hot Code Upgrades: Some servers, like ejabberd, allow hot code upgrades, which means minor updates can be applied without shutting down the node, minimizing downtime and the need for data replication.
    • Message Archiving for MUC Rooms: Some servers offer message archiving for multi-user chat (MUC) rooms, and some also allow users to store recent chat history from selected MUC rooms on their local server for future reference.
    • Cluster Replication (ejabberd Business Edition): Chatrooms can be replicated within a cluster, ensuring they remain available even if a node crashes.

    Thanks to the typically high uptime of XMPP servers, especially for clustered services, intermittent availability of servers in the network hasn’t posed a significant issue, so large-scale data replication hasn’t been a necessity.

    What’s Next for XMPP?

    This comparison suggests potential improvements for XMPP. Could XMPP benefit from an optional feature to address the centralized nature of chatrooms? Possibly. What if there were a caching and resynchronization protocol for multi-user chatrooms across different servers? This could enhance the robustness of the federation without the storage burden of full content replication, offering the best of both worlds.

    What’s Next for Matrix?

    Matrix, by design, comes with trade-offs. One of its key goals is to resist censorship, which is vital in certain situations and countries. That’s why we believe it is worth trying to improve the Matrix protocol to address those use cases. There’s still room to optimize the protocol, specifically by reducing the cost of running a distributed data store. We plan to propose and implement improvements on how merge operations work to make Matrix more efficient.

    I’ll share our proposals in the next article.

    As always, this is an open discussion, and I’d be happy to dive deeper into these topics if you’re interested.

    The post Matrix and XMPP: Thoughts on Improving Messaging Protocols – Part 1 first appeared on ProcessOne.

      This post is public

      www.process-one.net/blog/matrix-and-xmpp-thoughts-on-improving-messaging-protocols-part-1/


      Erlang Solutions: Erlang Concurrency: Evolving for Performance

      news.movim.eu / PlanetJabber • 26 September • 6 minutes

    Some languages are born performant and later tackle concurrency. Others are born concurrent and later build on performance. Systems programming languages like C and Rust are examples of the former; Erlang’s concurrency is an example of the latter.

    A mistake in concurrency can essentially let all hell loose, incurring incredibly hard-to-track bugs and even security vulnerabilities, and a mistake in performance can leave a product trailing behind the competition or even make it entirely unusable to begin with.

    It’s all risky trade-offs. But what if we can have a concurrent framework with competitive performance?

    Let’s see how Erlang answers that question.

    A look into scalable platforms

    C and Rust are languages traditionally considered performant: they compile to native machine instructions, enforce manual memory management, and expose semantics that map to those of the underlying hardware. Meanwhile, Erlang is a language famous for its top-notch concurrency and fault tolerance, and was built to be that way. Lightweight processes, cheap message passing, and share-nothing semantics are some of the key concepts behind that strength. Like everything in IT, there is a wide range of trade-offs between these two approaches.

    A case of encoding and decoding protocol

    An advantage of having a performance problem rather than a concurrency problem is that the former is much easier to track down, thanks to automated tooling like benchmarking and profiling. There is a common path for Erlang here: when you need raw performance, it offers a very easy-to-use foreign function interface. This is especially common for single-threaded algorithms, where the challenges of concurrency or parallelism don’t apply and the risks of a memory-unsafe language like C are minimal. The most common examples are hashing operations and protocol encoding and decoding, such as JSON or XML.

    Messaging platforms

    Many years ago, one of my first important contributions to MongooseIM, our extensible and scalable Instant Messaging server, was a set of optimisations around JIDs, short for Jabber IDentifiers, the identifiers XMPP uses to identify users and their sessions. After an important cleanup of JID usage, I noticed in profiles that calls to encoding and decoding operations were not only extremely common but also suspiciously slow. So I wrote my first NIF, implementing these basic and very simple operations in C, and later moved this code to its own repo.

    Fast-forward a few years, and I had the chance to revert to the original Erlang code, first for the encoding and later for the decoding. The same very straightforward Erlang code had become a lot faster than the very carefully optimised C. But this is a very simple algorithm, so let’s push the limits in more interesting places.

    The evolution of the Erlang runtime

    OTP releases have been delivering more and more powerful performance improvements, especially since the JIT compiler was introduced. To put things into context, when I wrote those NIFs, no JIT compiler was available. Ever since, beating C code has become feasible in a myriad of scenarios.

    For example, one place where the community has put some emphasis on building ever more performant code is JSON parsers. Starting from Elixir’s Jason and Poison, we also have Thoas, a pure-Erlang alternative. They all ran their benchmarks competing most fiercely against jiffy, the archetypal C solution. Recently, a JSON module was incorporated directly into Erlang/OTP. See the benchmarks yourself, for encoding and decoding: it beats jiffy in virtually all scenarios, sometimes by a wide margin.

    Building communities

    Here’s a more interesting example. I’ve been working on a MongooseIM extension for FaceIT, the leading independent competitive gaming platform for online multiplayer PvP gamers. They provide a platform where you can find the right matching peers, that keeps the community healthy and thriving, that makes sure matches are fair and no cheating is allowed, and, ultimately, where you can build relationships with your peers.

    These communities are enabled by MongooseIM and need presence lists.

    The challenge of managing presence for such a large community is two-fold. First, we want to reduce network operations by introducing pagination and real-time updates of deltas only, so that your device can update the view on display without any network redundancy. Then, we want to manage such large data structures with maximal performance.

    Native code

    This is archetypal: a very large data structure requiring fast updates is not where an immutable runtime would shine, as “mutations” are in reality copying. So you consider a solution in a classically performant language, for example C, or better yet, a memory-safe one like Rust. We quickly found a ready-made solution: a Rust implementation of an indexed sorted set, a very good option, probably the best around. Let’s take a measurement for reference: inserting a new element at any point in a set of 250K elements takes ~4μs. The question then is: can pure Erlang code compete with ~4μs?

    Data structures

    You can find more details about this endeavour in a guest blog I wrote for FaceIT. In a nutshell, the disadvantage is the enormous amount of copying that any mutation implies. So the first idea is to use a list of lists instead: this is pretty much a skip list. A list of lists of lists is just a skip list with more lanes, and all these lanes require constant rebalancing.

    Another data structure famous for requiring rebalancing is the balanced binary tree, and Erlang ships with an implementation of general balanced trees based on a research paper by Prof. Arne Andersson. This will be our candidate. The only thing missing is operations on indexes, as we want to know the positions at which modifications took place. Wikipedia has a hint here: Order Statistic Trees. In turn, extending `gb_sets` to support indexes took no more than a couple of hours, and we’re ready to go.
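    To illustrate how subtree-size counters yield the index of a modification (a toy unbalanced BST, not the actual `gb_sets` extension, and without the rebalancing that general balanced trees perform):

```python
class Node:
    def __init__(self, key):
        self.key, self.size = key, 1
        self.left = self.right = None

def insert(node, key):
    """Insert key; return (node, index) where index is the key's position
    in sorted order. The size fields make the index an O(depth) by-product."""
    if node is None:
        return Node(key), 0
    node.size += 1
    if key < node.key:
        node.left, idx = insert(node.left, key)
        return node, idx
    left_size = node.left.size if node.left else 0
    node.right, idx = insert(node.right, key)
    return node, left_size + 1 + idx    # skip the left subtree and this node

root = None
for k in [50, 20, 80, 10]:
    root, _ = insert(root, k)
root, idx = insert(root, 30)
print(idx)   # 2 -- 30 lands at index 2 of the sorted set {10, 20, 30, 50, 80}
```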


    Benchmarking

    5μs!

    Adding an element at any point of a 250K-element set takes on average 5μs! In pure Erlang!

    The algorithm has an amortised runtime of `O(1.46*log2(n))`, and the logarithm base two of 250K is ~18, so any operation takes at most about 26 (1.46 × 18) steps. And even including the most unfortunate operations that required the full depth and some rebalancing, the worst case is still 17μs.

    What about getting pages? Getting the first 128 elements of the set takes 92μs. To add a bit more detail, the same test takes 100μs on OTP26, that is, by only upgrading the Erlang version we already get an 8% performance improvement.
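    A quick sanity check of the step count implied by that `O(1.46*log2(n))` bound for n = 250,000:

```python
import math

n = 250_000
depth = math.log2(n)    # ~= 17.93, i.e. ~18 levels in the tree
steps = 1.46 * depth    # worst-case bound on steps per operation
print(round(depth, 2), round(steps, 1))   # 17.93 26.2
```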

    Conclusion

    We’ve seen a scenario with MongooseIM JIDs where code once rewritten in C for performance reasons could simply be reverted to the original Erlang and beat C’s performance; a case with JSON parsing where well-crafted Erlang code beat the supposedly unbeatable C alternative; and, at last, a problem where a very large data structure written in Rust wasn’t necessary, as a simple pure-Erlang implementation was just as competitive.

We have a runtime that was born concurrent and evolved to be performant. We have a runtime that is memory-safe and makes the enormously complex problems of concurrency and distribution easy, allowing the developer to focus on the actual business case while massively reducing the costs of support and the consequences of failure. A runtime where performance does not need to be sacrificed to achieve these things. A runtime that enables you to write messaging systems (WhatsApp, Discord, MongooseIM), websites (Phoenix and the insanely performant JSON parsing), and community platforms (FaceIT).


    We know how to make this all work, and how to keep the ecosystem evolving. If you want to see the power of Erlang and Elixir in your business case, contact us !


    The post Erlang Concurrency: Evolving for Performance appeared first on Erlang Solutions .


      This post is public

      www.erlang-solutions.com /blog/erlang-concurrency-evolving-for-performance/


      Ignite Realtime Blog: Openfire 4.9.0 release!

      news.movim.eu / PlanetJabber • 17 September • 1 minute

    The Ignite Realtime community is happy to be able to announce the immediate availability of version 4.9.0 of Openfire , its cross-platform real-time collaboration server based on the XMPP protocol!

Compared to the previous non-patch release, this one is a bit smaller. It is mostly a maintenance release, and includes some preparations (mainly deprecations) for a future release.

    Highlights for this release:

    • A problem has been fixed that caused, under certain conditions, a client connection to be disconnected. This appears to have affected clients sending multi-byte character data more than others.
    • Community member Akbar Azimifar has provided a full Persian translation for Openfire!

The list of changes that have gone into the Openfire 4.9.0 release includes a few more items, though! Please review the change log for all of the details.

    Interested in getting started? You can download installers of Openfire here . Our documentation contains an upgrade guide that helps you update from an older version.

    The integrity of these artifacts can be checked with the following sha256sum values:

    7973cc2faef01cb2f03d3f2ec59aff9b2001d16b2755b4cc0da48cc92b74d18a  openfire-4.9.0-1.noarch.rpm
    a0cd627c629b00bb65b6080e06b8d13376ec0a4170fd27e863af0573e3b4f791  openfire_4.9.0_all.deb
    bf62c02b0efe1d37fc505f6942a9cf058975746453d6d0218007b75b908a5c3c  openfire_4_9_0.dmg
    1082d9864df897befa47230c251d91ec0780930900b2ab2768aaabd96d7b5dd9  openfire_4_9_0.exe
    12a4a5e5794ecb64a7da718646208390d0eb593c02a33a630f968eec6e5a93a0  openfire_4_9_0.tar.gz
    c86bdb1c6afd4e2e013c4909a980cbac088fc51401db6e9792d43e532963df72  openfire_4_9_0_x64.exe
    97efe5bfe8a7ab3ea73a01391af436096a040d202f3d06f599bc4af1cd7bccf0  openfire_4_9_0.zip
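
For example, on Linux the downloaded artifact can be verified by feeding the published hash to `sha256sum --check`. The commands below use a throwaway stand-in file so they run standalone; substitute the Openfire artifact you actually downloaded and its published hash from the list above:

```shell
# Create a stand-in file; in practice this would be e.g. openfire_4_9_0.tar.gz
printf 'hello\n' > /tmp/openfire-demo.bin

# Compute the digest of the download...
sha256sum /tmp/openfire-demo.bin

# ...and verify it against the published value; exit status 0 means it matches
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  /tmp/openfire-demo.bin" \
  | sha256sum --check -
```

Note the two spaces between the hash and the filename: `sha256sum --check` expects exactly the format that `sha256sum` itself emits.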
    

    We would love to hear from you! If you have any questions, please stop by our community forum or our live groupchat . We are always looking for volunteers interested in helping out with Openfire development!

    For other release announcements and news follow us on Mastodon or X

    1 post - 1 participant

    Read full topic


      Ignite Realtime Blog: Openfire Hazelcast plugin version 3.0.0

      news.movim.eu / PlanetJabber • 12 September

    Earlier today, we blogged about a boatload of Openfire plugins for which we made available maintenance releases.

    Apart from that, we’ve also made a more notable release: that of the Hazelcast plugin for Openfire.

    The Hazelcast plugin for Openfire adds clustering support to Openfire. It is based on the Hazelcast platform .

    This release brings a major upgrade of the platform. It migrates from version 3.12.5 to 5.3.7.

    As a result, replacing an older version of the plugin with this new release requires some careful planning. Notably, the configuration stored on-disk has changed (and is unlikely to be compatible between versions). Please refer to the readme of the plugin for details.

    A big thank you goes out to community member Arwen for making this upgrade happen!

    As usual, the new version of the plugin will become available in your Openfire server within the next few hours. Alternatively, you can download the plugin from its archive page .

    For other release announcements and news follow us on Mastodon or X



      Ignite Realtime Blog: Openfire plugin maintenance releases!

      news.movim.eu / PlanetJabber • 12 September

    The Ignite Realtime community is gearing up for a new release of Openfire. In preparation, we have been performing maintenance releases for many Openfire plugins.

    These Openfire plugin releases have mostly non-functional changes, intended to make the plugin compatible with the upcoming 4.9.0 release of Openfire:

    The following plugins have (also) seen minor functional upgrades:

    As usual, the new versions of the plugins should become available in your Openfire server within the next few hours. Alternatively, you can download the plugins from their archive pages, which are linked to above.

    For other release announcements and news follow us on Mastodon or X



      JMP: Newsletter: eSIM Adapter Launch!

      news.movim.eu / PlanetJabber • 11 September • 2 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    eSIM Adapter

    We’ve talked before about the eSIM Adapter , but today we’re excited to announce that we have a good amount of production stock, and you can order the eSIM adapter right now . Existing JMP customers who want to pay with their account balance can also order by contacting support . Have a look at the product launch on Product Hunt as well.

    JMP’s eSIM Adapter is a device that acts exactly like a SIM card and will work in any device that accepts a SIM card (phone, tablet, hotspot, USB modem), but the credentials it offers come from eSIMs provided by you. With the adapter, you can use eSIMs from any provider in any device, regardless of whether the device or OS support eSIM. It also means you can move all your eSIMs between devices easily and conveniently. It’s the best of both worlds: the convenience of downloading eSIMs along with the flexibility of moving them between devices and using them on any device.

    For JMP Data Plan Physical SIM Owners

Our data plan has always had the option of a physical SIM. For people who just want the data plan and no other eSIMs this works fine, and we will continue to sell these legacy cards until we run out of stock. However, some of you might be wondering if you need to buy an eSIM Adapter now in order to get some of these benefits. The answer might be no! If you order just the USB reader, you can use the app to flash new eSIMs and switch profiles on your existing physical SIM. This isn’t quite as convenient as the full eSIM Adapter (you will need to pop the SIM out and put it into the USB reader even to switch profiles), but it does work for those who already have one.

    Cheogram Android

    Cheogram Android 2.15.3-3 and 2.15.3-4 have been released. These releases contain some improvements to the embedded “widget” system, funded by NLnet . You can now select from a large list of widgets right in the app. More improvements to this system are coming soon, and if you’re a web-tech developer who is interested in extending people’s chat clients, check out the docs !

    Email Gateway

    We sponsor the development of an email gateway, Cheogram SMTP, which is also getting better thanks to NLnet. The gateway now supports file attachments on emails, and will soon support sharing widgets with Delta Chat users as well!

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!


      This post is public

      blog.jmp.chat /b/september-newsletter-2024


      Erlang Solutions: How Generative AI is Transforming Healthcare

      news.movim.eu / PlanetJabber • 9 September • 11 minutes

    Generative AI (Gen AI) has emerged as a transformative technology across the healthcare industry. It has the potential to vastly transform the clinical decision-making process and ultimately improve patient health outcomes.

The adoption of generative AI is now valued at over $1.6 billion, and the global AI in healthcare market is projected to reach $45.2 billion by 2026 .

    Given such rapid growth projections, healthcare providers should recognise the opportunities presented by generative AI and explore how to incorporate it to improve patient care.

    We will take an in-depth look into Gen AI and provide insights into how it is poised to revolutionise the healthcare space.

    Understanding Generative AI in healthcare

So, traditional vs generative AI: what’s the difference?

    Before we compare the two, let’s take a moment to clearly define them:

Traditional AI: Also known as narrow or weak AI, this is a branch of artificial intelligence that executes tasks based on pre-established algorithms. These systems are usually specialised and excel at specific functions, but they have a restricted range of applications compared to other types of AI.

    Examples include: Chatbots, voice assistants, credit score systems and spam filters.

Generative AI: A form of artificial intelligence that can produce new outputs, including text, images, and other types of data. It starts by analysing large amounts of existing data; those insights are then used to create fresh content. Generative AI uses machine learning to identify patterns, make predictions, and generate new material based on the data it processes.

    Generative AI and predictive AI are quite similar, as predictive AI is also a learning-based technology that identifies and anticipates patterns.

As well as data analysis, generative AI needs direct human input, more commonly known as a "prompt", to guide the AI’s output. Prompts aren’t just text-based; they can also include video, graphics, or audio.

    Key differences between Traditional and Generative AI

    Now let’s look into the main differences between traditional and generative AI:

| Aspect | Traditional AI | Generative AI |
| --- | --- | --- |
| Applications | Most commonly used for data analysis, forecasting, and optimisation tasks where rules are well-defined. | Best for tasks such as image recognition, natural language processing (NLP), and sentiment analysis, where patterns in unstructured data are key. |
| Data handling | Best for structured data and tasks requiring precise, rule-based decision-making. | Best at managing and interpreting large volumes of unstructured data, such as images, videos, and text. |
| Rules-based vs learning-based | Operates on explicit rules that are programmed by humans. | Learns from data and adapts its behaviour according to the patterns it discovers. |
| Flexibility and adaptability | Rigid and less adaptable; requires manual updates to handle new or unexpected situations. | Highly flexible and capable of learning from diverse datasets; can adjust to new scenarios without manual intervention. |
| Creativity and autonomy | Lacks creative abilities and autonomy; confined to predefined rules and tasks. | Capable of autonomously generating content, such as images and text, demonstrating creativity beyond rule-based systems. |
| Learning capabilities | Depends on predefined rules and algorithms; requires human intervention for updates or adjustments. | Uses deep learning to continuously improve by analysing data, identifying patterns, and making predictions, making it highly adaptable. |

    While there are no clear “winners”, generative AI’s advantage lies in its broader offering. Its strength comes from using models to create unique content, without the need to rely on clear and rigid rules. Compared to its traditional counterpart, generative AI allows for greater flexibility and problem-solving.

    For healthcare industry leaders, the strategic value of Generative AI cannot be overstated. Its benefits provide the ability to foster a culture of continuous improvement, enabling organisations to innovate in a rapidly evolving industry.

    Enhancing patient care and outcomes with Generative AI

    Healthcare leaders have long understood the role of personalised care in meeting patient needs. Generative AI provides the opportunity to create personalised care plans, with the ability to leverage patient experience and achieve significantly better health outcomes.

    Here are some of generative AI’s main growing applications:

    Personalised treatment plans

    A standout advantage of generative AI in healthcare is its ability to create treatment plans that are uniquely tailored to each patient. Traditional approaches are dependent on standardised protocols, which might not fully address the specific needs of every individual.

But generative AI can analyse extensive patient data (from medical history and genetic information to lifestyle choices and social factors) to develop highly personalised treatment recommendations.

    AI can examine a patient’s genetic likelihood of developing certain conditions and suggest preventative measures or specific treatment options. By offering customised care recommendations, healthcare professionals can enhance the effectiveness of treatments and reduce the chances of adverse effects, leading to improved health outcomes.

    Predictive analytics

    Generative AI also offers significant potential in the realm of predictive analytics, helping healthcare providers foresee and prevent potential health issues before they become critical. By examining large datasets of patient information, such as vital signs, lab results, and diagnostic images, AI algorithms can detect patterns that may signal future health risks.

    For example, researchers at Mayo Clinic utilised generative AI to develop a deep-learning algorithm to forecast the likelihood of patient complications following surgery.


    It can produce customised treatment recommendations, depending on the risk.

    Enhanced diagnostic accuracy

    Diagnosis mistakes are a massive concern within healthcare. They can lead to incorrect or delayed treatment, which can lead to life-threatening consequences for patients. Generative AI provides enhanced accuracy of diagnoses by supporting healthcare professionals in their decision-making processes.

    Generative AI systems can:

    • Examine medical images, such as X-rays, MRIs, and pathology slides, with incredible precision.
    • Identify even the smallest abnormalities that might be missed by the human eye.
    • Recognise patterns that are linked to particular diseases by learning from sizable datasets.
    • Analyse symptoms and clinical data to suggest possible diagnoses.

    By improving diagnostic accuracy, generative AI helps to ensure timely and correct treatments. It also reduces the likelihood of unnecessary procedures and time delays, which ultimately ease the strain on healthcare systems.

    Virtual health assistants

    Chatbots that are powered by AI act as virtual health assistants (VHAs) for patients. They can provide a wealth of information, such as answering medical queries, providing relevant health information, medication reminders, personalised health advice and necessary support for chronic conditions- to name a few services.

DAX Express , a clinical documentation solution used by US healthcare providers, integrates GPT-4, a generative AI technology, to automate clinical documentation. The application listens to patient-physician interactions and then generates medical notes. The notes are directly uploaded into Electronic Health Records (EHR) systems, removing the need for a human review.


The use of gen AI in DAX Express significantly speeds up the documentation process, reducing it from hours to seconds, a substantial benefit of VHAs. It also improves patient care by allowing doctors more time with patients, enhancing the accuracy of medical records, and speeding up follow-up treatments.

    Operational efficiency and cost reduction

As briefly mentioned, AI’s ability to provide transformative solutions that streamline processes and reduce costs while enhancing patient care is a major driver for healthcare service providers. Various AI-driven technologies, including generative AI, are reshaping the way healthcare organisations operate, so both providers and patients benefit from these advancements.

    Wearables to enhance patient monitoring

Fitbits, Apple Watches, Oura rings and ECG machines are just a few examples of AI-powered wearables that are transforming patient care through continuous health monitoring.

According to research by Deloitte, wearable technologies are expected to reduce hospital costs by 16% by 2027, and by 2037 remote patient monitoring devices could save $200 billion. These devices are designed to catch potential health issues early, reducing the need for emergency interventions and therefore lowering the frequency of hospital visits. They also empower patients to manage their health proactively, thanks to personalised reminders aimed at fostering healthier habits (think of the activity rings on your Apple Watch).

    This approach optimises healthcare resources and contributes to reducing overall healthcare costs.

    Accelerating drug discovery and reducing costs

    In the pharmaceutical sector, generative AI is accelerating drug discovery processes for faster treatment development, especially in underserved medical areas.

    Gen AI models analyse large amounts of data to identify new disease markers, streamlining clinical trials and reducing their duration by up to two years.

    Improved accessibility and inclusivity

    AI-driven chatbots and digital assistants are breaking down barriers to healthcare access. They support multiple languages and provide user-friendly interactions.

    These tools make healthcare more inclusive, especially for patients with disabilities or those who face language barriers. By ensuring that everyone can access care without obstacles, AI enhances patient satisfaction and promotes equity in healthcare delivery.

    Optimising data management and analysis

    The healthcare industry generates vast amounts of data daily, and managing this information efficiently is crucial. Traditional AI assists in organising and analysing healthcare data, which helps professionals extract valuable insights quickly.

    Generative AI can sift through electronic health records, research articles, and clinical notes to identify patterns and trends, enabling more informed decision-making and improving the overall quality of care.

    Enhancing supply chain and equipment utilisation

    AI is also playing a critical role in optimising the healthcare supply chain and equipment usage. According to research from Accenture , 43% of all working hours across end-to-end supply chain activities could be impacted by gen AI in the near future.

    AI algorithms can provide actionable insights into the best supplies or drugs to use, considering cost, quality, and patient outcomes. Additionally, gen AI helps schedule diagnostic equipment, ensuring that costly machines like MRIs and CT scanners are utilised to their full potential. This not only reduces operational costs but also enhances the overall efficiency of healthcare delivery.

    Integrating Elixir in Generative AI in Healthcare

As healthcare organisations increasingly adopt generative AI technologies, integrating the right tools and technologies is crucial for maximising their potential. Elixir , a functional and concurrent programming language, is well-suited to underpin the delivery of AI models due to its scalability, performance, and ability to handle large-scale data processing efficiently. Healthcare organisations can leverage Elixir to ensure robust, reliable deployment of their generative AI technologies, driving innovation and enhancing capabilities.

    Let’s look into some of the key features of Elixir:

    • Concurrency: Elixir excels at managing numerous simultaneous tasks, crucial for handling the large-scale data processing needs of AI applications.
    • Fault tolerance: Built to handle failures with ease, Elixir ensures continuous operation and reliability—a key trait for healthcare systems that demand high uptime.

    • Scalability: Elixir’s ability to scale horizontally supports the growing computational demands of advanced AI models and data analytics.
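
As a small, hypothetical sketch of the first two properties (the module names are invented; this is not taken from any real healthcare system), a supervised set of tasks can run many requests concurrently while isolating an individual failure from the rest:

```elixir
# Start a task supervisor; in a real application this would live in the
# application's supervision tree rather than being started ad hoc.
{:ok, _sup} = Task.Supervisor.start_link(name: Demo.TaskSup)

# Fan out five concurrent "data requests". Request 3 deliberately crashes,
# but because async_nolink does not link the tasks to the caller, the
# failure is contained and the other four results still come back.
tasks =
  Enum.map(1..5, fn i ->
    Task.Supervisor.async_nolink(Demo.TaskSup, fn ->
      if i == 3, do: raise("simulated failure"), else: i * i
    end)
  end)

results =
  Enum.map(tasks, fn task ->
    case Task.yield(task, 5_000) || Task.shutdown(task) do
      {:ok, value} -> value
      _ -> :error
    end
  end)

IO.inspect(results)
```

Here `results` is `[1, 4, :error, 16, 25]`: one crashed handler degrades into a single `:error` value instead of taking the whole request batch down, which is the "let it crash" isolation the bullet points describe.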

    Improved data processing and management

    Elixir’s concurrency capabilities allow for efficient handling of multiple data requests. This feature is essential for real-time AI applications in healthcare, for example, diagnostic tools and patient management systems. Its scalability allows healthcare organisations to build robust data pipelines, ensuring smooth data flow and faster processing.

    Enhanced system reliability

    Elixir supports fault-tolerant architectures. It maintains system reliability for critical healthcare applications. Its ability to recover from errors without system-wide disruption means that AI-driven healthcare solutions remain operational and dependable.

    Optimised performance

    Once again, Elixir’s scalability meets the growing demands of AI workloads. This improves computational efficiency and enhances overall system performance. Its support for real-time processing also improves the responsiveness of healthcare applications, providing immediate feedback and improving operational efficiency.

Integrating Elixir and other technologies with generative AI offers huge potential to enhance healthcare applications, from improving data management to optimising system performance. For business leaders, strategic planning and collaboration are key to harnessing these technologies effectively, ensuring their organisations can capitalise on the benefits of AI while maintaining robust and reliable systems.

    The future of Generative AI

    While generative AI has the proven power to revolutionise the healthcare industry, some ethical and regulatory points still need to be considered. Areas surrounding patient privacy, security and equitable access to AI-powered equipment are still a work in progress.

    Consumer trust is also a critical factor in the future of AI. To optimise AI results, business leaders need to understand consumers’ feelings towards Generative AI.

    In a survey conducted by Wolters Kluwer, while those asked did have some concerns or fears surrounding generative AI, 45% were “starting from a position of curiosity.” Over half (52%) of those surveyed further reported that they would be fine with their healthcare providers using gen AI to support their care.

(Figure: survey results on American consumers’ feelings about generative AI in healthcare)

Trust in gen AI comes down not just to the technology itself, but also to consumers’ trust in their healthcare provider.

    As these issues are addressed, a new era of improved health outcomes and more accessible medical services is just years, if not months, away from being a reality.

    To conclude

    AI has undeniable transformative potential. Leveraging and gaining a better understanding of this technology is key to embracing innovation in the healthcare industry and most importantly, prioritising patient-centric care.

Whether your current systems are falling short or you’re actively seeking new ways to improve patient outcomes and operational efficiency, generative AI provides a powerful solution. It can revolutionise how healthcare services are delivered, from personalised treatment plans to predictive analytics and enhanced diagnostics.

    By adopting gen AI, you can position healthcare providers at the forefront of healthcare innovation. If you’d like to talk more about your healthcare needs or how Elixir can power your AI model, feel free to drop us a line .

    The post How Generative AI is Transforming Healthcare appeared first on Erlang Solutions .


      This post is public

      www.erlang-solutions.com /blog/how-generative-ai-is-transforming-healthcare/