
    •

      Ignite Realtime Blog: Openfire 4.7.3 released

      news.movim.eu / PlanetJabber • 2 August, 2022 • 1 minute

    The Ignite Realtime Community is pleased to announce the release of Openfire version 4.7.3. This version brings a number of bug fixes and other improvements and signifies our efforts to produce a stable 4.7 series of Openfire whilst work continues on the next feature release 4.8.0.

    You can find download artifacts on our website with the following sha256sum values.

    f05df8e68b8d52ef2941d80884484e62dcface5069868d3f51d2bfe17a72ea5a  openfire-4.7.3-1.noarch.rpm
    8ba71bbf0b1abb5c2cd0e18dc20ade77ca2714d58a8ad5313f64614bdc7dac44  openfire_4.7.3_all.deb
    d2b1ffa24b3d86858e4d5451a094193c839bceae0a73773fa5ae0e114d0732ff  openfire_4_7_3.dmg
    41056744e15e3a9b384b852f5e2c1d1ebb4bfd1d79bb10b54526a1fd9a7fee07  openfire_4_7_3.exe
    3ced4613c3cef61068fb89eed723e49b5845e960c6eec194961f75bc65042832  openfire_4_7_3.tar.gz
    9c8ebcb930f2373713c1eb53a4b70d9f76a8577b95cbefd41e1ae284c2ff28aa  openfire_4_7_3_x64.exe
    670db7574e6b528145c002f22f58789f949d5d5c03fbe372204816db07bd01e7  openfire_4_7_3.zip
    

    As always, thanks for your interest in Openfire and stop by the open chat if you need any help or just want to say hi! Please report any issues you have in our Community Forums.

    1 post - 1 participant

    Read full topic

    •

      Ignite Realtime Blog: REST API Openfire plugin 1.8.3 released!

      news.movim.eu / PlanetJabber • 22 July, 2022

    We recently released version 1.8.3 of the Openfire REST API plugin. This version extends the MUC search capability to include the natural name of the MUC (instead of just the name). It also updates a number of library dependencies.

    The updated plugin should already be available for download in your Openfire admin console. Alternatively, you can download the plugin directly from the plugin’s archive page.

    For other release announcements and news, follow us on Twitter.


    •

      Paul Schaub: Creating a Web-of-Trust Implementation: Certify Keys with PGPainless

      news.movim.eu / PlanetJabber • 19 July, 2022 • 2 minutes

    Currently I am working on a Web-of-Trust implementation for the OpenPGP library PGPainless. This work will be funded by the awesome NLnet foundation through NGI Assure . Check them out! NGI Assure is made possible with financial support from the European Commission’s Next Generation Internet programme .

    NLnet
    NGI Assure

    Technically, the WoT consists of a graph where the nodes are OpenPGP keys (certificates) with User-IDs and the edges are signatures. In order to create a WoT, users need to be able to sign other users’ certificates to create those edges.

    Therefore, support for signing other certificates and User-IDs was added to PGPainless as a milestone of the project. Since release 1.3.2, users have access to a straightforward API to create signatures over other users’ certificates. Let’s explore the new API together!

    There are two main categories of signatures which are important for the WoT:

    • Certifications are signatures over User-IDs on certificates. A certification is a statement “I believe that the person holding this certificate goes by the name ‘Alice <alice@pgpainless.org>’”.
    • Delegations are signatures over certificates which can be used to delegate trust decisions to the holder of the signed certificate.

    This is an example for how a user can certify a User-ID:

    PGPSecretKeyRing aliceKey = ...;
    PGPPublicKeyRing bobsCertificate = ...;
    SecretKeyRingProtector protector = ...; // unlocks Alice's secret key
    
    CertificationResult result = PGPainless.certify()
            .userIdOnCertificate("Bob Babbage <bob@pgpainless.org>",
                    bobsCertificate)
            .withKey(aliceKey, protector)
            .build();
    
    PGPSignature certification = result.getCertification();
    // or...
    PGPPublicKeyRing certifiedCertificate = result.getCertifiedCertificate();

    It is possible to choose between different certification types, depending on the “quality” of the certification. By default, the certification is created as GENERIC_CERTIFICATION , but other levels can be chosen as well.

    Furthermore, it is possible to modify the signature with additional signature subpackets, e.g. custom annotations etc.

    In order to create a delegation, e.g. in order to delegate trust to an OpenPGP CA, the user can do as follows:

    PGPSecretKeyRing aliceKey = ...;
    PGPPublicKeyRing caCertificate = ...;
    SecretKeyRingProtector protector = ...; // unlocks Alice's secret key
    
    CertificationResult result = PGPainless.certify()
            .certificate(caCertificate,
                    Trustworthiness.fullyTrusted().introducer())
            .withKey(aliceKey, protector)
            .buildWithSubpackets(new CertificationSubpackets.Callback() {
                @Override
                public void modifyHashedSubpackets(
                        CertificationSubpackets hashedSubpackets) {
                    hashedSubpackets.setRegularExpression(
                            "<[^>]+[@.]pgpainless\\.org>$");
                }
            });
    
    PGPSignature delegation = result.getCertification();
    // or...
    PGPPublicKeyRing delegatedCertificate = result.getCertifiedCertificate();

    Here, Alice decided to delegate to the CA as a fully trusted introducer, meaning Alice will trust certificates that were certified by the CA.

    Depending on the use-case, it is advisable to scope the delegation, e.g. to a specific domain by adding a Regex packet to the signature, as seen above. As a result, Alice will only trust those certificates introduced by the CA, which have a user-id matching the regex. This is optional however.
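    To make the scoping concrete, here is a tiny self-contained illustration (ScopeRegexDemo is a made-up name, not part of PGPainless) of which User-IDs the regex from the delegation example above matches, using java.util.regex:

```java
import java.util.regex.Pattern;

// Illustration of the scoping regex from the delegation example above.
// Note the doubled backslash required in a Java string literal.
class ScopeRegexDemo {
    static final Pattern SCOPE = Pattern.compile("<[^>]+[@.]pgpainless\\.org>$");

    static boolean inScope(String userId) {
        // find() is enough: the $ anchor pins the match to the end of the User-ID
        return SCOPE.matcher(userId).find();
    }

    public static void main(String[] args) {
        System.out.println(inScope("Alice <alice@pgpainless.org>"));   // true
        System.out.println(inScope("Bob <bob@mail.pgpainless.org>"));  // true (subdomain)
        System.out.println(inScope("Mallory <mallory@example.org>"));  // false
    }
}
```

    Certificates whose User-IDs end in a pgpainless.org address (including subdomains) fall inside the delegation’s scope; everything else is ignored.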

    Check out PGPainless and don’t forget to check out the awesome NLnet foundation!

    •

      This post is public

      blog.jabberhead.tk/2022/07/19/creating-a-web-of-trust-implementation-certify-keys-with-pgpainless/

    •

      Erlang Solutions: Updates to the MIM Inbox in version 5.1

      news.movim.eu / PlanetJabber • 14 July, 2022 • 6 minutes

    User interfaces in open protocols

    When a messaging client starts, it typically presents the user with:

    • an inbox
    • a summary of chats (in chronological order)
    • the number of unread messages in each conversation
    • a snippet of the most recent message in each conversation
    • information on whether a conversation is muted (and if so, for how long)
    • other information that users may find useful on their welcome screen

    MongooseIM, as a messaging server, uses XMPP as its underlying protocol. Traditional XMPP makes it possible to build the summary above by combining knowledge of the user’s roster (friend list), the archive, and possibly private storage for any extensions. However, this requires far too many round-trips between the server and the client just to build the initial UI; the number of round-trips grows the more out-of-date the client is, and answering them all also imposes a load on the server.

    MongooseIM Inbox open extension

    For this, MongooseIM introduced an extension called inbox, back in 3.0.0, that simply keeps track of the active conversations, the snippet with the last message, and a count of unread messages for each conversation. The count is, intuitively, reset when the user sends a new message or a chat marker pointing to the latest message in the chat.

    In retrospect

    In 3.5.0, we introduced a mechanism to manually set the unread count of a conversation to zero, a common operation for a user when reading the conversation is as undesired as continuing to see it pop up with unread messages. In 4.2.0 we took this really seriously: we introduced mechanisms not only to set a conversation as read, but also to set it as unread again, to mute a conversation until a certain date, and to unmute it as well.

    Until then, we had not added a function to set a conversation as unread, because it would have been easy to introduce a problematic API and it was deemed out of scope at the time: if we allowed a client to set any count it desired, broken client software could ruin the experience of well-behaved client software the same user runs separately, for example by setting a number much higher than the actual number of messages in the conversation. There is also the chance of setting a number at the same time more messages arrive, which would invalidate the count: should the requested number be treated as an absolute value, or as an increment? In the end, we settled for an atomic “set-as-one-if-it-was-zero”: if the conversation already has unread messages, the request does nothing; if the conversation was fully read, the request sets the count to exactly one. This way, a user cannot drift away from the actual number of unread messages, set incorrect numbers, or accidentally set a number at the same time more messages arrive.
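    As a rough illustration of that rule (a sketch only; MongooseIM enforces this atomically in the database, not in application code like this):

```java
// Sketch of the "set-as-one-if-it-was-zero" semantics described above:
// a fully read conversation (count 0) becomes exactly 1; any conversation
// that already has unread messages is left untouched, so clients can never
// drift away from the true unread count.
final class UnreadCounter {
    static int setUnread(int currentCount) {
        return currentCount == 0 ? 1 : currentCount;
    }

    public static void main(String[] args) {
        System.out.println(setUnread(0)); // 1
        System.out.println(setUnread(7)); // 7
    }
}
```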

    But this is not all: what if the user doesn’t want to be notified about a certain conversation? For example, they might say “mute this chat for an hour”, and all their devices should agree that the chat shouldn’t make the phone buzz, and perhaps that its icon should be displayed differently. 4.2.0 introduces this as well! A client can provide the number of seconds it wants a conversation muted for, and MongooseIM will tell all the user’s other devices, persist this information in the inbox summary, and propagate it to be picked up by other services, such as a push-notification service.

    Classifications

    One more functionality (apart from a myriad of performance optimisations!) added in the latest MongooseIM 5.1 is the concept of boxes. As in an email service, a user can classify chats into boxes like “favourites”, “travel”, “business”, or “home”. You can set a box for a chat and, when reconnecting, focus on that box specifically in MongooseIM and build a nice UI accordingly, including a “pinned” box of conversations you always want at the top!

    Configuration

    Using this functionality is as easy as enabling the inbox module in your MongooseIM configuration; all the defaults are set for you. Out of the box, the inbox has the “inbox”, “archive” and “bin” boxes enabled, and the trash bin is cleaned every hour for entries older than 30 days. Group chats are enabled, chats are considered read when a displayed marker is sent, and the synchronous backend is chosen. To go further, try enabling the asynchronous backend; see below for the scalability details.

    It always comes down to scalability

    Another important feature is the new asynchronous backend for inbox. Until now, inbox operations were synchronous within the message processing pipeline: message processing was blocked by the database operation until the database successfully returned. This becomes especially bad for big group chats, where a single message updates the inbox of every room participant. It means that MongooseIM’s message processing throughput would only be as scalable as the database, which is bad news.

    Releasing the Database

    In order to untie MongooseIM’s scalability from database performance, we first needed to make inbox processing asynchronous: the message processing pipeline only “submits a task” to update the appropriate inbox. But that still leaves the fact that inbox generates too many database operations, so we tried to do fewer of them. The trick is that inbox is (mostly) an update-only operation. When Alice receives a message from Bob, we update Alice’s inbox snippet with the new message and an unread count incremented by one; when Alice receives another message, we update the snippet again with the count incremented by one more. We could just as well have applied a single update with the count incremented by two!

    % If we don't have any request pending, it means that it is the first task submitted,
    % so aggregation is not needed.
    handle_task(_, Value, #state{async_request = no_request_pending} = State) ->
        State#state{async_request = make_async_request(Value, State)};
    handle_task(Key, NewValue, #state{aggregate_callback = Aggregator,
                                      flush_elems = Acc,
                                      flush_queue = Queue,
                                      flush_extra = Extra} = State) ->
        case Acc of
            #{Key := OldValue} ->
                case Aggregator(OldValue, NewValue, Extra) of
                    {ok, FinalValue} ->
                        State#state{flush_elems = Acc#{Key := FinalValue}};
                    {error, Reason} ->
                        ?LOG_ERROR(log_fields(State, #{what => aggregation_failed, reason => Reason})),
                        State
                end;
            _ ->
                % The queue is used to ensure the order in which elements are flushed,
                % so that first requests are first flushed.
                State#state{flush_elems = Acc#{Key => NewValue},
                            flush_queue = queue:in(Key, Queue)}
        end.
    

    The new asynchronous inbox backend does just this: it submits an update to the database immediately, but instead of waiting for the database’s response, it aggregates all incoming requests in the meantime. This means that on an unloaded server, inbox updates are still pretty much immediate, and the backend will saturate the database throughput as fast as messages arrive; but once the database is saturated, it will not be overloaded!

    aggregate({set_inbox_incr_unread, _, _, _, _, Incrs2, _},
              {set_inbox_incr_unread, Entry, Content, MsgId, Timestamp, Incrs1, Box}) ->
        {set_inbox_incr_unread, Entry, Content, MsgId, Timestamp, Incrs1 + Incrs2, Box};
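    The aggregate clause above keeps the newest snippet and simply sums the pending unread-count increments. The same idea in a minimal Java sketch (hypothetical code for illustration, not part of MongooseIM):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the aggregation idea: while a database request is
// in flight, further updates for the same conversation are merged by summing
// their unread-count increments, so one write can replace many.
final class InboxAggregator {
    // pending unread-count increments, keyed by conversation id
    private final Map<String, Integer> pending = new HashMap<>();

    void submit(String conversation, int increment) {
        pending.merge(conversation, increment, Integer::sum);
    }

    int pendingIncrement(String conversation) {
        return pending.getOrDefault(conversation, 0);
    }

    public static void main(String[] args) {
        InboxAggregator agg = new InboxAggregator();
        agg.submit("alice-bob", 1);
        agg.submit("alice-bob", 1); // merged with the pending update, not queued
        System.out.println(agg.pendingIncrement("alice-bob")); // 2
    }
}
```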

    To enable this backend, all you need to do is to select “rdbms_async” as the inbox backend in your mongooseim.toml configuration file and you’re good to go!
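    For illustration, the relevant fragment of mongooseim.toml might look like the following (a sketch; verify the exact option names against the MongooseIM 5.1 documentation):

```toml
[modules.mod_inbox]
  backend = "rdbms_async"
```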

    Note some comparisons:

    Metric                   rdbms (12k users,   rdbms_async (30k users,  No inbox (35k users,
                             rooms of 5 users)   rooms of 10 users)       rooms of 10 users)
    Delivery time, .99       >40s                50ms                     50ms
    Stanzas sent per minute  140k                180k                     180k
    CPU usage                80%                 99%                      99%
    Memory usage             800MB               1200MB                   950MB

    Note how with the traditional backend the CPU is not even saturated and yet messages are terribly stalled, whereas the new asynchronous backend saturates the CPU with more than twice the number of users and still gets pretty fast delivery times. Even when compared to disabling inbox altogether, the regular backend’s penalty shoots above 70% of the throughput, while the new backend costs only around 12% (35k users lowers to 30k).

    The post Updates to the MIM Inbox in version 5.1 appeared first on Erlang Solutions .

    •

      This post is public

      www.erlang-solutions.com/blog/updates-to-the-mim-inbox-in-version-5-1/

    •

      Ignite Realtime Blog: Openfire 4.7.2 released

      news.movim.eu / PlanetJabber • 13 July, 2022 • 1 minute

    The Ignite Realtime Community is pleased to announce the release of Openfire version 4.7.2. This version fixes a number of bugs and signifies our efforts to produce a stable 4.7 series of Openfire whilst work continues on the next feature release 4.8.0.

    A major highlight of this release is the fixing of BOSH bugs found during load testing.

    You can find download artifacts on our website with the following sha256sum values.

    998e469fad00452004c9f9d6c309c6749fa48d53af48c1501407f0f77870edca  openfire-4.7.2-1.noarch.rpm
    72ee5a31685f6010fc14a992d87d2d1f9c49b79b4e718c239520911ec7167340  openfire_4.7.2_all.deb
    674bab49908ff4de14f54dab1fdf7a6a29904b32f00669289b793d8dc19189c8  openfire_4_7_2.dmg
    66571c760776d02a28d00ed3c0ca085c327a1874f9d75f8b2037b347ec99decb  openfire_4_7_2.exe
    707ede6cc6d8d8bcce257c7f30d8a79fc7ed52576fb217a8220b800f2d52be64  openfire_4_7_2.tar.gz
    8eeaf4f07948d10b6b77b3e1e8b8d1f65a6ad3ffd85e5353a2d8055503d724fc  openfire_4_7_2_x64.exe
    ad07c991a3812a45250b303e95129a37242de01a591cce438057437d4f5a30d9  openfire_4_7_2.zip
    

    For those of you curious, here’s an accounting of artifact downloads from the previous release.

    Openfire Download Stats for Release: 4.7.1
               openfire-4.7.1-1.noarch.rpm   3690
                    openfire_4.7.1_all.deb   8102
                        openfire_4_7_1.dmg   1018
                        openfire_4_7_1.exe   6338
                     openfire_4_7_1.tar.gz   4175
                        openfire_4_7_1.zip   2029
                    openfire_4_7_1_x64.exe  18466
                                           ======
                                            43818
    
    

    As always, thanks for your interest in Openfire and stop by the open chat if you need any help or just want to say hi! Please report any issues you have in our Community Forums.


    •

      Ignite Realtime Blog: Push Notification Openfire plugin 0.9.1 released

      news.movim.eu / PlanetJabber • 6 July, 2022

    The Ignite Realtime community is happy to announce the immediate availability of a bugfix release for the Push Notification plugin for Openfire!

    This plugin adds support for sending push notifications to client software, as described in XEP-0357: “Push Notifications” .

    This update includes only one bugfix, but one that prevented functionality for many people. You are highly encouraged to update!

    Your instance of Openfire should automatically display the availability of the plugin in the next few hours. Alternatively, you can download the new release of the plugin from the Push Notification plugin archive page.

    For other release announcements and news, follow us on Twitter.


    •

      Ignite Realtime Blog: Smack 4.4.6 released

      news.movim.eu / PlanetJabber • 29 June, 2022

    We are happy to announce the release of Smack 4.4.6. For a high-level overview of what’s changed in Smack 4.4.6, check out Smack’s changelog.

    This release mostly consists of bug fixes, many of them reported by the Jitsi folks. I would like to especially thank Damian Minkov for detailed problem descriptions, for the fruitful collaboration, and for various joint bug hunts which turned out to be very successful.

    As always, all Smack releases are available via Maven Central .

    We would like to use this occasion to point out that Smack now ships with a NOTICE file. Please note that this adds some requirements when using Smack, as per the Apache License 2.0. The content of Smack’s NOTICE file can conveniently be retrieved using Smack.getNoticeStream().


    •

      Erlang Solutions: Gaining a Competitive Advantage in Fintech From Your Choice of Tech Stack

      news.movim.eu / PlanetJabber • 27 June, 2022 • 11 minutes

    In our recent white paper ‘Technology Trends in Financial Services 2022’, we explained the importance of software engineering for gaining a competitive advantage in the industry. Since the start of the year, a lot has occurred on a macro level, strengthening our belief that modern financial services must be based on a solid technical foundation to deliver the user experiences and business reliability needed for commercial success.

    We see the role of the underlying technology (away from the customer-facing products) as being critical in enabling success in fintech in two main ways:

    • Building customer trust – by guaranteeing operational resilience and optimal availability of fintech systems
    • Exceptional user experience – rapid development of features facilitated by a tech stack that just works

    These tenets, if you like, are core to the Erlang ecosystem, including the Elixir programming language and the powerful BEAM VM (or virtual machine). In this follow-up article, we will dive deeper into how your choice of tech stack impacts your business outcomes and why Erlang and Elixir will often be the right tool for the job in fintech. We also share some guiding principles that underpin our expert engineering team’s approach to projects in the financial services space.

    What are the desirable characteristics of a fintech system?

    Let’s first look at some of the non-negotiable must-haves of a tech stack if you are involved in a fintech development project.

    A seamless customer experience

    Many projects fail in the fintech space due to focusing only on an application’s user interface. This short-sighted view doesn’t consider the knock-on effects of predicted (user growth) or unpredicted changes (like the pandemic lockdowns). For instance, your slick customer interface loses a lot of its shine when it’s connected to a legacy backend that is sluggish in responding to requests.

    Key point: When it comes to modern financial services, customers expect real-time, seamless and intelligent services, not clunky experiences. To deliver this, you need predictable behaviour under heavy loads and during usage spikes, plus resilience and fault-tolerance, without the associated costs sky-rocketing.

    Technology that enables business agility

    Financial services is a fast-moving industry. To make the most of emerging opportunities, whether as an incumbent or fintech-led startup, you need to be agile from a business perspective, which is bound to tech agility. With the adoption of open-source technology on the rise in FS, we’re starting to see the benefits of moving away from proprietary tech, the risk of vendor lock-in, and the obstacles that can create. When you can dedicate more resources to shipping code without being constrained and forced into unfavourable trade-offs, you’re better positioned to cash in on opportunities before your competitors.

    Key point: You want a system that is easy to maintain and understand; this will help with onboarding new developers to the team and focusing resources where they can make the most impact.

    Tech stacks that use fewer resources

    Designing for sustainability is now a key consideration for any business, especially in an industry under the microscope like financial services. The software industry is responsible for a high level of carbon usage, and shareholders and investors are now weighing this up when making investment decisions. As funding in the tech space tightens, this is something that business leaders need to be aware of as a part of their tech decision-making strategy.

    CTOs and architects can help by making better technology choices. For instance, using a BEAM-based language can reduce physical infrastructure needs to as little as one-tenth of the servers. That leads to significant cost reductions and considerable savings in terms of carbon footprint.

    Key point: Minimising costs is important, but sustainability too is now part of the consideration process.

    System Reliability and availability

    A robust operational resiliency strategy strengthens your business case in financial services. What we’ve learnt from the stress placed on systems by spikes in online commerce since the pandemic is that using technologies that are proven and built to deal with the unpredictability of the modern world is critical.

    One thing sure to damage any FS player, regardless of size, is a high-profile system outage. Downtime can cause severe reputational damage and attract hefty fines from regulators.

    According to Gartner, the average cost of IT downtime is around $5,600 per minute, which works out to about $336,000 per hour. So avoiding downtime in your fintech production system is mission-critical.

    Key point: Your fintech system must be fault-tolerant, able to handle spikes in traffic and always be available and scalable.

    How Erlang/Elixir is meeting these challenges

    Erlang, Elixir and the BEAM VM overview

    Erlang is a programming language designed to build massively scalable, soft real-time systems that require high availability. Elixir is a programming language that runs on the BEAM VM – the same virtual machine as Erlang – and can be adopted throughout the tech stack. Many of the world’s largest banking, e-commerce and fintech companies depend on these technologies to power their tech stacks, such as Klarna, SumUp and SolarisBank.

    Erlang and Elixir have their roots in telecoms, where phone systems, by law, had to work and penalties were huge if they failed, so vendors designed in resilience from day one rather than as an afterthought. As programming languages originally designed for telecoms, they are the right tools for the job in many fintech use cases where fault-tolerance and reliability are likewise essential.

    Of course, many languages are being used successfully across financial services, but the big differentiator with Erlang and Elixir is that high availability, fault-tolerance and scalability are built in, out of the box. This makes developers’ lives easier and allows them the freedom to deliver innovative features to end-users. For fast-moving verticals such as fintech, the robust libraries and reduced lines of code compared to C, C++ or Java mean that you are set up for rapid prototyping and getting to market before the opposition.

    Key attributes of Erlang/Elixir for fintech development:

    • Can handle a huge number of concurrent activities
    • Ideal for when actions must be performed at a certain point in time or within a specific time (soft real-time)
    • Benefits of system distribution
    • Ideal for massive software systems
    • Software maintenance without stopping the system
    • Built-in fault tolerance and reliability

    If you’re in the early stages of your fintech project, you may not need these capabilities right away, but trying to retrofit them later will cost you valuable time and resources. Our expert team has helped many teams adopt BEAM-based technology at various stages of their business lifecycle. Talk to us about how we can help you.

    Let’s start a discussion about how we can help with your fintech project >

    Now let’s look at how Erlang/Elixir and the BEAM VM deliver against the desirable fintech characteristics outlined in the previous section.

    System availability and resilience during unpredicted events

    Functional programming helps developers write reliable software. Using the BEAM VM means you can achieve reliability of up to ‘nine nines’ (99.9999999%), which amounts to almost zero downtime, obviously very desirable in any fintech system.

    This comes from Erlang/Elixir systems having ‘no single point of failure’ that risks bringing down your entire system. The ‘actor model’ (where parallel processes communicate with each other via messages) crucially does not have shared memory, so errors that inevitably will occur are localised and will not impact the rest of your system.

    Fault tolerance is another crucial aspect of Erlang/Elixir systems, making them a good option for your fintech project. ‘Supervisors’ are programmed with instructions on how to restart parts of a system when things do fail. This involves going back to a known initial state that is guaranteed to work.

    The end result is that using Erlang/Elixir means your system will achieve unrivalled availability with far less effort and fewer resources than with other programming languages.

    System scalability for when demand grows

    Along with unmatched system uptime and availability, Erlang and Elixir offer scalability that makes your system able to handle changes in demand and sudden spikes in traffic. This is possible without many of the difficulties of trying to scale with other programming languages.

    With Erlang/Elixir, your code allows thousands of ‘processes’ to run concurrently on the same machine – in other words, you are making the most of each machine’s resources (vertical scaling).

    These processes are distributed, meaning they can communicate with processes on other machines within the network enabling developers to coordinate work across multiple nodes (horizontal scaling).

    In the fintech startup space especially, having confidence that if you achieve dramatic levels of fast growth, your tech system will stand up to demand and not require a costly rewrite of the codebase can be a critical factor in maintaining momentum.

    Concurrency model for high volume transactional systems

    Concurrent Programming makes it appear like multiple sequences of commands are being executed in parallel. Erlang and Elixir are ideal for workloads that involve a considerable amount of concurrency, such as with transaction-intensive segments of financial services like payments and trading.

    The functional nature of Erlang/Elixir, plus the lightweight nature of how the BEAM executes processes, makes writing concurrent programs far more straightforward than with other languages.

    If your fintech project expects to need to process massive amounts of transactional data from different sources, then Erlang/Elixir could be the most frictionless way for your team to go about building it.

    Developer friendly

    There are many reasons why developers enjoy working with Erlang/Elixir – in fact, Elixir has just been voted the second most loved programming language in the 2022 Stack Overflow Developer Survey.

    OTP middleware (the secret sauce behind Erlang/Elixir) abstracts the technical difficulty of concurrency and handling system failures. It allows your tech team the space to focus on business logic instead of time-consuming computational plumbing and facilitates fast prototyping and development of new features.

    Speed to market is a crucial differentiator in the competitive fintech landscape, with Erlang/Elixir you can release new features in time to attract new customers and retain existing ones better than with many other languages.

    Because using Erlang/Elixir for your project means less code and a lightweight process execution model that demands fewer CPU resources, you will need fewer servers, reducing your energy consumption and infrastructure costs. An independent study at Heriot-Watt University comparing an application written in Erlang with one written in C++ found that the Erlang codebase required 4-20 times less code.

    During the beginning of the pandemic in 2020, Turn.io used Elixir to launch the world’s first WhatsApp-based COVID-19 response in just 5 days . The service was designed, deployed, stress-tested, and launched. It scaled to 450K unique users on the first day and has since grown to serve over 7.5 million people.

    Key takeaways about using Erlang and Elixir in fintech

    We can summarise that what is needed for success in fintech is a real-time, secure, reliable and scalable system that is easy to maintain and cost-efficient. Furthermore, you need a stack that lets your developers ship code and release new products and features quickly. The Erlang Ecosystem (Erlang, Elixir and the BEAM VM) sets a solid foundation for your fintech startup or project to be successful.

    With a reliable, easy-to-maintain code base, your most valuable resource (your tech talent) will be freed up to concentrate on delivering value and competitive advantage that delivers to your bottom line.

    With the right financial products to market and an Erlang/Elixir backend, you can be confident in delivering smooth and fast end-user experiences that are always available and seamless. This is crucial if you are looking to digitally onboard new customers, operate payment services, handle vast numbers of transactions or build trust in emerging areas such as cryptocurrency or blockchain.

    The benefits of using Erlang/Elixir for your fintech project

    2x FASTER DEVELOPMENT – of new services, thanks to the language’s design and the OTP middleware: a set of frameworks, principles, and design patterns that guide and support the structure, design, implementation, and deployment of your system.

    10x BETTER RELIABILITY – services that are down less than 5 minutes per year, thanks to the fault-tolerance and resilience mechanisms built into the language and the OTP middleware, e.g. software upgrades and generic error handling, as seen by Ericsson in their mobile network packet routers.

    10x MORE SECURE – solutions that are hard to hack or crash through denial-of-service attacks, thanks to the language’s construction: no pointers, messages instead of shared memory, and immutable state rather than variable assignment, as seen in the reduced number of vulnerabilities compared to other languages.

    10x MORE USERS – handling of the potential millions of transactions per second within milliseconds thanks to a battle-tested VM.

    10x LESS COST AND ENERGY CONSUMPTION – thanks to fewer servers being needed and the fact that the BEAM is open source. By swapping from Ruby to Elixir, Bleacher Report managed to reduce its hardware requirements from 150 servers to just 8.


    We will be an events partner with Fintech Week London again this year, hosting a special panel and networking evening at CodeNode in Central London on 12 July from 6 pm. Register your interest in attending via the link here.

    The post Gaining a Competitive Advantage in Fintech From Your Choice of Tech Stack appeared first on Erlang Solutions .

    •

      This post is public

      www.erlang-solutions.com/blog/erlang-elixir-for-fintech-stack-development/

    •

      Ignite Realtime Blog: REST API Openfire plugin 1.8.1 released!

      news.movim.eu / PlanetJabber • 23 June, 2022

    Earlier today, version 1.8.1 of the Openfire REST API plugin was released. This version removes the need to authenticate for status endpoints and adds new endpoints for bulk modification of affiliations on MUC rooms, as well as a healthy number of other bugfixes.

    The updated plugin should become available for download in your Openfire admin console over the next few hours. Alternatively, you can download the plugin directly from the plugin’s archive page.

    For other release announcements and news, follow us on Twitter.
