
      Erlang Solutions: 10 Unusual Blockchain Use Cases

      news.movim.eu / PlanetJabber · Thursday, 6 June - 10:55 · 10 minutes

    When Blockchain technology was first introduced with Bitcoin in 2009, no one could have foreseen its impact on the world or the unusual cases of blockchain that have emerged. Fast forward to now and Blockchain has become popular for its ability to ensure data integrity in transactions and smart contracts.

    Thanks to its cost-effectiveness, transparency, speed and security, it has found its way into many industries, with blockchain spending expected to reach $19 billion this year.

    In this post, we will be looking into 10 use cases that have caught our attention, in industries benefiting from Blockchain in unusual and impressive ways.

    Ujo Music - Transforming payment for artists

    Let’s start exploring the first unusual use case for blockchain with Ujo Music.

    Ujo Music started with a mission to get artists paid fairly for their music, addressing the issues of inadequate royalties from streaming and complicated copyright laws.

    To solve this, they turned to blockchain technology, specifically Ethereum. Using it, Ujo Music was able to create a community in which music owners automatically receive royalty payments. Smart contracts and cryptocurrencies also let artists retain their rights. This approach gave artists instant access to their earnings, without the fees or wait times associated with more traditional systems.
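The royalty mechanics described above can be sketched in a few lines. This is a hedged illustration, not Ujo Music's actual contract code: the share percentages, account names, and the `split_royalties` helper are all hypothetical, and a real Ethereum contract would implement the same integer arithmetic in Solidity.

```python
# Hypothetical sketch of a smart-contract-style royalty split.
# Shares are held in basis points (1/100 of a percent) so the maths is
# exact integer arithmetic, as it would be on-chain.

def split_royalties(payment_wei, shares_bp):
    """Divide a payment among rights holders according to their shares.

    shares_bp: mapping of account -> share in basis points (sums to 10000).
    Returns a mapping of account -> amount owed, preserving every unit.
    """
    payouts = {acct: payment_wei * bp // 10_000 for acct, bp in shares_bp.items()}
    # Integer division can leave a few units unassigned; credit them to
    # the first rights holder so no value is lost.
    remainder = payment_wei - sum(payouts.values())
    first = next(iter(payouts))
    payouts[first] += remainder
    return payouts

shares_bp = {"artist": 7000, "producer": 2000, "label": 1000}
payouts = split_royalties(1_000_000, shares_bp)  # e.g. a payment of 1,000,000 wei
```

Because the split is pure arithmetic over the incoming payment, an on-chain version can pay everyone the moment funds arrive, which is what removes the wait times of traditional royalty collection.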

    As previously mentioned, blockchain also allows for transparency and security, which is key in preventing theft and copyright infringement. Ujo Music is transforming the payment landscape for artists in the digital age, allowing for better management of, and rights over, their music.

    CryptoKitties - Buying virtual cats and gaming

    For anyone looking to collect and breed digital cats in 2017, Cryptokitties was the place to be. While the idea of a cartoon crypto animation seems incredibly niche, the initial Cryptokitties craze is one that cannot be denied in the blockchain space.

    Upon its launch, it immediately went viral, with the alluring tagline “The world’s first Ethereum game.” According to nonfungible.com, the NFT felines saw sales volume spike from just 1,500 on launch day to 52,000 by the end of 2017.

    CryptoKitties was among the first projects to harness smart contracts by attaching code to data constructs called tokens on the Ethereum blockchain. Each chunk of the game’s code (which it refers to as a “gene”) describes the attributes of a digital cat. Players buy, collect, sell, and even breed new felines.


    Source: Dapper Labs

    Just like individual Ethereum tokens and bitcoins, the cat’s code also ensures that the token representing each cat is unique, which is where the nonfungible token, or NFT, comes in. A fungible good is, by definition, one that can be replaced by an identical item—one bitcoin is as good as any other bitcoin. An NFT, by contrast, has a unique code that applies to no other NFT.
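The fungible/non-fungible distinction can be made concrete with a toy sketch. This is not the CryptoKitties contract (which follows what later became the ERC-721 standard); the class, token IDs and owner names below are invented for illustration:

```python
# A fungible token only tracks interchangeable balances; an NFT is a
# unique token ID with exactly one owner, and minting the same ID
# twice is refused.

class NFTRegistry:
    def __init__(self):
        self.owner_of = {}  # token_id -> owner; each ID exists at most once

    def mint(self, token_id, owner):
        if token_id in self.owner_of:
            raise ValueError("token ID already exists")
        self.owner_of[token_id] = owner

    def transfer(self, src, dst, token_id):
        if self.owner_of.get(token_id) != src:
            raise ValueError("not the owner")
        self.owner_of[token_id] = dst

cats = NFTRegistry()
cats.mint("genesis-cat", "alice")      # a one-of-a-kind digital cat
cats.transfer("alice", "bob", "genesis-cat")
```

A fungible token, by contrast, would only keep per-account amounts: any unit can settle any debt, which is exactly why one bitcoin is as good as any other.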

    More broadly, blockchain can be used in gaming to create both digital and analogue experiences. With CryptoKitties, players could invest in, build, and extend their gaming experience.

    ParagonCoin for Cannabis

    Our next unusual blockchain case stems from legal cannabis.

    Legal cannabis is a booming business, expected to be worth $1.2 billion by the end of 2024. With this amount of money, a cashless solution offers business owners further security. Transactions are easily trackable and offer transparency and accountability that traditional banking doesn’t.


    ParagonCoin business roadmap

    Transparency in the legal cannabis space is key for businesses looking to challenge its negative image. ParagonCoin, a cryptocurrency startup, had a unique value proposition for its entire ecosystem, making it clear that its business would not be used for any illegal activity.

    Though now defunct, ParagonCoin was a pioneer in its field, utilising blockchain for B2B payments. At the time of its launch, paying for services was only possible with cash, as cannabis-related businesses were not allowed to officially hold a bank account.

    This created a dire knock-on effect, making it difficult for businesses to pay for solicitors, staff and other operational costs. The only ways to get an operation running would have been unsafe, inconvenient and possibly illegal. ParagonCoin remedied this by asking businesses to adopt a pseudo-random generator (PRG) payment system to address these immediate issues.

    Here are some other ways ParagonCoin adopted blockchain technology in their cannabis industry:

    • Regulatory compliance – Simplifying compliance issues on a local and federal level.
    • Secure transactions – Utilising smart contracts to automate and enforce agreement terms, reducing the risk of fraud.
    • Decentralised marketplace – Creating a platform for securely listing and reviewing products and services, while fostering a community of engaged users, businesses and regulators.
    • Innovative business models – The facilitating of crowdfunding to transparently raise business capital.

    These cases highlight blockchain technology’s ability to enhance transparency, compliance, and security within even the most unexpected industries.

    Siemens partnership - Sharing solar power

    Siemens has partnered with startup LO3 Energy on an app called Brooklyn Microgrid. It allows residents of Brooklyn who own solar panels to transfer their energy to others who lack this capability. Consumers and solar panel owners are in control of the entire transaction.

    Residents with solar panels sell excess energy back to their neighbours, in a peer-to-peer transaction. If you’d like to learn more about the importance of peer-to-peer (p2p) networks, you can check out our post about the Principles of Blockchain .

    Microgrids reduce the amount of energy lost during transmission, providing a more efficient alternative: approximately 5% of electricity generated in the US is lost in transit. The Brooklyn Microgrid not only minimises these losses but also offers economic benefits to those who have installed solar panels, as well as the local community.
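At its core, a microgrid's bookkeeping is just matching surplus generation to neighbouring demand. The sketch below is a simplification with invented household names, not LO3 Energy's actual matching engine:

```python
# Greedily match surplus kWh from solar producers to neighbours'
# demand; the resulting (seller, buyer, kwh) triples are the trades a
# peer-to-peer microgrid ledger would record.

def match_trades(sellers, buyers):
    trades = []
    remaining = [[name, kwh] for name, kwh in buyers]  # mutable demand
    for seller, surplus in sellers:
        for buyer in remaining:
            if surplus == 0:
                break
            take = min(surplus, buyer[1])
            if take > 0:
                trades.append((seller, buyer[0], take))
                buyer[1] -= take
                surplus -= take
    return trades

trades = match_trades([("roof-a", 5), ("roof-b", 3)],
                      [("apt-1", 4), ("apt-2", 4)])
```

Each recorded trade is a peer-to-peer transaction between neighbours, with no central utility in the middle.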

    Björn Borg and same-sex marriage

    Same-sex marriage is still banned in a majority of countries across the world. With that in mind, the Swedish sportswear brand Björn Borg discovered an ingenious way for couples to wed on the blockchain, regardless of sexual orientation. But how?

    Blockchain is stereotypically linked with money, but remove those connotations and you have an effective ledger that can record events as well as transactions.

    Björn Borg has put this loophole to extremely good use by forming the digital platform Marriage Unblocked, where you can propose, marry and exchange vows all on the blockchain. What’s more, the records can be kept anonymous, offering security for those in potential danger, and you get the flexibility of smart contracts.

    Of course, you can request a certificate to display proudly too!

    Whilst this carries no legal standing, everything is produced and stored online. If religion or government isn’t a primary concern of yours, where’s the harm in a blockchain marriage?

    Tangle - Simplifying the Internet of Things (IoT)

    Blockchain offers ledgers that can record the huge amounts of data produced by IoT systems. Once again the upside is the level of transparency it offers that simply cannot be found in other services.

    The Internet of Things is one of the most exciting elements to come out of technology. These connected ecosystems can record and share various interactions. Blockchain lends itself perfectly to this, as it can transfer data and provide identification for both public and private sector use cases. Here are some examples:

    Public sector - infrastructure management, taxes (and other municipal services).

    Private sector - logistical upgrades, warehouse tracking, greater efficiency, and enhanced data capabilities.

    IOTA’s Tangle is a distributed ledger built specifically for IoT, handling machine-to-machine micropayments. It has reengineered distributed ledger technology (DLT), enabling the secure exchange of both value and data.

    Tangle is the data structure behind micro-transaction crypto tokens that are purposely optimised and developed for IoT. It differs from other blockchains and cryptocurrencies by having a much lighter, more efficient way to deal with tens of billions of devices.

    It includes a decentralised peer-to-peer network that relies on a Directed Acyclic Graph (DAG), which creates a distributed ledger rather than “blocks”. There are no transaction fees, no mining, and no external consensus process. This also allows data to be transferred securely between digital devices.
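The structural difference from a block-based chain can be shown in a few lines. This is a toy ledger, not IOTA's implementation: there are no signatures or proof-of-work, and the payloads are invented. It captures just the DAG idea, where each new transaction approves earlier unapproved transactions ("tips") instead of waiting for a miner to bundle a block:

```python
import hashlib

class DagLedger:
    def __init__(self):
        self.approves = {"genesis": []}   # tx_id -> list of approved parents

    def tips(self):
        """Transactions not yet approved by anyone else."""
        approved = {p for parents in self.approves.values() for p in parents}
        return [tx for tx in self.approves if tx not in approved]

    def add(self, payload):
        parents = self.tips()[:2]         # approve up to two earlier txs
        tx_id = hashlib.sha256((payload + "".join(parents)).encode()).hexdigest()
        self.approves[tx_id] = parents
        return tx_id

ledger = DagLedger()
a = ledger.add("pay sensor 1")   # approves the genesis transaction
b = ledger.add("pay sensor 2")   # approves transaction a
```

Because approval work is done by the participants themselves as they submit transactions, there is no separate miner to pay, which is why fee-free micropayments between devices become feasible.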

    Walmart and IBM - Improving supply chains

    Blockchain’s real-time tracking is essential for any company with a significant supply chain.

    Walmart partnered with IBM to track food from the supplier to the shop shelf. When a food-borne disease outbreak occurs, it can take weeks to find the source. Better traceability through blockchain saves time and lives, allowing companies to act fast and protect affected farms.

    Walmart chose blockchain technology as the best option for a decentralised food supply ecosystem. With IBM, it created a food traceability system based on Hyperledger Fabric.

    The traceability system, initially built for two products, worked, and Walmart can now trace the origin of over 25 products from five of its different suppliers using this system.
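The traceability idea reduces to an append-only event log keyed by product batch. This is a hedged sketch with invented batch IDs and actors; the real system runs on Hyperledger Fabric, not Python:

```python
# Each handoff in the supply chain is an append-only event, so the
# origin of an item on the shelf can be looked up in seconds rather
# than weeks.

events = []   # append-only log, standing in for the shared ledger

def record(batch, actor, step):
    events.append({"batch": batch, "actor": actor, "step": step})

def trace(batch):
    """Full journey of a batch, oldest event first."""
    return [e for e in events if e["batch"] == batch]

record("mango-042", "Farm A", "harvested")
record("mango-042", "Packer B", "packed")
record("mango-042", "Store C", "shelved")
origin = trace("mango-042")[0]["actor"]   # the farm the batch came from
```

On a real blockchain the log is replicated and tamper-evident, so no single party can quietly rewrite a batch's history after an outbreak.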

    Agora for elections and voter fraud

    Voting on a blockchain offers full transparency and reduces the chance of voter fraud. A prime example is Sierra Leone, which in 2018 became the first country to run a blockchain-based election, with the technology used to anonymously store 70% of the votes in an immutable ledger.
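The immutability such a voting ledger relies on can be illustrated with a simple hash chain. This is a generic sketch, not Agora's implementation; note it records choices only, with no voter identity attached:

```python
import hashlib

def append_vote(chain, choice):
    """Append a vote; each entry chains the hash of the previous one."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry_hash = hashlib.sha256((prev_hash + choice).encode()).hexdigest()
    chain.append({"choice": choice, "prev": prev_hash, "hash": entry_hash})

def verify(chain):
    """Recompute every hash; any altered vote breaks all later links."""
    prev = "0" * 64
    for e in chain:
        expected = hashlib.sha256((prev + e["choice"]).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

chain = []
for choice in ["A", "B", "A"]:
    append_vote(chain, choice)
```

Because each hash folds in the one before it, changing any recorded vote invalidates every subsequent entry, which is what lets independent observers audit the published results.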


    Sierra Leone results on the Agora blockchain

    These results were placed on Agora’s blockchain, and by allowing anyone to view them, the government aimed to build trust with its citizens. The platform reduced the controversy and costs incurred when using paper ballots.

    The result is a trustworthy and legitimate outcome that also limits hearsay from opposition voters and parties, especially in Sierra Leone, which has faced heavy corruption claims in the past.

    MedRec and Dentacoin - Healthcare

    With the emphasis on keeping many records in a secure manner, blockchain lends itself nicely to medical records and healthcare.

    MedRec is one business using blockchain to keep medical records secure, using a decentralised CMS and smart contracts. This also allows transparency of data and the ability to make secure payments connected to your health. Blockchain can be used to track dental care in much the same way.

    One example is Dentacoin, which is based on an ERC20 token. It can be used for dental records, to ensure dental tools and materials are sourced appropriately and used on the correct patients, to build networks that transfer information quickly, and as a compliance tool.

    Everledger - Luxury items and art

    Blockchain’s ability to track data and transactions lends itself nicely to the world of luxury items.

    Everledger.io is a blockchain-based platform that enhances transparency and security in supply chain management. It’s particularly used for high-value assets such as diamonds, art, and fine wines.

    The platform uses blockchain technology to create a digital ledger that records the provenance and lifecycle of these assets, ensuring authenticity and preventing fraud. Through offering a tamper-proof digital ledger, Everledger allows stakeholders to trace the origin and ownership history of valuable assets, reducing the risk of fraud and enhancing overall market transparency.

    The diamond industry is a great use case of the Everledger platform.

    By recording each diamond’s unique attributes and history on an immutable blockchain, Everledger provides a secure and transparent way to verify the authenticity and ethical sourcing of diamonds. This not only helps combat the circulation of conflict diamonds but also builds consumer trust by providing a verifiable digital record of each diamond’s journey from mine to market.

    To conclude

    While there is a buzz around blockchain, it’s important to note that the industry is well-established, and these surprising cases display the broad and exciting nature of the industry as a whole. There are still other advantages to blockchain that we haven’t delved into in this article, but we’ve highlighted one of its greatest advantages for businesses and consumers alike: its transparency.
    If you or your business are working on an unusual blockchain case, let us know – we would love to hear about it! And if you are looking for reliable FinTech or blockchain experts, give us a shout; we offer many services to fix issues of scale.

    The post 10 Unusual Blockchain Use Cases appeared first on Erlang Solutions .


      The XMPP Standards Foundation: The XMPP Newsletter May 2024

      news.movim.eu / PlanetJabber · Thursday, 6 June - 00:00 · 4 minutes

    XMPP Newsletter Banner

    Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of May 2024.

    XMPP and Google Summer of Code 2024

    The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and will kick-off with coding now:

    XSF and Google Summer of Code 2024

    XSF Fiscal Hosting Projects

    The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective . For more information, see the announcement blog post . Current projects you can support:

    XMPP Events

    XMPP Videos

    Debian and XMPP in Wind and Solar Measurement talk at MiniDebConf Berlin 2024.

    XMPP Articles

    XMPP Software News

    XMPP Clients and Applications

    XMPP Servers

    XMPP Web as Openfire plugin

    XMPP Libraries & Tools

    Slixfeed News Bot

    Extensions and specifications

    The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs .

    Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001 , which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process . Communication around Standards and Extensions happens in the Standards Mailing List ( online archive ).

    Proposed

    The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

    • No XEP was proposed this month.

    New

    • No new XEPs this month.

    Deferred

    If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

    • No XEPs deferred this month.

    Updated

    • Version 2.5.0 of XEP-0030 (Service Discovery)
      • Add note about some entities not advertising the feature. (pep)
    • Version 1.34.6 of XEP-0045 (Multi-User Chat)
      • Remove contradicting keyword on sending subject in §7.2.2. (pep)

    Last Call

    Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

    • XEP-0421: Anonymous unique occupant identifiers for MUCs
    • XEP-0440: SASL Channel-Binding Type Capability

    Stable

    • Version 1.0.0 of XEP-0398 (User Avatar to vCard-Based Avatars Conversion)
      • Accept as Stable as per Council Vote from 2024-04-30. (XEP Editor (dg))

    Deprecated

    • No XEP deprecated this month.

    Rejected

    • No XEP rejected this month.

    Spread the news

    Please share the news on other networks:

    Subscribe to the monthly XMPP newsletter
    Subscribe

    Also check out our RSS Feed !

    Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board .

    Newsletter Contributors & Translations

    This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

    • English (original): xmpp.org
      • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, singpolyma, XSF iTeam
    • French: jabberfr.org and linuxfr.org
      • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
    • Italian: notes.nicfab.eu
      • Translators: nicola
    • Spanish: xmpp.org/categories/newsletter/
      • Translators: Gonzalo Raúl Nemmi

    Help us to build the newsletter

    This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad . At the end of each month, the pad’s content is merged into the XSF Github repository . We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

    Tasks we do on a regular basis:

    • gathering news in the XMPP universe
    • short summaries of news and events
    • summary of the monthly communication on extensions (XEPs)
    • review of the newsletter draft
    • preparation of media images
    • translations
    • communication via media accounts

    License

    This newsletter is published under CC BY-SA license .

    This post is public: xmpp.org/2024/06/the-xmpp-newsletter-may-2024/


      ProcessOne: Understanding messaging protocols: XMPP and Matrix

      news.movim.eu / PlanetJabber · Tuesday, 4 June - 16:01 · 5 minutes

    In the world of real-time communication, two prominent protocols often come into discussion: XMPP and Matrix. Both protocols aim to provide robust and secure messaging solutions, but they differ in architecture, features, and community adoption. This article delves into the key differences and similarities between XMPP and Matrix to help you understand which might be better suited for your needs.

    What is XMPP?

    Overview

    XMPP (Extensible Messaging and Presence Protocol) is an open-standard communication protocol originally developed for instant messaging (IM). It was designed as the Jabber protocol in 1999 to aggregate communication across a number of options, such as ICQ, Yahoo Messenger, and MSN. It was standardized by the IETF as RFC 3920 and RFC 3921 in 2004, and later revised as RFC 6120 and RFC 6121 in 2011.

    Key Features

    • Decentralized Architecture : XMPP operates on a decentralized network of servers. The protocol is said to be federated. The network of all interconnected XMPP servers is called the XMPP federation.
    • Extensibility : The protocol is highly extensible through XMPP Extension Protocols (XEPs). There are currently more than 400 extensions covering a broad range of use cases like social networking and Internet of Things features through PubSub extensions, Groupchat (aka MUC, Multi-user chat), and VoIP with the Jingle protocol.
    • Security : Supports TLS for encryption and SASL for authentication. End-to-end encryption is available through the OMEMO extension.
    • Interoperability : Widely adopted with numerous clients and servers available.
    • Gateways : Built-in support for gateways to other protocols, allowing for communication across different messaging systems.

    Network Protocol Design

    • TCP-Level Stream Protocol : XMPP is based on a TCP-level stream protocol using XML and namespaces, which provides extensibility while maintaining schema consistency. It can also run on top of other protocols like WebSocket or HTTP through the concept of binding.
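To make the XML-stream point concrete, here is a minimal message stanza built with Python's standard library. The JIDs are placeholders, and a real client would send this over an established, authenticated stream rather than constructing it in isolation:

```python
import xml.etree.ElementTree as ET

# A minimal XMPP message stanza: a namespaced XML element with routing
# attributes and a body child.
msg = ET.Element("message", {
    "xmlns": "jabber:client",
    "from": "alice@example.org/home",
    "to": "bob@example.net",
    "type": "chat",
})
ET.SubElement(msg, "body").text = "Hello over XMPP"

wire = ET.tostring(msg, encoding="unicode")   # what goes onto the stream
```

Extensions slot into the same stanza as extra child elements under their own namespaces, which is how XEPs add features without breaking existing parsers.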

    Use Cases

    • Instant messaging
    • Presence information
    • Multi-user chat (MUC)
    • Social networks
    • Voice and video calls (with extensions)
    • Internet of Things
    • Massive messaging (massive scale messaging platforms like WhatsApp)

    What is Matrix?

    Overview

    Matrix is an open standard protocol for real-time communication, designed to provide interoperability between different messaging systems. It was introduced in 2014 by the Matrix.org Foundation.

    Key Features

    • Decentralized Architecture : Like XMPP, Matrix is also decentralized and supports a federated model.
    • Event-Based Model : Uses an event-based architecture where all communications are stored in a distributed database. The conversations are replicated on all servers in the federation that participate in the discussion.
    • End-to-End Encryption : Built-in end-to-end encryption using the Olm and Megolm libraries.
    • Bridging : Strong focus on bridging to other communication systems like Slack, IRC, and XMPP.

    Network Protocol Design

    • HTTP-Based Protocol : Matrix uses HTTP for communication and JSON for its data structure, making it suitable for web environments and easy to integrate with web technologies.
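For contrast with XMPP's XML stanzas, a Matrix message is plain JSON. The sketch below builds the shape of an `m.room.message` event; the room and user IDs are placeholders, and a real client would send the content via the homeserver's HTTP client-server API:

```python
import json

# The JSON shape of a Matrix text-message event as seen by clients.
event = {
    "type": "m.room.message",
    "room_id": "!abc123:example.org",     # placeholder room ID
    "sender": "@alice:example.org",       # placeholder user ID
    "content": {"msgtype": "m.text", "body": "Hello over Matrix"},
}

wire = json.dumps(event)        # the payload carried over HTTP
decoded = json.loads(wire)
```

Because the whole protocol is JSON over HTTP, any environment with an HTTP stack and a JSON parser can participate, which is what makes Matrix easy to integrate with web technologies.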

    Use Cases

    • Instant messaging
    • VoIP and video conferencing
    • Bridging different chat systems

    Comparison

    Architecture

    • XMPP : Uses a federated model to build a network of communication that works for both messaging and social networking. The content is not duplicated by default.
    • Matrix : Uses a federated model where each server stores a complete history of conversations, allowing for decentralized control and redundancy.

    XMPP is built around an event-based architecture to reach the largest possible scale. Matrix is built around a distributed model that may be more appealing to smaller community servers. As the conversations are distributed, it can cope more easily with servers suffering from frequent disconnections in the federated network.

    Extensibility

    • XMPP : Extensible through XEPs that are standardized by the XMPP Standards Foundation, allowing for a wide variety of additional features. As the protocol is based on XML, it can also be extended for custom client features, using your own namespace and schema.
    • Matrix : Extensible through modules and APIs, with a strong focus on bridging to other protocols. It is less flexible at the protocol level.

    Security

    • XMPP : Supports TLS for secure communication and SASL for authentication. End-to-end encryption is available through extensions like OMEMO.
    • Matrix : Supports TLS for secure communication. Built-in end-to-end encryption using Olm and Megolm, providing robust security out of the box.

    Both end-to-end encryption approaches are similar, as they are both based on the same double ratchet encryption algorithm made popular by the Signal messaging platform.

    Interoperability

    • XMPP : Known for its interoperability due to its long-standing presence and wide adoption. Includes built-in support for gateways to other protocols.
    • Matrix : Designed with interoperability in mind, with native support for bridging to other protocols. More recent gateways are available. They could be ported to work on both protocols (which would be neat).

    Scalability

    • XMPP : By design, XMPP has an edge in terms of scalability. XMPP is event-based and works as a broadcast hub for messages, making it efficient in handling a large number of concurrent users. It is proven to sustain millions of concurrent users.
    • Matrix : Matrix maps conversations to documents that are replicated across servers involved in the discussion. This means the document state needs to be merged and reconciled for each new posted message, which incurs significant overhead in terms of processing power, memory, and storage. Its use case is mainly “organization level” chat, supporting thousands of users, not millions.

    Community and Adoption

    • XMPP : Established and widely adopted with a large number of client and server implementations. This can be seen as a drawback, leading to an intimidating choice of tools. However, it has proven to be a strength: the many competing implementations have shown themselves to be interoperable, a validation of the robustness of the protocol. Originally developed as Jabber (whose commercial steward was later acquired by Cisco), it is an Internet Engineering Task Force standard used for massive scale deployments.
    • Matrix : Rapidly growing community with increasing adoption, particularly in open-source projects and decentralized applications. The main implementation is developed by Element, the company founded to grow the Matrix protocol.

    Conclusion

    Both XMPP and Matrix offer robust solutions for real-time communication with their own strengths. XMPP’s long history, extensibility, and efficient scalability make it a reliable choice for traditional instant messaging and presence-based applications, but also social networks, Internet of Things, and workflows that mix human users and devices. On the other hand, Matrix’s architecture, built-in end-to-end encryption, and focus on gateway development make it an excellent choice for those looking to integrate multiple communication systems or require secure corporate messaging through the Element client.

    Using a server like ejabberd is a future-proof approach, as it is multiprotocol by design. ejabberd supports XMPP, MQTT, SIP, can act as a VoIP and video call proxy (STUN/TURN), and can federate with the Matrix network. It is likely to support the Matrix client protocol as well in beta in the near future.

    Choosing between XMPP and Matrix depends largely on your specific needs, existing infrastructure, and future scalability requirements. Both protocols continue to evolve, offering exciting possibilities for real-time communication.


    Mistakes? If you spot a mistake, please reach out to share it! Thanks! I would like this document to be as accurate as possible.

    The post Understanding messaging protocols: XMPP and Matrix first appeared on ProcessOne .

      Remko Tronçon: Packaging Swift apps for Alpine Linux

      news.movim.eu / PlanetJabber · Sunday, 2 June - 00:00

    While trying to build my Age Apple Secure Enclave plugin , a small Swift CLI app, on Alpine Linux , I realized that the Swift toolchain doesn’t run on Alpine Linux. The upcoming Swift 6 will support (cross-)compilation to a static (musl-based) Linux target, but I suspect an Alpine Linux version of Swift itself isn’t going to land soon. So, I explored some alternatives for getting my Swift app on Alpine.

    This post is public: mko.re/blog/swift-alpine-packaging/


      Erlang Solutions: Blockchain Tech Deep Dive 2/4 | Myths vs. Realities

      news.movim.eu / PlanetJabber · Thursday, 30 May - 09:06 · 15 minutes

    This is the second part of our ‘Making Sense of Blockchain’ blog post series – you can read part 1 on ‘6 Blockchain Principles’ here. This article is based on the original post by Dominic Perini here.

    Join our FinTech mailing list for more great content and industry and events news – sign up here >>

    With so much hype surrounding blockchain, we separate the reality from the myths to ensure delivery of the ROI and competitive advantage that you need.
    It’s not our aim here to discuss the data structure of blockchain itself, issues like those of transactions per second (TPS) or questions such as ‘what’s the best Merkle tree solution to adopt?’. Instead, we shall examine the state of maturity of blockchain technology and its alignment with the core principles that underpin a distributed ledger ecosystem.

    Blockchain technology aims to embrace the following high-level principles:

    7 founding principles of blockchain

    • Immutability
    • Decentralisation
    • ‘Workable’ consensus
    • Distribution and resilience
    • Transactional automation (including ‘smart contracts’)
    • Transparency and Trust
    • A link to the external world

    Immutability of history

    In an ideal world, it would be desirable to preserve an accurate historical trace of events and make sure this trace does not deteriorate over time, whether through natural events, human error or the intervention of fraudulent actors. Artefacts produced in the analogue world face alteration over time, while in the digital world the quantised/binary nature of stored information provides the opportunity for continuous corrections to prevent the deterioration that might otherwise occur.

    Writing an immutable blockchain aims to retain a digital history that cannot be altered over time. This is particularly useful when it comes to assessing the ownership or the authenticity of an asset or to validate one or more transactions.

    We should note that, on top of the inherent immutability of a well-designed and implemented blockchain, hashing algorithms provide a means to encode the information written into the history, so that a trace/transaction can only be verified by actors possessing sufficient data to compute the one-way cascaded encoding/encryption. This is typically implemented on top of Merkle trees, where hashes of concatenated hashes are computed.
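The "hashes of concatenated hashes" idea is easy to show directly. The following is a minimal, generic Merkle-root computation (not any specific chain's exact leaf encoding or padding rule; duplicating the last hash on odd levels is one common convention):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of transaction strings into a single root hash."""
    level = [sha256(leaf.encode()) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last hash on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

root = merkle_root(["tx1", "tx2", "tx3"])
tampered = merkle_root(["tx1", "tx2", "tx4"])   # one altered leaf
```

Changing any single leaf changes the root, so a verifier holding only the root can detect tampering without storing every transaction, which is exactly the property the immutable history depends on.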

    Legitimate questions can be raised about the guarantees for indefinitely storing an immutable data structure:

    • If this is an indefinitely growing history, where can it be stored once it grows beyond the capacity of the ledgers?
    • As the history size grows (and/or the computing power needed to validate further transactions increases) this reduces the number of potential participants in the ecosystem, leading to a de facto loss of decentralisation. At what point does this concentration of ‘power’ create concerns?
    • How does verification performance deteriorate as the history grows?
    • How does it deteriorate when a lot of data gets written on it concurrently by users?
    • How large is the segment of data that must be replicated on each ledger node?
    • How much network traffic would such replication generate?
    • How much history is needed to be able to compute a new transaction?
    • What compromises need to be made on linearisation of the history, replication of the information, capacity to recover from anomalies and TPS throughput?


    Further to the above questions, how many replicas converging to a specific history (i.e. consensus) are needed for it to carry on existing? And in particular:

    • Can a fragmented network carry on writing to its known history?
    • Is an approach that ‘heals’ discrepancies in the immutable history of transactions by rewarding the longest fork fair and efficient?
    • Are the deterrents strong enough to prevent a group of ledgers forming their own fork that eventually reaches wider adoption?


    Furthermore, the requirement to comply with the General Data Protection Regulation (GDPR) in Europe, and in particular ‘the right to be forgotten’, introduces new challenges to the prospect of keeping permanent and immutable traces indefinitely. This is important because fines for breaches of GDPR are potentially very severe. The solutions introduced so far effectively aim at anonymising the information that enters the immutable on-chain storage process, while sensitive information is stored separately in support databases from which it can be deleted if required. None of these approaches has yet been tested by the courts.

    The challenging aspect here is to decide upfront what is considered sensitive and what can safely be placed on the immutable history. A wrong choice can backfire at a later stage in the event that any involved actor manages to extract or trace sensitive information through the immutable history.
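The mitigation described above, a meaningless hash on-chain while the sensitive data lives in a deletable off-chain store, can be illustrated with a short sketch (hypothetical names, for illustration only; not a compliance recipe):

```python
import hashlib
import os

chain = []          # append-only; immutable in a real system
off_chain = {}      # mutable support database holding the sensitive data

def record(user_id: str, personal_data: str) -> None:
    """Write a salted hash on-chain and keep the real data off-chain."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    chain.append(digest)                        # on-chain: devoid of meaning
    off_chain[user_id] = (salt, personal_data)  # off-chain: deletable

def forget(user_id: str) -> None:
    """'Right to be forgotten': delete the off-chain record. The on-chain
    hash remains, but without the salt and data it cannot be linked back."""
    del off_chain[user_id]

record("u1", "Alice <alice@example.com>")
forget("u1")
assert "u1" not in off_chain and len(chain) == 1
```

The random salt matters: without it, an attacker who guesses the personal data could confirm the guess against the on-chain hash.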

    Immutability is one of the fundamental principles motivating research into blockchain technology, both private and public. The solutions explored so far have managed to provide a satisfactory response to market needs via history linearisation techniques, one-way hashing, Merkle trees and off-chain storage, although the linearity of the immutable history comes at a cost (notably transaction volume).

    Decentralisation of control

    One of the reactions to the 2008 global financial crisis was a backlash against over-centralisation, which led to the exploration of various decentralised mechanisms. The proposition that individuals should enjoy the freedom to be independent of a central authority gained in popularity. Self-determination, democratic fairness and heterogeneity as a form of wealth are among the dominant values broadly recognised in Western (and, increasingly, non-Western) society. These values added weight to the view that introducing decentralisation into a system is positive.

    With full decentralisation, there is no central authority to resolve potential transactional issues for us. Traditional, centralised systems have well-developed anti-fraud and asset-recovery mechanisms which people have become used to. Using new, decentralised technology places a far greater responsibility on the user if they are to receive all of the benefits of the technology, forcing them to take additional precautions when handling and storing their digital assets.

    There’s no point having an ultra-secure blockchain if one then hands over one’s wallet private key to an intermediary whose security is lax: it’s like having the most secure safe in the world and then writing the combination on a whiteboard in the same room.

    Is the increased level of personal responsibility that goes with the proper implementation of a secure blockchain a price that users are willing to pay? Or, will they trade off some security in exchange for ease of use (and, by definition, more centralisation)?

    Consensus

    The consistent push towards decentralised forms of control and responsibility has brought to light the fundamental requirement to validate transactions without a central authority: the ‘consensus’ problem. Several approaches have grown out of the blockchain industry, some competing and some complementary.

    There has also been a significant focus on the concept of governance within a blockchain ecosystem. This concerns the need to regulate the rates at which new blocks are added to the chain and the associated rewards for miners (in the case of blockchains using proof of work (POW) consensus methodologies). More generally, it is important to create incentives and deterrent mechanisms whereby interested actors contribute positively to the healthy continuation of chain growth.

    Besides serving as an economic deterrent against denial-of-service and spam attacks, POW approaches were among the first attempts to work out automatically, via the use of computational power, which ledgers/actors have the authority to create/mine new blocks. Other similar approaches (proof of space, proof of bandwidth, etc.) followed; however, they all suffer from deviations from the intended fair distribution of control. Wealthy participants can exploit these approaches to gain an advantage by purchasing large quantities of high-performance (CPU/memory/network bandwidth) dedicated hardware and operating it in jurisdictions where electricity is relatively cheap. This lets them overtake the competition for the reward, and for the authority to mine new blocks, which has the inherent effect of centralising control. The huge energy consumption that comes with the inefficient, competitive race to mine new blocks under POW consensus has also raised concerns about its environmental impact and economic sustainability.
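The competitive search at the heart of POW can be illustrated with a toy nonce hunt (a deliberately low difficulty for illustration; real networks tune difficulty so the race consumes vastly more computation):

```python
import hashlib

def pow_search(block: bytes, difficulty: int) -> int:
    """Find a nonce so the block hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

block = b"prev_hash|tx_root|timestamp"
nonce = pow_search(block, difficulty=4)

# Verification is a single hash, regardless of how long the search took:
# this asymmetry is what makes POW both a deterrent and an energy sink.
assert hashlib.sha256(block + nonce.to_bytes(8, "big")).hexdigest().startswith("0000")
```

Each extra zero digit of difficulty multiplies the expected search cost by sixteen, while the verification cost stays constant.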

    Proof of Stake (POS) and Proof of Importance (POI) are among the ideas introduced to drive consensus via the use of more social parameters, rather than computing resources. These two approaches link the authority to the accumulated digital asset/currency wealth or the measured productivity of the involved participants. Implementing POS and POI mechanisms, whilst guarding against the concentration of power/wealth, poses not insubstantial challenges for their architects and developers.
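The stake-weighted idea behind POS can be sketched minimally as follows (hypothetical stakes; real protocols add verifiable randomness, slashing and many other safeguards):

```python
import random

stakes = {"a": 60, "b": 30, "c": 10}   # digital-asset wealth per validator

def pick_validator(seed: int) -> str:
    """Choose the next block producer with probability proportional to stake."""
    rng = random.Random(seed)          # a real chain derives this from the chain itself
    validators = list(stakes)
    return rng.choices(validators, weights=[stakes[v] for v in validators])[0]

# Over many rounds, selection frequency tracks stake (roughly 60/30/10),
# illustrating how wealth concentration translates into control.
wins = {v: 0 for v in stakes}
for round_no in range(10_000):
    wins[pick_validator(round_no)] += 1
assert wins["a"] > wins["b"] > wins["c"]
```

The final assertion is precisely the concentration-of-power concern raised above: the largest stakeholder wins most often by construction.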

    More recently, semi-automatic approaches, driven by a human-curated group of ledgers, are putting in place solutions to overcome the limitations and arguable fairness of the above strategies. The Delegated Proof of Stake (DPOS) and Proof of Authority (POA) methods promise higher throughput and lower energy consumption, while the human element can ensure a more adaptive and flexible response to potential deviations caused by malicious actors attempting to exploit a vulnerability in the system.

    Distribution and resilience

    Apart from decentralising authority, control and governance, blockchain solutions typically embrace a distributed peer-to-peer (P2P) design paradigm. This preference is motivated by the inherent resilience and flexibility that these types of network have demonstrated, particularly in the context of file and data sharing.

    A centralised network, typical of mainframes and centralised services, is clearly exposed to a ‘single point of failure’ vulnerability, as operations are always routed through a central node. If the central node breaks down or becomes congested, all other nodes are affected by the disruption.

    Decentralised and distributed networks attempt to reduce the detrimental effects that issues occurring on one node might trigger on other nodes. In a decentralised network, the failure of a node can still affect several neighbouring nodes that rely on it to carry out their operations. In a distributed network, the idea is that the failure of a single node should not significantly impact any other node. In fact, even when one preferential/optimal route in the network becomes congested or breaks down entirely, a message can reach its destination via an alternative route. This greatly increases the chance of keeping a service available in the event of failure or malicious attacks such as a denial-of-service (DOS) attack.
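The rerouting argument can be made concrete with a tiny four-node mesh and a breadth-first search for an alternative route (a schematic sketch, not a real P2P routing protocol):

```python
from collections import deque

# A small mesh: A can reach D either via B or via C.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

def route(src, dst, down=frozenset()):
    """Breadth-first search for any path that avoids failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(mesh[path[-1]] - set(down) - seen):
            seen.add(nxt)
            queue.append(path + [nxt])
    return None                        # no surviving route: network partitioned

assert route("A", "D") == ["A", "B", "D"]               # preferred route via B
assert route("A", "D", down={"B"}) == ["A", "C", "D"]   # B fails: reroute via C
assert route("A", "D", down={"B", "C"}) is None         # both relays down
```

With more nodes and more redundant links, the number of simultaneous failures needed to partition the network grows, which is exactly the resilience property discussed above.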

    Blockchain networks where a distributed topology is combined with a high redundancy of ledgers backing a history have occasionally been declared ‘unhackable’ by enthusiasts or, as some more prudent debaters say, ‘difficult to hack’. There is truth in this, especially when it comes to very large networks such as that of Bitcoin. In such a highly distributed network, the resources needed to generate a significant disruption are very high, which not only delivers on the resilience requirement but also works as a deterrent against malicious attacks (principally because the cost of conducting a successful malicious attack becomes prohibitive).

    Although a distributed topology can provide an effective response to failures or traffic spikes, you need to be aware that delivering resilience against prolonged over-capacity demands or malicious attacks requires adequate adaptation mechanisms. While the Bitcoin network is well positioned, as it currently benefits from a high-capacity condition (due to the historically high incentive for third-party miners to purchase hardware), this is not the case for other emerging networks as they grow in popularity. This is where novel instruments, capable of delivering preemptive adaptation combined with back-pressure throttling applied at the P2P level, can be of great value.

    Distributed systems are not new and, whilst they provide highly robust solutions to many enterprise and governmental problems, they are subject to the laws of physics and require their architects to consider the trade-offs that need to be made in their design and implementation (e.g. consistency vs availability).

    Automation

    In order to sustain a coherent, fair and consistent blockchain and surrounding ecosystem, a high degree of automation is required. Areas with a high demand for automation include those common to most distributed systems: deployment, elastic topologies, monitoring, recovery from anomalies, testing, continuous integration and continuous delivery. In the context of blockchains, these represent well-established IT engineering practices. Additionally, there is a creative R&D effort to automate the interactions required to handle assets, computational resources and users across a range of new problem spaces (e.g. logistics, digital asset creation and trading).

    Social and commercial interactions have seen a significant shift towards scripted transactional operations. This is where smart contracts and constrained virtual machine (VM) interpreters have emerged, an effort pioneered by the Ethereum project.

    The ability to define how an asset exchange operates, under which conditions and following which triggers, has attracted many blockchain enthusiasts. Some of the most common applications of smart contracts involve lotteries, trade in digital assets and derivative trading. While there is clearly exciting potential unleashed by the introduction of smart contracts, it is also true that this is still an area with a high entry barrier. Only skilled developers who are willing to invest time in learning Domain Specific Languages (DSLs) have access to the actual creation and modification of these contracts.

    The challenge is to address safety and security concerns when smart contracts are applied to edge cases that deviate from the ‘happy path’. If badly designed contracts cannot properly roll back or undo a miscarried transaction, their execution might lead to assets being lost or erroneously handed over to unwanted receivers.
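The concern about miscarried transactions can be made concrete with a toy transfer function that validates every condition before touching state, so a failed call leaves balances unchanged rather than stranding the asset (a hypothetical sketch, far simpler than a real contract VM):

```python
balances = {"alice": 10, "bob": 0}

class ContractError(Exception):
    """Raised when a precondition fails; the transaction must not apply."""

def transfer(state: dict, sender: str, receiver: str, amount: int) -> dict:
    """Return a new state with the transfer applied, or raise without side effects."""
    if amount <= 0:
        raise ContractError("amount must be positive")
    if receiver not in state:
        raise ContractError("unknown receiver")   # the classic 'lost asset' path
    if state.get(sender, 0) < amount:
        raise ContractError("insufficient funds")
    new_state = dict(state)                       # mutate a copy, commit atomically
    new_state[sender] -= amount
    new_state[receiver] += amount
    return new_state

balances = transfer(balances, "alice", "bob", 4)          # happy path
try:
    balances = transfer(balances, "alice", "mallory", 4)  # off the happy path
except ContractError:
    pass                                                  # state is untouched
assert balances == {"alice": 6, "bob": 4}
```

The design choice here, validate-then-commit on a copy of the state, is what many on-chain VMs enforce by reverting all effects of a failed call.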

    Another area in high need of automation is governance. Any blockchain ecosystem of users and computing resources requires periodic reconfiguration of its parameters to carry on operating coherently and consensually. This results in a complex exercise in tuning incentives and deterrents to guarantee the fulfilment of ambitious collaborative and decentralised goals. The newly emerging field of ‘blockchain economics’ (combining economics, game theory, social science and other disciplines) remains in its infancy.

    Clearly, the removal of a central ruling authority produces a vacuum that needs to be filled by an adequate decision-making body, which is typically supplied with automation that maintains a combination of static and dynamic configuration settings. The consensus solutions referred to earlier, which use computational resources or stakeable digital assets to assign the authority not only to produce blocks but also to steer the variable part of governance, have succeeded in filling the decision-making gap in a fair and automated way. Subsequently, however, the exploitation of flaws in the static element of governance has hindered the success of these models. This has contributed to the rise in popularity of curated approaches such as POA or DPOS, which not only bring back centralised control but also reduce the automation of governance.

    We expect this to be one of the major areas where blockchain has to evolve in order to succeed in getting widespread market adoption.

    Transparency and trust

    In order to produce the desired audience engagement for blockchain, and eventual mass adoption and success, consensus and governance mechanisms need to operate transparently. Users need to know who has access to what data, so that they can decide what can be stored and possibly shared on-chain. These are the contractual terms by which users agree to share their data. As previously discussed, users might be required to exercise the right for their data to be deleted, which is typically a feature delivered via auxiliary, ‘off-chain’ databases. In contrast, only hashed information, effectively devoid of its meaning, is preserved permanently on-chain.

    Given the immutable nature of the chain history, it is important to decide upfront what data should be permanently written on-chain and what gets written off-chain. The users should be made aware of what data gets stored on-chain and with whom it could potentially be shared. Changing access to on-chain data or deleting it goes against the fundamentals of immutability and therefore is almost impossible. Getting that decision wrong at the outset can significantly affect the cost and usability (and therefore likely adoption) of the particular blockchain in question.

    Besides transparency, trust is another critical feature that users legitimately seek. Trust has to go beyond the scope of the people involved as systems need to be trusted as well. Every static element, such as an encryption algorithm, the dependency on a library, or a fixed configuration, is potentially exposed to vulnerabilities.

    Link to the external world

    The attractive features that blockchain has brought to the internet market would be limited to handling digital assets unless there were a way to link information to the real world. It is safe to say that there would be far less interest if a blockchain could only operate within the restrictive boundaries of the digital world, without connecting to the analogue world in which we live.

    Technologies used to overcome these limitations include cyber-physical devices such as sensors for input and robotic actuators for output and, in most circumstances, people and organisations. Reading through most blockchain white papers, we occasionally come across the notion of the Oracle: in short, an input coming from a trusted external source that can trigger/activate a sequence of transactions in a smart contract, or which can otherwise be used to validate information that cannot be validated within the blockchain itself.

    Bitcoin and Ethereum, still the two dominant projects in the blockchain space, are viewed by many investors as an opportunity to diversify a portfolio or to speculate on the value of their respective cryptocurrencies. The same applies to a wide range of other cryptocurrencies, with the exception of fiat-pegged currencies, most notably Tether, whose value is effectively bound to the US dollar. Conversions from one cryptocurrency to another, and to/from fiat currencies, are normally operated by exchanges on behalf of an investor. These are again peripheral services that serve as a link to the external, physical world.

    Besides oracles and cyber-physical links, interest is emerging in linking smart contracts together to deliver a comprehensive solution. Contracts could indeed operate in a cross-chain scenario to offer interoperability among a variety of digital assets and protocols. Although attempts to combine different protocols and approaches have emerged, this is still an area where further R&D is necessary in order to provide enough instruments and guarantees to developers and entrepreneurs. The challenge is to deliver cross-chain functionalities without the support of a central governing agency/body.

    * originally published 2018 by Dominic Perini

    For any business size in any industry, we’re ready to investigate, build and deploy your blockchain-based project on time and to budget.

    Let’s talk

    If you want to start a conversation about engaging us for your fintech project or talk about partnering and collaboration opportunities, please send our Fintech Lead, Michael Jaiyeola, an email or connect with him via LinkedIn.

    The post Blockchain Tech Deep Dive 2/4 | Myths vs. Realities appeared first on Erlang Solutions.

      www.erlang-solutions.com/blog/blockchain-tech-deep-dive-2-4-myths-vs-realities/

      The XMPP Standards Foundation: Scaling up with MongooseIM 6.2.1

      news.movim.eu / PlanetJabber · Tuesday, 28 May - 00:00 · 3 minutes

    MongooseIM is a scalable, extensible and efficient real-time messaging server that allows organisations to build cost-effective communication solutions. Built on the XMPP protocol, MongooseIM is specifically designed for businesses facing the challenge of large deployments, where real-time communication and user experience are critical. The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend, which simplifies and enhances its scalability.

    It is difficult to predict how much traffic your XMPP server will need to handle. This is why MongooseIM offers several means of scalability. Firstly, even one machine can handle millions of connected users, provided that it is powerful enough. However, one machine is not recommended for fault tolerance reasons, because every time it needs to be shut down for maintenance, upgrade or because of any issues, your service would experience downtime. As a result, it is recommended to have a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and then restarted before moving to the next node, maintaining fault tolerance. During such an upgrade, you can increase the hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier, because you only need to add new nodes to the already deployed cluster.

    Mnesia

    Mnesia is a built-in Erlang database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey, because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

    1. Consistency issues, which tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which should not be necessary for operating a general-purpose XMPP server.
    2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously impacting the possibilities for automatic scaling.
    3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

    First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia to store in-memory data shared between the cluster nodes. For example, a shared table of user sessions is necessary for message routing between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still needs to keep its schema on disk, and the first two issues listed above would not be eliminated.

    Introducing CETS

    Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent data. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to keep all your persistent data.

    Getting rid of Mnesia removes a number of important obstacles. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVCs), which can be costly, can get out of sync, and require additional management. Furthermore, with CETS you can easily set up horizontal autoscaling for your installation.

    See it in action

    If you want to quickly set up a working autoscaled MongooseIM cluster using Helm, see the detailed blog post. For more information, consult the documentation, GitHub or the product page. You can also try MongooseIM online.

    Read about Erlang Solutions as a sponsor of the XSF.

      xmpp.org/2024/05/scaling-up-with-mongooseim-6.2.1/

      Ignite Realtime Blog: New Openfire plugin: XMPP Web!

      news.movim.eu / PlanetJabber · Sunday, 26 May - 17:50 · 1 minute

    We are excited to be able to announce the immediate availability of a new plugin for Openfire: XMPP Web!

    This new plugin for the real-time communications server provided by the Ignite Realtime community allows you to install the third-party web client named ‘XMPP Web’ in mere seconds! By installing this new plugin, the web client is immediately ready for use.

    This new plugin complements others that similarly allow you to deploy a web client with great ease, like Candy, inVerse and JSXC! With the addition of XMPP Web, the selection of easy-to-install clients for your users becomes even larger!

    The XMPP Web plugin for Openfire is based on release 0.10.2 of the upstream project, which is currently the latest release. It will automatically become available for installation in the admin console of your Openfire server in the next few days. Alternatively, you can download it immediately from its archive page.

    Do you think this is a good addition to the suite of plugins? Do you have any questions or concerns? Do you just want to say hi? Please stop by our community forum or our live groupchat!

    For other release announcements and news, follow us on Mastodon or X.


      Erlang Solutions: Balancing Innovation and Technical Debt

      news.movim.eu / PlanetJabber · Thursday, 23 May - 10:58 · 10 minutes

    Let’s explore the delicate balance between innovation and technical debt.

    We will look into actionable strategies for managing debt effectively while optimising our infrastructure for resilience and agility.

    Balancing acts and trade-offs

    I was having this conversation with a close acquaintance not long ago. He’s setting up his new startup, filling a market gap he’s found, racing before the gap closes. It’s a common starting point for many entrepreneurs: you have an idea you need to implement, and until it is implemented and (hopefully) sold, there is no revenue, all while someone else could close the gap before you do. Time-to-market is key.

    While there’s no revenue, you acquire debt. And while you may be reasonably careful to keep it under control, you pay off financial debt with a different kind of debt: technical debt. You choose to make a trade-off here, a trade-off that is all too often made without awareness. This trade-off between debts requires careful thinking too: just as financial debt is an obvious risk, so is a technical one.

    Let’s define these debts. Technical debt is the accumulated cost of shortcuts or deferred maintenance in software development and IT infrastructure. Financial debt is the borrowing of funds to finance business operations or investments. They share a common thread: the trade-off between short-term gains and long-term sustainability.

    Just as financial debt can provide immediate capital for growth, it can also drag the business into financial inflexibility and burdensome interest rates. Technical debt expedites product development or reduces time-to-market at the expense of increased maintenance, reduced scalability and decreased agility. It is an often overlooked aspect of a technological investment, and prompt care for it can have a huge impact on the lifespan of the business. Just as an enterprise must manage its financial leverage to maintain solvency and liquidity, it must also manage its technical debt to ensure the reliability, scalability and maintainability of its systems and software.

    The Economics of Technical Debt

    Consider the example of a rapidly growing e-commerce platform: appeal attracts demand, demand requires resources, and resources mean increased vulnerability. The growing volume of user data and resources attracts threats aiming to disrupt services, steal sensitive data or cause reputational harm. In this environment, the platform’s success is determined by its ability to strike a delicate balance between serving legitimate customers and thwarting malicious actors, both of which grow in proportion.

    Early on, the platform prioritised rapid development and deployment of new features; however, in their haste to innovate, the technical team accumulated debt by taking shortcuts and deferring critical maintenance tasks. The result is a platform that is increasingly fragile and inflexible, leaving it vulnerable to disruptive attacks and more agile competitors. Meanwhile, reasonably, the platform’s financial team kept allocating capital to marketing campaigns, product launches and strategic acquisitions, under pressure to maximise profitability and shareholder value; however, they neglected to allocate sufficient resources to cybersecurity initiatives, viewing them as discretionary expenses rather than critical investments in risk mitigation and resilience.

    Technical currencies

    If we’re talking about debt and drawing a parallel with financial terms, let’s complete the parallel. By establishing the concept of currencies, we can build quantifiable metrics of value that reflect the health and resilience of digital assets. Code coverage, for instance, measures the proportion of the codebase exercised by automated tests, providing insight into the potential presence of untested or under-tested code paths. In this vein, tests and documentation are the two assets that pay down the most technical debt.

    See for example how coverage for MongooseIM has been continuously trending higher.

    Similarly, Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the process of integrating code changes, running automated tests, verifying engineering work, and deploying applications to diverse environments, enabling teams to deliver software updates frequently and with confidence. By streamlining the development workflow and reducing manual intervention, CI/CD pipelines enhance productivity, accelerate time-to-market, and minimise the risk of human error. Humans have bad days and sleepless nights; well-developed automation doesn’t.

    Additionally, valuations on code quality that are diligently tracked on the organisation’s ticketing system provide valuable insights into the evolution of software assets and the effectiveness of ongoing efforts to address technical debt and improve code maintainability. These valuations enable organisations to prioritise repayment efforts, allocating resources effectively.

    Repaying Technical Debt

    The longer any debt remains unpaid, the greater its impact on the organisation: (technical) debt accrues “interest” over time. But, much like in finance, a debt is paid with available capital, and choosing a payment strategy can make the difference between capital wasted and capital successfully (re)invested:

    1. Priorities and Plans: Identify and prioritise areas of technical debt based on their impact on the system’s performance, stability, and maintainability. Develop a plan that outlines the steps needed to address each aspect of technical debt systematically.
    2. Refactoring: Allocate time and resources to refactor code and systems to improve their structure, readability, and maintainability. Break down large, complex components into smaller, more manageable units, and eliminate duplicate or unnecessary code. See for example how we battled technical debt in MongooseIM.
    3. Automated Testing: Invest in automated testing frameworks and practices to increase test coverage and identify regression issues early in the development process. Implement continuous integration and continuous deployment (CI/CD) pipelines to automate the testing and deployment of code changes. Establishing this pipeline is always the first step into any new project we join, and we’ve become familiar with diverse CI technologies like GitHub Actions, CircleCI, GitLab CI, or Jenkins.
    4. Documentation: Enhance documentation efforts to improve understanding and reduce ambiguity in the codebase. Document design decisions, architectural patterns, and coding conventions to facilitate collaboration and knowledge sharing among team members. Choose technologies that facilitate and enhance documentation work.

    Repayment assets

    Repayment assets are resources or strategies that can be leveraged to make debt repayment financially viable. Here are some key repayment assets to consider:

    1. Training and Education: Provide training and education opportunities for developers to enhance their skills and knowledge in areas such as software design principles, coding best practices, and emerging technologies. Encourage continuous learning and professional development to empower developers to make informed decisions and implement effective solutions.
    2. Technical Debt Reviews: Conduct regular technical debt reviews to assess the current state of the codebase, identify areas of concern, and track progress in addressing technical debt over time. Use metrics and KPIs to measure the impact of technical debt reduction efforts and inform decision-making.
    3. Collaboration and Communication: Foster a culture of collaboration and communication among development teams, stakeholders, and other relevant parties. Encourage open discussions about technical debt, its implications, and potential strategies for repayment, and involve stakeholders in decision-making processes.
    4. Incremental Improvement: Break down technical debt repayment efforts into smaller, manageable tasks and tackle them incrementally. Focus on making gradual improvements over time rather than attempting to address all technical debt issues at once, prioritising high-impact and low-effort tasks to maximise efficiency and effectiveness.

    Don’t acquire more debt than you have to

    While debt is a quintessential aspect of entrepreneurship, acquiring it unwisely is shooting yourself in the foot. You will have to make many decisions and weigh many trade-offs, so you had better be well-informed before pressing any red buttons.

    Your service will require infrastructure

    Whether you choose one vendor over another or decide to go self-hosted, use containerised technologies, so that future moves to better infrastructure remain possible. Containers also provide a consistent environment across development, testing, and production. Choose technologies that are good citizens in containerised environments.

    Your service will require hardware resources

    Whatever hardware architecture or amount of memory you choose, use runtimes that can efficiently use and adapt to the hardware they are given, so that future upgrades to better hardware actually pay off. For example, Erlang's concurrency model is famous for automatically taking advantage of any number of cores, and with technologies like Elixir's Nx you can exploit exotic GPU and TPU hardware for your machine-learning tasks.
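    Erlang's scheduler spreads work across cores transparently; in most other runtimes you size the worker pool yourself. A minimal Python sketch of that explicit version, using only the standard library (the workload function here is a placeholder):

```python
import os
from concurrent.futures import ProcessPoolExecutor

def cpu_bound_task(n: int) -> int:
    # Placeholder CPU-bound workload: sum of squares below n.
    return sum(i * i for i in range(n))

def run_on_all_cores(inputs):
    # Adapt the pool to whatever hardware we happen to run on,
    # from a small embedded board to a many-core server.
    workers = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(cpu_bound_task, inputs))

# Usage (as a script): run_on_all_cores([10, 100, 1000])
```

    The same code then scales with a hardware upgrade without modification, which is the property the paragraph above asks of a runtime.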

    Your service will require agility

    The market will push your offerings to their limit, with a never-ending stream of requests for new functionality and changes to your service. Your code will need to change, and to respond to change. From Elixir's metaprogramming and language extensibility to Gleam's strong type safety, prioritise tools that likewise help your developers change things safely and powerfully.

    Your service will require resiliency

    There are two philosophies in the culture of error handling: either it is mathematically proven that errors cannot happen – Haskell's approach – or it is assumed that they cannot always be avoided and we must learn to handle them – Erlang's approach. Wise technologies take one of these as their a priori foundation and deal with the other end a posteriori. Choose your point on that scale wisely, and be wary of technologies that do not take a safe stance. Errors do happen: electricity goes down, cables are cut, and attackers attack. Programmers have sleepless nights or get sick. Take a stance before errors bite your service.

    Your service will require availability

    No fancy unique idea will sell if it cannot be bought, and no service will be used if it is not there to begin with. Unavailability takes a heavy toll on your revenue, so prioritise availability. Choose technologies that can handle not just failure but even upgrades (!) without downtime. And for real availability you always need at least two computers, in case one dies: choose technologies that let many independent computers cooperate easily and take over one another's work transparently.

    A Case Study: A Balancing Act in Traffic Management

    A chat system, like many web services, handles a practically unbounded number of independent users. It is a heavily network-based application that needs to respond to independent requests in a timely and fair manner. It is an embarrassingly parallel problem – messages can be processed independently of each other – but also one with soft real-time properties: messages should be processed soon enough for a human to have a good user experience. It also faces the challenge of bad actors, which makes request blacklisting and throttling necessary.

    MongooseIM is one such system. It is written in Erlang, and in its architecture, every user is handled by one actor.

    It is containerised, and it uses all available resources efficiently and smoothly, adapting to any change of hardware, from small embedded systems to massive mainframes. Its architecture leans heavily on the publish-subscribe pattern, and because Erlang is a functional language in which functions are first-class citizens, handlers for all sorts of events can be installed as plain functions – we never know what new functionality we will need to implement in the future.

    One important event is a new session starting. Blacklisting mechanisms abound, whether based on specific identifiers, IP regions, or even modern AI-based behaviour analysis; since we cannot predict the future, we simply publish the “session opened” event and leave it to our future selves to install the right handler when it is needed.
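    The pattern described above can be sketched in a few lines of Python (an illustration only, not MongooseIM's Erlang internals; the event names and the blacklist rule are made up for the example). Events are published whether or not anyone listens, and handlers are plain functions installed after the fact:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """A minimal publish-subscribe bus with first-class function handlers."""

    def __init__(self):
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self._handlers[event]:
            handler(payload)

bus = EventBus()
bus.publish("session_opened", {"user": "alice"})  # no handlers yet: a no-op

# Later, a need arises: install a (hypothetical) IP-based blacklist handler.
blocked: list[str] = []

def ip_blacklist_handler(payload: dict) -> None:
    if payload.get("ip", "").startswith("10."):
        blocked.append(payload["user"])

bus.subscribe("session_opened", ip_blacklist_handler)
bus.publish("session_opened", {"user": "mallory", "ip": "10.0.0.7"})
# blocked is now ["mallory"]
```

    The key property is that publishing costs nothing to design up front: new behaviour is added by installing handlers, not by rewriting the code that emits the events.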

    Another important event is a simple message being sent. What if bad actors have successfully opened sessions and start flooding the system, consuming CPU and database resources unnecessarily? Changing requirements might also dictate that the system handle some users preferentially. One default option is to slow down all message processing to some reasonable rate, for which we use a traffic-shaping mechanism called the token bucket algorithm, implemented in our library Opuntia – named that way because if you touch it too fast, it stings you.
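    Opuntia itself is an Erlang library; the token-bucket idea behind it can be sketched in Python (this is an illustration of the algorithm, not Opuntia's actual API). Tokens refill at a fixed rate up to a burst capacity, and each message spends one; a deterministic clock is passed in to make the behaviour easy to test:

```python
class TokenBucket:
    """Token-bucket traffic shaping: allow bursts, cap the sustained rate."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = 0.0           # timestamp of the previous check

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # touched too fast: the cactus stings

bucket = TokenBucket(rate=5.0, capacity=10.0)
# A burst of 10 messages at the same instant passes; the 11th is throttled.
results = [bucket.allow(0.0) for _ in range(11)]
```

    In production the `now` argument would come from a monotonic clock; the shape of the algorithm is the same.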

    You can read more about how scalable MongooseIM is in this article, where we pushed it to its limit. And while we continuously load-test our server, we have not done another round of limit-pushing since then; stay tuned for a future blog post when we do!

    Lessons Learned

    Technical Debt has an inherent value akin to Financial Debt. Choosing the right tool for the job means acquiring the right Technical Debt when needed – leveraging strategies, partnerships, and solutions that prioritise resilience, agility, and long-term sustainability.

    The post Balancing Innovation and Technical Debt appeared first on Erlang Solutions .


      www.erlang-solutions.com/blog/balancing-innovation-and-technical-debt/


      JMP: Newsletter: SMS Routes, RCS, and more!

      news.movim.eu / PlanetJabber · Tuesday, 21 May - 19:22 · 3 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    SMS Censorship, New Routes

    We have written before about the increasing levels of censorship across the SMS network. When we published that article, we had no idea just how bad things were about to get. At the beginning of April, our main SMS route decided to begin censoring, in both directions, all messages containing many common profanities. There was quite a bit of back and forth about this, but in the end the carrier declared that the SMS network is not meant for person-to-person communication and that they do not believe in allowing any profanity to cross their network.

    This obviously caused us to dramatically step up the priority of integrating with other SMS routes, work which is now nearing completion. We expect very soon to offer long-term customers new options which will not only dramatically reduce the censorship issue, but also in some cases remove the 10-recipient group text limit, dramatically improve acceptance by online services, and more.

    RCS

    We often receive requests asking when JMP will add support for RCS to complement our existing SMS and MMS offerings. We are happy to announce that we now have RCS access in internal testing. The access currently possible is better suited to business use than personal use, though a mix of both is certainly possible. We are assured that better access is coming later in the year, and we will keep you all posted on how that progresses. For now, if you are interested in testing this, especially as a business user, please let us know and we will be in touch when we are ready to start testing.

    One thing to note is that “RCS” means different things to different people. The main RCS features we currently have access to are typing notifications, displayed/read notifications, and higher-quality media transmission.

    Cheogram Android

    Cheogram Android 2.15.3-1 was released this month, with bug fixes and new features including:

    • Major visual refresh, including optional Material You
    • Better audio routing for calls
    • More customisation for the custom colour theme
    • Conversation read-status sync with other supporting apps
    • Don’t compress animated images
    • Do not default to the network country when there is no SIM (for phone number format)
    • Delayed-send messages
    • Message loading performance improvements

    New GeoApp Experiment

    We love OpenStreetMap, but some of us have found existing geocoder/search options lacking when it comes to searching by business name, street address, etc. As an experimental way to bridge that gap for now, we have produced a prototype Android app (source code) that searches Google Maps and lets you open search results in any mapping app you have installed. If people like this, we may extend it with a server-side component that hides all PII, including IP addresses, from Google, for a small monthly fee. For now, the prototype is free to test and will install as “Maps+” in your launcher until we come up with a better name (suggestions welcome!).

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!


      blog.jmp.chat/b/may-newsletter-2024