
      Prosodical Thoughts: Prosody 0.11.11 released

      The Prosody Team • news.movim.eu / PlanetJabber • 20 December, 2021 • 1 minute

    We are pleased to announce a new minor release from our stable branch.

    This release contains some fixes to PEP to control memory usage, along with a small batch of fixes for issues discovered since the last release.

    This will likely be the last release of the 0.11 branch.

    A summary of changes in this release:

    Fixes and improvements

    • net.server_epoll: Prioritize network events over timers to improve performance under heavy load
    • mod_pep: Add some memory usage limits
    • mod_pep: Prevent creation of services for non-existent users
    • mod_pep: Free resources on user deletion (needed a restart previously)

    Minor changes

    • mod_pep: Free resources on reload
    • mod_c2s: Indicate stream secure state in error text when no stream features to offer
    • MUC: Fix logic for access to affiliation lists
    • net.server_epoll: Improvements to shutdown procedure #1670
    • net.server_epoll: Fix potential issue with rescheduling of timers
    • prosodyctl: Fix to ensure LuaFileSystem is loaded when needed
    • util.startup: Fix handling of unknown command line flags (e.g. -h)
    • Fix version number reported as ‘unknown’ on *BSD

    Download

    As usual, download instructions for many platforms can be found on our download page.

    If you have any questions, comments or other issues with this release, let us know!

      Source: blog.prosody.im/prosody-0.11.11-released/

      Peter Saint-Andre: A Friend by Any Other Name

      Peter Saint-Andre • news.movim.eu / PlanetJabber • 19 December, 2021

    It is said that when the ancient Greek philosopher Epicurus died, he left behind thousands of friends. This was 2300 years before Facebook, so how could he have befriended so many people?...
      Source: stpeter.im/journal/1684.html

      Ignite Realtime Blog: Openfire 4.6.6 and 4.5.5 releases (Log4j-only changes)

      guus • news.movim.eu / PlanetJabber • 16 December, 2021 • 4 minutes

    As we’re monitoring developments around the recent Log4j vulnerabilities, we’ve decided to provide another update for Openfire to pull in the latest available updates from Log4j.

    Since the previous release, the Log4j team has released a new version (2.16.0) of their library, which provides better protection against the original vulnerability (CVE-2021-44228) and also guards against a newly discovered vulnerability (CVE-2021-45046) in Log4j.

    The Ignite Realtime community has decided to immediately make available new releases of Openfire that include this newer version of Log4j: Openfire 4.6.6 and Openfire 4.5.5.

    In addition to upgrading the Log4j libraries to version 2.16.0, we have put in place the mitigation measures that were defined for these CVEs. It’s important to note that these mitigation measures on their own are known to be insufficient to fully protect against the vulnerabilities; the update to version 2.16.0 of Log4j makes them redundant. We have opted to include them anyway, as we know that many of you modify Openfire to a great extent. If such modifications inadvertently re-introduce a vulnerable version of Log4j, at least some mitigation is in place. No changes other than these Log4j-related ones are included in the releases that we are publishing today.

    We are aware that for some, the process of deploying a new major version of Openfire is not a trivial matter, as it may encompass far more than updating the executables. Depending on the regulations that are in place, this process can require a lot of effort and take a long time to complete. To accommodate users currently on an older version of Openfire, we are also making available a new release in the older 4.5 branch that pulls in the Log4j update; upgrading to that version will, for some, require much less effort. Note well: although we are making a new version available in the 4.5 branch, we strongly recommend that you upgrade to the latest version of Openfire (currently in the 4.6 branch), as it includes important fixes and improvements that are not available in 4.5.

    The following sha256 checksums are valid for the Openfire 4.6.6 distributables:

    507b4899fb1c84b0ffd95c29278eeefd56ac63849bb730192b26779997ada21b  openfire-4.6.6-1.i686.rpm
    d2913d913449a9e255b10ea6ee22a5967083a295038c21d3b761bb236c22e0cd  openfire-4.6.6-1.noarch.rpm
    02aa7af09286f25fbceef1ea27940e1696ced1e3a6c28b5e0ae094d409580734  openfire-4.6.6-1.x86_64.rpm
    3add3c877745dcc6aacd335cfc8fe1674567bb3b28728cfa6c008556c59a9e98  openfire_4.6.6_all.deb
    00c5ecbbf725de1093bfe3e5774b8c0e532742435439f70a4435fc5bed828b99  openfire_4_6_6_bundledJRE.exe
    4ff92208e62f0455295a8cf68d57e2d9e3ede15c71aaab26cf1a410dce5aba5b  openfire_4_6_6_bundledJRE_x64.exe
    2584a6b61f0d9447a868f9bfadb5892d56d854198604b3ace9b638b8c217cac4  openfire_4_6_6.dmg
    6cc42bfb60a5f8453c37d980c24c2a5ba48e1e1363ebfcc5d7f2e1deb6da5f17  openfire_4_6_6.exe
    6431a22d2dd9f077b9b2ee8949238c0f076ab34d43ee200a6873fa5453630bd6  openfire_4_6_6.tar.gz
    ec8da5fdc93065df9bf41c0f4aebd6bb47f1dea11dcc96665ac0105f035378b2  openfire_4_6_6_x64.exe
    af68252b98b8af6afb0753b4054adcf4cab1968579eaaf644d4da663e9461dce  openfire_4_6_6.zip

    For Openfire 4.5.5, the sha256 checksums are:

    247f0769e0a449c698ac9c23b658a02131ac6f774f4394dc9bb4e7f114159cc8  openfire-4.5.5-1.i686.rpm
    4603f92ce9822d1f43d27a9e15b859232cd09f391e9aeef0b99a782a03ecd12e  openfire-4.5.5-1.noarch.rpm
    9df54cbef30664635ed2977a21beded56fa120c5ff9e89b4cfa7466171344517  openfire-4.5.5-1.x86_64.rpm
    0815f07094fcfaf4e17aca3ea26f42835b5ff1b486475aff6b743e914709e788  openfire_4.5.5_all.deb
    dff2e81da7457e3d8c1ee9e23ff43dd812f56db09df53588df7a5ea5622b1e6e  openfire_4_5_5_bundledJRE.exe
    96c2a4f5ed94dda76942ec7e540430c505448a2625a10f52cdc91c2dae0f720a  openfire_4_5_5_bundledJRE_x64.exe
    a1ddd675b24b661186645786d1489cb6d80c90c2cae178992af509b5241fb275  openfire_4_5_5.dmg
    971b97bc9d405a03d2c3fba51a698cf92397b24104b28fec06b993b6d52568ce  openfire_4_5_5.exe
    a5f199bf2347725b952a995c1cfbeb1b8e45c9a26c177100669eeed7679da742  openfire_4_5_5.tar.gz
    b5b55c5938b430fa50c702da6b8336be7f79d2c97eb09623dc0c9bd59663aead  openfire_4_5_5_x64.exe
    44f90a4f4f7ecebd7cffadc7f108e4bcb8b70dc77b36698d48efaf3eb7650c91  openfire_4_5_5.zip
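    A downloaded artefact can be checked against the published sums with sha256sum. A minimal sketch; the file name and hash below are illustrative stand-ins, not real Openfire artefacts — in practice you would use the real download and the matching sum from the lists above:

```shell
# Create a stand-in "downloaded" file, then verify it against a known sha256.
printf 'hello\n' > openfire-demo.bin
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  openfire-demo.bin" \
  | sha256sum -c -
# prints "openfire-demo.bin: OK" and exits 0 when the checksum matches
```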

    The process of upgrading is outlined in the Openfire upgrade guide. If you would prefer to enlist support in applying this update, various professional partners are available that can help.

    We are always happy to hear about your experiences, good or bad! Please consider dropping a note in the community forums or hanging out with us in our web support groupchat.

    For other release announcements and news, follow us on Twitter!


      Erlang Solutions: Aleksander Lisiecki’s prize-winning eArangoDB at SpawnFest 2021

      Erlang Admin • news.movim.eu / PlanetJabber • 13 December, 2021 • 3 minutes

    What is SpawnFest?

    It’s tempting to say that SpawnFest is an event that doesn’t need an introduction, but we’ll give it one anyway. SpawnFest is an annual remote hackathon, where teams have exactly one weekend (48 hours to be exact) to create the best BEAM-related applications they can. It is the biggest event of its kind in the Erlang and Elixir community, with fantastic sponsors, prizes, participants and judges coming together to learn, inspire and hack.

    This year, we’re extremely proud to announce that our very own Aleksander Lisiecki claimed a podium finish in a number of categories for his project eArangoDB. His final haul was: Maintainability – 1st 🥇, Correctness – 2nd 🥈 and Completion – 3rd 🥉. You can see the full results for overall winners and other categories here. We invited Aleksander to the blog to share more about his project, the inspiration and the competition as a whole:

    Why did you enter the competition?

    Aleksander: I enjoy the convenience of participating in a remote competition, which needs little explanation given the current COVID situation. Another advantage is that all teams are accepted, which is not the case in other hackathons. And last but not least, the prizes this year are awesome!

    What inspired this project?

    I have had an interest in graph databases for a while. The possibilities they give are exciting, and they can be used in a variety of really interesting ways, from social media modelling to fraud detection and many more. In last year’s SpawnFest I worked on a project to provide an API from Erlang to Neo4j, a graph database. This year I wanted to try a similar concept, but for ArangoDB. It is a more complex challenge, as it is not a pure graph database but a multi-model one, so there were many more API calls to implement. I wanted to have the option to choose between different databases, picking the best one depending on the use case and requirements.

    Sounds great, where can we see the work?

    A full demo with code and description is available here: https://github.com/spawnfest/eArangoDB/tree/master/demo/demo_cities

    Demo graph

    The judges loved your project; how do you feel it went?

    Overall I am satisfied with the result. I managed to implement the main part of my project. There are some rough edges here and there, but that is always the case with hackathon-style projects.

    Any thoughts on how this project could be improved in the future?

    There is always space for improvement. The competition only lasts 48 hours, which means you are not able to polish the project to a shiny finish, but it is important to have the basic concept working. In the future, I think it would be beneficial to use property-based testing in the test suites. Another thing left to do is to implement some validation options and some helpers for response handling, especially focusing on more readable error messages. The project might also benefit from adding logging or, even better, telemetry events.

    How do you find remote vs in-person hackathons?

    In a remote hackathon, usually all teams are accepted. That is not the case in most hackathons, as they happen in person and organisers have to limit the number of participants due to limited room size. Moreover, you work from the comfort of your desk, using the equipment setup you have built and improved over the years. Last but not least, you can sleep in your own bed, which is not the case for in-person hackathons. On the other hand, if a hackathon happens in person, there are some extra attractions available, like free pizza and coffee, and games to play in the breaks, like foosball or board games.

    Any advice for participants for next year?

    SpawnFest, like any other hackathon, is about two things: a great idea, and bringing that idea to life, or at least a prototype of it. I highly recommend you start thinking about ideas as soon as possible. Do not go too big: it is better to pick a slightly smaller project and deliver it within the limited time than to attempt a huge idea that you won’t have time to finish. Read the rules carefully, check what the prizes are awarded for, and focus on those aspects of your project.

    Last but not least, do not forget that it is about having fun! See you next year at SpawnFest!

    The post Aleksander Lisiecki’s prize-winning eArangoDB at SpawnFest 2021 appeared first on Erlang Solutions.

      Source: www.erlang-solutions.com/blog/aleksander-lisieckis-prize-winning-earangodb-at-spawnfest-2021/

      Isode: Successfully Managing HF Radio Networks

      admin • news.movim.eu / PlanetJabber • 13 December, 2021 • 2 minutes

    With the potential for new technologies to cause interference to traditional communications networks, and even space itself at risk of becoming weaponised, it is important to make sure that you always have a backup plan for your communications ready and waiting.

    Should the worst happen and your primary network, typically SatCom, go down, you need to ensure that you can still communicate with your forces wherever they are, and that communication needs to be fast, simple and reliable. It also needs to be suitable for operation within degraded and denied environments.

    That’s where HF Radio has a distinct advantage, utilising the ionosphere itself to relay communications and long-range radio signals. If you’re interested, you can read more about the benefits of communications over HF Radio and how Isode is developing HF technology here.

    When implementing new technologies, one of the challenges you can always expect to face is how you manage them and control how the important systems connect with one another. For HF Radio, that has always been a factor limiting its deployment: how do you ensure that mobile units remain connected to your HF network as they move from one location to the next?

    This can now be done with our latest HF Radio enhancement product, Icon Topo.

    Icon Topo is a state-of-the-art, web-based management system for HF Radio networks. It allows an operator to monitor and control the location of Mobile Units such as ships or aircraft, ensuring that as they move from one HF Access Point to another they remain connected to your communications network.

    The Icon Topo system allows you to manage your Mobile Units across multiple HF networks and plan a connection route for them as they move, all from an easy forms-based interface, removing any interruptions to service or application downtime as the MU moves along its intended path.

    You can read more about Icon Topo here.

    Alongside our HF management system, we have also recently developed our Red/Black solution to manage encrypted data over HF networks.

    Red/Black is a Web-based server that can provide control and monitoring of different devices and servers. This is intended to complement, not replace, primary device management tools. Red/Black servers can operate in a pair, to monitor and control devices across a secure boundary.

    Our Red/Black servers are designed to support HF radio systems through the display and management of communication chains, as seen below. They allow separation of, and passage for encrypted information across restricted networks from a ‘high’ side to a ‘low’ side.

    You can read more about our Red/Black solution here.

    The above two products give you full oversight of your HF networks, so you can be confident that you retain complete control over what gets to connect to your HF network and exactly how it does so.

    If you’d like more information on our HF products, or are interested in a product demo, get in touch with us at sales@isode.com; alternatively, you can fill out a contact form on our website and one of our team will get back to you.

      Source: www.isode.com/company/wordpress/successfully-managing-hf-radio-networks/

      Ignite Realtime Blog: Openfire 4.6.5 released

      guus • news.movim.eu / PlanetJabber • 10 December, 2021 • 1 minute

    Although we’re preparing for the Openfire 4.7.0 release, the recently discovered vulnerability in the Apache Log4j utility prompted us to push an immediate release of Openfire to address that issue. This release, Openfire 4.6.5, is available now.

    We urge you to update as soon as possible. If that’s not feasible, then we advise you to apply the documented workaround (adding the following argument to the start script for Openfire: -Dlog4j2.formatMsgNoLookups=true) and/or look into applying other mitigating actions.
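    As a sketch of applying that workaround (the options file path here is a hypothetical stand-in; where Openfire picks up JVM options depends on your platform and install method):

```shell
# Append the Log4j mitigation flag to the JVM options used to start Openfire.
# OPTS_FILE is a placeholder; use your installation's actual start script/config.
OPTS_FILE=./openfire-jvm.opts
echo 'JAVA_OPTS="$JAVA_OPTS -Dlog4j2.formatMsgNoLookups=true"' >> "$OPTS_FILE"
grep 'formatMsgNoLookups=true' "$OPTS_FILE"
```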

    The process of upgrading is outlined in the Openfire upgrade guide. Please note that, if desired, a number of professional partners are available that can provide commercial support.

    You can find Openfire release artifacts on the download page. These are the applicable sha256sums:

    926e852abfe67970a4a64b7a58d16adbd3ae65269921288909d2a353457ac350  openfire-4.6.5-1.i686.rpm
    5041fd66f5cf4642d25012642d827ad80c40057ba66f79aad04918edc94085ec  openfire-4.6.5-1.noarch.rpm
    f1d7ed2d5d5bbd12c3af896329df48f97b73ae5435980b524248760a246552f6  openfire-4.6.5-1.x86_64.rpm
    da113f737514457209194024f57a90f52f499fefbf0a9eb3e3d888b24f214ece  openfire_4.6.5_all.deb
    c16e13348767b489aef905d912eafca9650428af16a729b63a208fdbe97c9783  openfire_4_6_5_bundledJRE.exe
    e03cd4e5b2a76b203540580ca2714541ee86b1ef3b677d5c312d099567674f2d  openfire_4_6_5_bundledJRE_x64.exe
    28d628db9cce3cfb7acfa19977235b569729bcebff65a511dd882a4c1b554d6c  openfire_4_6_5.dmg
    cb1c4a5f888cbeeb6bbfd29460c8095941cecddd8c5f03b3d3f1ca412a995e81  openfire_4_6_5.exe
    fcc3d45e9b80536b463fedbb959ff1e4f5fc5cef180502f6810c0f025a01f4ac  openfire_4_6_5.tar.gz
    fe216d1eecb23050ebbf28f7afa8930ca167d99516051c3f5e03d545e183cf4c  openfire_4_6_5_x64.exe
    fd0f853b249a8853da51b056f1e6b31d04c49763394321dbd60abb8cef9df940  openfire_4_6_5.zip

    Apart from addressing the Log4j issue, this release includes a small number of other modifications, as documented in the changelog.

    We’re always happy to hear about your experiences, good or bad! Please consider dropping a note in the community forums or hanging out with us in our web support groupchat.

    For other release announcements and news, follow us on Twitter.


      ProcessOne: ejabberd 21.12

      Jérôme Sautret • news.movim.eu / PlanetJabber • 9 December, 2021 • 4 minutes

    This new ejabberd 21.12 release comes after five months of work and contains more than one hundred changes, many of them major improvements or new features, plus several bug fixes.

    ejabberd 21.12 released

    When upgrading from previous versions, please note: there are changes in mod_register_web behaviour and in the PostgreSQL database; please check whether they affect your installation.

    A more detailed explanation of those topics:

    Optimized MucSub and Multicast processing

    More efficient processing of MucSub and Multicast (XEP-0033) messages addressed to a large number of addresses.

    Support MUC Hats

    MUC Hats (XEP-0317) defines a more extensible model for roles and affiliations in Multi-User Chat rooms. The protocol was deferred, but it is supported by several clients and servers. ejabberd’s implementation supports both the XEP schema and the Converse.js/Prosody custom schema.

    New mod_conversejs

    This module serves a simple page that allows the Converse.js XMPP web client to connect to ejabberd. It can use ejabberd’s WebSockets or BOSH (HTTP-Bind).

    By default this module points to the public online client available at converse.js. Alternatively, you can download the client and host it locally with a configuration like this:

    hosts:
      - localhost

    listen:
      -
        port: 5280
        ip: "::"
        module: ejabberd_http
        tls: false
        request_handlers:
          /websocket: ejabberd_http_ws
          /conversejs: mod_conversejs
          /converse_files: mod_http_fileserver

    modules:
      mod_conversejs:
        websocket_url: "ws://localhost:5280/websocket"
        conversejs_script: "http://localhost:5280/converse_files/converse.min.js"
        conversejs_css: "http://localhost:5280/converse_files/converse.min.css"
      mod_http_fileserver:
        docroot: "/home/ejabberd/conversejs-9.0.0/package/dist"
        accesslog: "/var/log/ejabberd/fileserver-access.log"

    Many PubSub improvements

    Add delete_old_pubsub_items command.
    Add a command for keeping only the specified number of items on each node and removing all older items. This might be especially useful if nodes are configured to have no ‘max_items’ limit.

    Add delete_expired_pubsub_items command
    Support XEP-0060’s pubsub#item_expire feature by adding a command for deleting expired PubSub items.

    Fix get_max_items_node/1 specification
    Make it explicit that the get_max_items_node/1 function returns ?MAXITEMS if the ‘max_items_node’ option isn’t specified. The function didn’t actually fall back to ‘undefined’ (but to the ‘max_items_node’ default; i.e., ?MAXITEMS) anyway. This change just clarifies the behavior and adjusts the function specification accordingly.

    Improvements in the ejabberd Documentation web

    Added many cross-links between modules, options, and specific sections.

    Added a new API Tags page similar to “ejabberdctl help tags”.

    Improved the API Reference page, so commands show the tags and the definer module.

    Configuration changes

    mod_register_web is now affected by the restrictions that you configure in mod_register ( #3688 ).

    mod_register gets a new option, allow_modules , to restrict what modules can register new accounts. This is useful if you want to allow only registration using mod_register_web , for example.

    PostgreSQL changes

    Added to PgSQL’s new schema the missing SQL migration for the push_session table ( #3656 ).

    Fixed in PgSQL’s new schema the vcard_search definition ( #3695 ).
    How to update an existing database:

    ALTER TABLE vcard_search DROP CONSTRAINT vcard_search_pkey;
    ALTER TABLE vcard_search ADD PRIMARY KEY (server_host, lusername);

    Summary of changes:

    Commands

    • create_room_with_opts: Fixed when using SQL storage ( #3700 )

    • change_room_option: Add missing fields from config inside mod_muc_admin:change_options

    • piefxis: Fixed arguments of all commands

    Modules

    • mod_caps: Don’t forget caps on XEP-0198 resumption

    • mod_conversejs: New module to serve a simple page for Converse.js

    • mod_http_upload_quota: Avoid ‘max_days’ race

    • mod_muc: Support MUC hats (XEP-0317, conversejs/prosody compatible)

    • mod_muc: Optimize MucSub processing

    • mod_muc: Fix exception in mucsub {un}subscription events multicast handler

    • mod_multicast: Improve and optimize multicast routing code

    • mod_offline: Allow storing non-composing x:events in offline

    • mod_ping: Send ping from server, not bare user JID ( #3658 )

    • mod_push: Fix handling of MUC/Sub messages ( #3651 )

    • mod_register: New allow_modules option to restrict registration modules

    • mod_register_web: Handle unknown host gracefully

    • mod_register_web: Use mod_register configured restrictions ( #3688 )

    PubSub

    • Add delete_expired_pubsub_items command

    • Add delete_old_pubsub_items command

    • Optimize publishing on large nodes (SQL)

    • Support unlimited number of items

    • Support ‘max_items=max’ node configuration ( #3666 )

    • Bump default value for ‘max_items’ limit from 10 to 1000 ( #3652 )

    • Use configured ‘max_items’ by default

    • node_flat: Avoid catch-all clauses for RSM

    • node_flat_sql: Avoid catch-all clauses for RSM

    SQL

    • Use INSERT … ON CONFLICT in SQL_UPSERT for PostgreSQL >= 9.5

    • mod_mam export: assign MUC entries to the MUC service ( #3680 )

    • MySQL: Fix typo when creating index ( #3654 )

    • PgSQL: Add SASL auth support, PostgreSQL 14 ( #3691 )

    • PgSQL: Add missing SQL migration for table push_session ( #3656 )

    • PgSQL: Fix vcard_search definition in pgsql new schema ( #3695 )

    Other

    • captcha-ng.sh: the "sort -R" command is not POSIX; added "shuf" and "cat" as fallbacks ( #3660 )

    • Make s2s connection table cleanup more robust

    • Update export/import of scram password to XEP-0227 1.1 ( #3676 )

    • Update Jose to 1.11.1 (the last correctly versioned release in hex.pm)
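    The captcha-ng.sh change above is a portability fix: "sort -R" is a GNU extension rather than POSIX. A hypothetical sketch of such a fallback chain (not the actual script code): try shuf first, then "sort -R", and finally plain cat as a last resort.

```shell
# Pick the first available way to shuffle lines: shuf, then "sort -R",
# then plain cat (no shuffling) as a last resort.
random_sort() {
  if command -v shuf >/dev/null 2>&1; then
    shuf
  elif sort -R </dev/null >/dev/null 2>&1; then
    sort -R
  else
    cat
  fi
}
printf 'a\nb\nc\n' | random_sort   # same three lines, possibly reordered
```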

    ejabberd 21.12 download & feedback

    As usual, the release is tagged in the Git source code repository on Github .

    The source package and binary installers are available at ejabberd XMPP & MQTT server download page .

    If you suspect that you’ve found a bug, please search for or file a bug report on Github.

    The post ejabberd 21.12 first appeared on ProcessOne.
      Erlang Solutions: Blockchain Tech Deep Dive 2/4 | Myths vs Realities

      Erlang Admin • news.movim.eu / PlanetJabber • 8 December, 2021 • 17 minutes

    This is the second part of our ‘Making Sense of Blockchain’ blog post series – you can read part 1 on ‘6 Blockchain Principles’ here.

    Join our FinTech mailing list for more great content and industry and events news, sign up here >>

    Theme II

    Blockchain: Myths vs Reality *

    * originally published 2018 by Dominic Perini

    With so much hype surrounding blockchain, we separate the reality from the myths to ensure delivery of the ROI and competitive advantage that you need.

    It’s not our aim here to discuss the data structure of blockchain itself, issues like those of transactions per second (TPS) or questions such as ‘what’s the best Merkle tree solution to adopt?’. Instead, we shall examine the state of maturity of blockchain technology and its alignment with the core principles that underpin a distributed ledger ecosystem.

    7 Founding principles of blockchain

    Blockchain technology aims to embrace the following high-level principles:

    • Immutability
    • Decentralisation
    • ‘Workable’ consensus
    • Distribution and resilience
    • Transactional automation (including ‘smart contracts’)
    • Transparency and Trust
    • A link to the external world

    Many of the themes which Dominic talks about in Myths vs Reality are those being debated across academia and industry at the moment. For instance, we recently spoke about anonymity with Keith Bear, Fellow at the Cambridge University Centre for Alternative Finance (CCAF), who agreed that the context of use cases is key: ‘it all depends on the type of networks which we are looking at. For example, with private permissioned blockchains identity is important to financial institutions because of AML/KYC requirements.’ In addition, Arzu Toren (a global banker with over 20 years’ experience and a blockchain researcher) told us, quite rightly: ‘enterprises want to ensure that information isn’t revealed to competitors through analysis of their information’, another key consideration.

    Decentralisation is another segment of blockchain which is hitting the headlines. Here, CCAF research has discovered that ‘organisations are using elements of blockchain technology but parts of the architecture are being centralised and so not fully embracing the fully distributed state found with Bitcoin and Ethereum’.

    1. Immutability of history

    In an ideal world it would be desirable to preserve an accurate historical trace of events, and to make sure this trace does not deteriorate over time, whether through natural events, human error or the intervention of fraudulent actors. Artefacts produced in the analogue world face alterations over time, while in the digital world the quantized/binary nature of stored information provides the opportunity for continuous corrections to prevent deterioration that might occur over time.

    An immutable blockchain aims to retain a digital history that cannot be altered over time. This is particularly useful when it comes to assessing the ownership or authenticity of an asset, or to validating one or more transactions.

    We should note that, on top of the inherent immutability of a well-designed and implemented blockchain, hashing algorithms provide a means to encode the information that gets written into the history, so that a trace/transaction can only be verified by actors possessing sufficient data to compute the one-way cascaded encoding/encryption. This is typically implemented on top of Merkle trees, where hashes of concatenated hashes are computed.
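    As a toy illustration of the ‘hashes of concatenated hashes’ idea, using sha256 and just two transactions (real Merkle trees also handle odd leaf counts and deeper levels):

```shell
# Hash each leaf, then hash the concatenation of the leaf hashes to get the root.
h1=$(printf 'tx1' | sha256sum | cut -d' ' -f1)
h2=$(printf 'tx2' | sha256sum | cut -d' ' -f1)
root=$(printf '%s%s' "$h1" "$h2" | sha256sum | cut -d' ' -f1)
echo "merkle root: $root"
```

    Changing either transaction changes its leaf hash and therefore the root, which is what makes tampering with the recorded history detectable.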


    Legitimate questions can be raised about the guarantees for indefinitely storing an immutable data structure:

    • If this is an indefinitely growing history, where can it be stored once it grows beyond the capacity of the ledgers?
    • As the history size grows (and/or the computing power needed to validate further transactions increases) this reduces the number of potential participants in the ecosystem, leading to a de facto loss of decentralisation. At what point does this concentration of ‘power’ create concerns?
    • How does verification performance deteriorate as the history grows?
    • How does it deteriorate when a lot of data gets written on it concurrently by users?
    • How long is the segment of data that you replicate on each ledger node?
    • How much network traffic would such replication generate?
    • How much history is needed to be able to compute a new transaction?
    • What compromises need to be made on linearisation of the history, replication of the information, capacity to recover from anomalies and TPS throughput?


    Further to the above questions, how many replicas converging to a specific history (i.e. consensus) are needed for it to carry on existing? And in particular:

    • Can a fragmented network carry on writing to their known history?
    • Is an approach that aims to ‘heal’ any discrepancies in the immutable history of transactions by rewarding the longest fork fair and efficient?
    • Are the deterrents strong enough to prevent a group of ledgers forming their own fork that eventually reaches wider adoption?


    Furthermore, the requirement to comply with the General Data Protection Regulation (GDPR) in Europe, including ‘the right to be forgotten’, introduces new challenges for keeping permanent and immutable traces indefinitely. This is important because fines for breaches of GDPR are potentially very severe. The solutions introduced so far effectively aim at anonymising the information that enters the immutable on-chain storage process, while sensitive information is stored separately in support databases where it can be deleted if required. None of these approaches has yet been tested by the courts.

    The challenging aspect here is to decide upfront what is considered sensitive and what can safely be placed on the immutable history. A wrong choice can backfire at a later stage in the event that any involved actor manages to extract or trace sensitive information through the immutable history.

    Immutability represents one of the fundamental principles that motivate the research into blockchain technology, both private and public. The solutions explored so far have managed to provide a satisfactory response to market needs via the introduction of history linearisation techniques, one-way hashing encryptions, Merkle trees and off-chain storage, although the linearity of the immutable history comes at a cost (notably transaction volume).

    2. Decentralisation of control

    One of the reactions following the 2008 global financial crisis was a backlash against over-centralisation. This led to the exploration of various decentralised mechanisms, and the proposition that individuals should enjoy the freedom to be independent of a central authority gained in popularity. Self-determination, democratic fairness and heterogeneity as a form of wealth are among the dominant values broadly recognised in Western (and, increasingly, non-Western) society. These values added weight to the view that introducing decentralisation into a system is positive.

    With full decentralisation, there is no central authority to resolve potential transactional issues for us. Traditional, centralised systems have well developed anti-fraud and asset recovery mechanisms which people have become used to. Using new, decentralised technology places a far greater responsibility on the user if they are to receive all of the benefits of the technology, forcing them to take additional precautions when it comes to handling and storing their digital assets.

    There’s no point having an ultra-secure blockchain if one then hands over one’s wallet private key to an intermediary whose security is lax: it’s like having the most secure safe in the world then writing the combination on a whiteboard in the same room.

    Is the increased level of personal responsibility that goes with the proper implementation of a secure blockchain a price that users are willing to pay? Or, will they trade off some security in exchange for ease of use (and, by definition, more centralisation)?

CCAF’s research has discovered that ‘organisations are using elements of blockchain technology but parts of the architecture are being centralised and so not fully embracing the fully distributed state found with Bitcoin and Ethereum.’

    3. Consensus

The consistent push towards decentralised forms of control and responsibility has brought to light the fundamental requirement to validate transactions without a central authority, known as the ‘consensus’ problem. Several approaches have grown out of the blockchain industry, some competing and some complementary.

    There has also been a significant focus on the concept of governance within a blockchain ecosystem. This concerns the need to regulate the rates at which new blocks are added to the chain and the associated rewards for miners (in the case of blockchains using proof of work (POW) consensus methodologies). More generally, it is important to create incentives and deterrent mechanisms whereby interested actors contribute positively to the healthy continuation of chain growth.

    Besides serving as an economic deterrent against denial of service and spam attacks, POW approaches are amongst the first attempts to automatically work out, via the use of computational power, which ledgers/actors have the authority to create/mine new blocks. Other similar approaches (proof of space, proof of bandwidth etc) followed, however, they all suffer from exposure to deviations from the intended fair distribution of control. Wealthy participants can, in fact, exploit these approaches to gain an advantage via purchasing high performance (CPU / memory / network bandwidth) dedicated hardware in large quantities and operating it in jurisdictions where electricity is relatively cheap. This results in overtaking the competition to obtain the reward, and the authority to mine new blocks, which has the inherent effect of centralising the control. Also, the huge energy consumption that comes with the inefficient nature of the competitive race to mine new blocks in POW consensus mechanisms has raised concerns about its environmental impact and economic sustainability.
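The POW principle described above can be sketched in a few lines of Python: a miner brute-forces a nonce whose hash meets a difficulty target, while anyone can verify the result with a single hash. This is a toy illustration only; real networks use far higher difficulties, different header formats and dynamic retargeting.

```python
import hashlib


def mine(block_data: bytes, difficulty: int) -> int:
    """Find a nonce such that sha256(block_data + nonce) starts with
    `difficulty` zero hex digits. The only known strategy is brute
    force, which is precisely what makes POW costly."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1


nonce = mine(b"block payload", difficulty=4)
# Verification is cheap (one hash), while finding the nonce was expensive:
digest = hashlib.sha256(b"block payload" + str(nonce).encode()).hexdigest()
assert digest.startswith("0000")
```

The asymmetry between the cost of mining and the cost of verifying is what lets the network cheaply reject blocks from actors who have not done the work.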

    Proof of Stake (POS) and Proof of Importance (POI) are among the ideas introduced to drive consensus via the use of more social parameters, rather than computing resources. These two approaches link the authority to the accumulated digital asset/currency wealth or the measured productivity of the involved participants. Implementing POS and POI mechanisms, whilst guarding against the concentration of power/wealth, poses not insubstantial challenges for their architects and developers.
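A minimal sketch of the POS selection principle follows, assuming validators are chosen with probability proportional to their stake; the names, stake figures and seeded randomness below are invented for illustration (real POS systems must derive bias-resistant randomness from consensus itself).

```python
import random


def select_validator(stakes: dict[str, float], rng: random.Random) -> str:
    """Pick the next block producer with probability proportional to
    stake. A seeded PRNG stands in for consensus-derived randomness."""
    names = list(stakes)
    weights = [stakes[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]


stakes = {"alice": 60.0, "bob": 30.0, "carol": 10.0}
rng = random.Random(42)
picks = [select_validator(stakes, rng) for _ in range(10_000)]
# alice holds 60% of the stake, so she wins roughly 60% of the slots,
# which illustrates the concentration-of-wealth concern noted above.
print(picks.count("alice") / len(picks))
```

The simulation makes the trade-off visible: energy use drops to almost nothing, but block-producing authority tracks wealth unless the protocol adds countervailing mechanisms.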

    More recently, semi-automatic approaches, driven by a human-curated group of ledgers, are putting in place solutions to overcome the limitations and arguable fairness of the above strategies. The Delegated Proof of Stake (DPOS) and Proof of Authority (POA) methods promise higher throughput and lower energy consumption, while the human element can ensure a more adaptive and flexible response to potential deviations caused by malicious actors attempting to exploit a vulnerability in the system.

    4. Distribution and resilience

Apart from decentralising authority, control and governance, blockchain solutions typically embrace a distributed peer to peer (P2P) design paradigm. This preference is motivated by the inherent resilience and flexibility that these types of networks have introduced and demonstrated, particularly in the context of file and data sharing.

A centralised network, typical of mainframes and centralised services, is clearly exposed to a ‘single point of failure’ vulnerability as the operations are always routed towards a central node. In the event that the central node breaks down or is congested, all the other nodes will be affected by disruptions.

    Decentralised and distributed networks attempt to reduce the detrimental effects that issues occurring on a node might trigger on other nodes. In a decentralised network, the failure of a node can still affect several neighbouring nodes that rely on it to carry out their operations. In a distributed network the idea is that the failure of a single node should not impact significantly any other node. In fact, even when one preferential/optimal route in the network becomes congested or breaks down entirely, a message can reach the destination via an alternative route. This greatly increases the chance of keeping a service available in the event of failure or malicious attacks such as a denial of service (DOS) attack.

    Blockchain networks where a distributed topology is combined with a high redundancy of ledgers backing a history have occasionally been declared ‘unhackable’ by enthusiasts or, as some more prudent debaters say, ‘difficult to hack’. There is truth in this, especially when it comes to very large networks such as that of Bitcoin. In such a highly distributed network, the resources needed to generate a significant disruption are very high, which not only delivers on the resilience requirement but also works as a deterrent against malicious attacks (principally because the cost of conducting a successful malicious attack becomes prohibitive).

Although a distributed topology can provide an effective response to failures or traffic spikes, one needs to be aware that delivering resilience against prolonged over-capacity demands or malicious attacks requires adequate adaptation mechanisms. While the Bitcoin network is well positioned, as it currently benefits from a high capacity condition (due to the historical high incentive to purchase hardware by third-party miners), this is not the case for other emerging networks as they grow in popularity. This is where novel instruments, capable of delivering preemptive adaptation combined with back-pressure throttling applied at the P2P level, can be of great value.

    Distributed systems are not new and, whilst they provide highly robust solutions to many enterprise and governmental problems, they are subject to the laws of physics and require their architects to consider the trade-offs that need to be made in their design and implementation (e.g. consistency vs availability).

    5. Automation

In order to sustain a coherent, fair and consistent blockchain and surrounding ecosystem, a high degree of automation is required. Existing areas with a high demand for automation include those common to most distributed systems: deployment, elastic topologies, monitoring, recovery from anomalies, testing, continuous integration and continuous delivery. In the context of blockchains, these represent well-established IT engineering practices. Additionally, there is a creative R&D effort to automate the interactions required to handle assets, computational resources and users across a range of new problem spaces (e.g. logistics, digital asset creation and trading).

Social and commercial interactions have seen a significant shift towards scripted transactional operations. This is where smart contracts and constrained virtual machine (VM) interpreters have emerged, an effort pioneered by the Ethereum project.

The ability to define how an asset exchange operates, under which conditions and following which triggers, has attracted many blockchain enthusiasts. Some of the most common applications of smart contracts involve lotteries, the trade of digital assets and derivative trading. While there is clearly exciting potential unleashed by the introduction of smart contracts, it is also true that this is still an area with a high entry barrier. Only skilled developers who are willing to invest time in learning Domain Specific Languages (DSLs) have access to the actual creation and modification of these contracts.

The challenge is to respond to safety and security concerns when smart contracts are applied to edge-case scenarios that deviate from the ‘happy path’. If badly-designed contracts cannot properly roll back or undo a miscarried transaction, their execution might lead to assets being lost or erroneously handed over to unintended recipients.
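The happy-path concern can be made concrete with a toy escrow-style contract in Python; the contract, parties and amounts below are all hypothetical. The point it sketches is that state changes must be applied all-or-nothing, so a failure on the unhappy path cannot leave assets stranded mid-transfer:

```python
class EscrowContract:
    """Toy escrow: the buyer's funds move to the seller only on
    delivery. Validating every precondition before mutating state
    stands in for a proper transaction rollback."""

    def __init__(self, balances: dict[str, int]):
        self.balances = balances

    def settle(self, buyer: str, seller: str, amount: int, delivered: bool):
        # Check ALL preconditions before touching any state, so a
        # failed settlement leaves the balances exactly as they were.
        if not delivered:
            raise ValueError("goods not delivered: nothing to settle")
        if self.balances.get(buyer, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[buyer] -= amount
        self.balances[seller] = self.balances.get(seller, 0) + amount


contract = EscrowContract({"buyer": 100, "seller": 0})
try:
    contract.settle("buyer", "seller", 150, delivered=True)  # unhappy path
except ValueError:
    pass
assert contract.balances == {"buyer": 100, "seller": 0}  # state untouched
contract.settle("buyer", "seller", 100, delivered=True)   # happy path
assert contract.balances == {"buyer": 0, "seller": 100}
```

A real smart contract runs inside a VM that enforces this atomicity; the sketch simply shows why a contract that mutates state before its checks complete is the dangerous case.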

Another area in high need of automation is governance. Any blockchain ecosystem of users and computing resources requires periodic reconfiguration of its parameters to carry on operating coherently and consensually. This results in a complex exercise of tuning incentives and deterrents to guarantee the fulfilment of ambitious collaborative and decentralised goals. The newly emerging field of ‘blockchain economics’ (combining economics, game theory, social science and other disciplines) remains in its infancy.

Clearly, the removal of a central ruling authority produces a vacuum that needs to be filled by an adequate decision-making body, which is typically supplied with automation that maintains a combination of static and dynamic configuration settings. Those consensus solutions referred to earlier, which use computational resources or stakeable assets to assign the authority not only to produce blocks but also to steer the variable part of governance, have succeeded in filling the decision-making gap in a fair and automated way. Subsequently, however, the exploitation of flaws in the static element of governance has hindered the success of these models. This has contributed to the rise in popularity of curated approaches such as POA or DPOS, which not only bring back centralised control but also reduce the automation of governance.

    We expect this to be one of the major areas where blockchain has to evolve in order to succeed in getting widespread market adoption.

    6. Transparency and Trust

In order to produce the desired audience engagement for blockchain and eventual mass adoption and success, consensus and governance mechanisms need to operate transparently. Users need to know who has access to what data so that they can decide what can be stored and possibly shared on-chain. These are the contractual terms by which users agree to share their data. As previously discussed, users might be required to exercise the right for their data to be deleted, which typically is a feature delivered via auxiliary, ‘off-chain’ databases. In contrast, only hashed information, effectively devoid of its meaning, is preserved permanently on-chain.

    Given the immutable nature of the chain history, it is important to decide upfront what data should be permanently written on-chain and what gets written off-chain. The users should be made aware of what data gets stored on-chain and with whom it could potentially be shared. Changing access to on-chain data or deleting it goes against the fundamentals of immutability and therefore is almost impossible. Getting that decision wrong at the outset can significantly affect the cost and usability (and therefore likely adoption) of the particular blockchain in question.
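The on-chain/off-chain split described above can be sketched in Python; the dictionaries below stand in for a real database and ledger, and the helper names are invented. Personal data lives in a mutable store and can be deleted on request, while only its hash is written to the append-only history:

```python
import hashlib

off_chain: dict[str, bytes] = {}  # mutable store: data can be deleted
on_chain: list[str] = []          # append-only ledger of hashes


def record(data: bytes) -> str:
    """Keep the data off-chain; anchor only its hash on-chain."""
    digest = hashlib.sha256(data).hexdigest()
    off_chain[digest] = data
    on_chain.append(digest)
    return digest


def forget(digest: str) -> None:
    """Honour a deletion request: the data goes, the hash stays."""
    off_chain.pop(digest, None)


ref = record(b"alice's personal details")
forget(ref)
assert ref in on_chain       # the immutable anchor remains...
assert ref not in off_chain  # ...but the data itself is gone
```

Because the hash is one-way, the surviving on-chain record is effectively devoid of meaning once the off-chain copy is deleted, which is what makes this pattern compatible with deletion rights.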

    Besides transparency, trust is another critical feature that users legitimately seek. Trust has to go beyond the scope of the people involved as systems need to be trusted as well. Every static element, such as an encryption algorithm, the dependency on a library, or a fixed configuration, is potentially exposed to vulnerabilities.

    7. Link to the external world

    The attractive features that blockchain has brought to the internet market would be limited to handling digital assets unless there was a way to link information to the real world. It is safe to say that there would be less interest if we were to accept that a blockchain can only operate under the restrictive boundaries of the digital world, without connecting to the analog real world in which we live.

Technologies used to overcome these limitations include cyber-physical devices such as sensors for input and robotic actuators for output and, in most circumstances, people and organisations. As we read through most blockchain white papers we occasionally come across the notion of the Oracle, which, in short, is a way to name an input coming from a trusted external source that could potentially trigger/activate a sequence of transactions in a Smart Contract, or which can otherwise be used to validate some information that cannot be validated within the blockchain itself.

Bitcoin and Ethereum, still the two dominant projects in the blockchain space, are viewed by many investors as an opportunity to diversify a portfolio or speculate on the value of their respective cryptocurrency. The same applies to a wide range of other cryptocurrencies with the exception of fiat-pegged currencies, most notably Tether, where the value is effectively bound to the US dollar. Conversions from one cryptocurrency to another and to/from fiat currencies are normally operated by exchanges on behalf of an investor. These are again peripheral services that serve as a link to the external physical world.

    Besides oracles and cyber-physical links, interest is emerging in linking smart contracts together to deliver a comprehensive solution. Contracts could indeed operate in a cross-chain scenario to offer interoperability among a variety of digital assets and protocols. Although attempts to combine different protocols and approaches have emerged, this is still an area where further R&D is necessary in order to provide enough instruments and guarantees to developers and entrepreneurs. The challenge is to deliver cross-chain functionalities without the support of a central governing agency/body.

    Where next?

The challenges the market is facing can be interpreted as an opportunity for ambitious entrepreneurs to step in and resolve the pending fundamental issues of consensus, governance, automation and linking to the real world. It is vital that the people you work with have the competence, the technology and a good understanding of the issues to be resolved, in order to successfully drive this research forward.

    As discussed in the introduction, this article focuses on assessing the state of advancement of the technology in response to the motivating principles of blockchain. That said, Erlang Solutions and those with whom we work are actively working on solutions that we will continue to share with our network and partners moving forward.

For businesses of any size in any industry, we’re ready to investigate, build and deploy your blockchain-based project on time and to budget.

Get in touch with your blockchain project queries at general@erlang-solutions.com or via the contact us form.

    The post Blockchain Tech Deep Dive 2/4 | Myths vs Realities appeared first on Erlang Solutions .


      www.erlang-solutions.com/blog/making-sense-of-blockchain-myths-v-reality/


      Ignite Realtime Blog: Openfire 4.7.0 beta & Hazelcast plugin 2.6.0 releases!

      guus • news.movim.eu / PlanetJabber • 6 December, 2021 • 2 minutes

    After a long few months full of hard work, we are happy to tell you that we are close to a 4.7.0 release for Openfire!

This next version of our real time communications server has received a lot of improvements and bug fixes.

    A key area of the code that has received updates is the Multi-User Chat (MUC) implementation, with the objective of making that more stable in clustered environments. As always, but especially so if you’re making use of this functionality, we’d very much appreciate your feedback!

    Together with this beta release, we’ve also released a new version of the Hazelcast plugin, which is the plugin that adds Clustering support to Openfire. This release, version 2.6.0, goes hand-in-hand with the Openfire 4.7.0 release: Version 2.6.0 of the Hazelcast plugin will not work for any version of Openfire prior to the 4.7.0 (including the beta) release, while Openfire 4.7.0 (beta) and later will require at least version 2.6.0 of the Hazelcast plugin to enable clustering.

    We value your feedback!

All feedback that we receive based on this beta will help us to improve the non-beta release. Even if you test this beta and find no issues, that’d be helpful for us to know! Please leave your feedback as a comment to this blogpost, or create a posting in our community’s forum! If you want to chat with one of the developers, feel free to join the open_chat chatroom. We provide a web client with direct access, or you can join using your XMPP client of choice at open_chat@conference.igniterealtime.org .

The beta release of Openfire 4.7.0 is available at our downloads page for beta releases. You can download version 2.6.0 of the Hazelcast plugin from that plugin’s archive page.

    The sha256sum values for the Openfire artifacts are listed below:

    6cf4cc7a0a834b23cdea0ade198c724ee4f0be5560f1d16321414497c4f5903b  openfire-4.7.0-0.2.beta.noarch.rpm
    ceee17f76d2645f2fcc29f5a84efbcc7fc376f7e01ef05ba28b65e850fff96f8  openfire_4.7.0.beta_all.deb
    71e1a373420e048ee4c5921bcb54b4efa2357aaf9e57501284f20f0edeaa0428  openfire_4_7_0_beta.dmg
    7d1f248e03c60dbb071eeee6abf671932c47e154fae888bee9293f875323fdd7  openfire_4_7_0_beta.exe
    44bebac3da5f5d5d00270dcc303b34b6efce4c7c2bd1bfd4520827f6e8a58549  openfire_4_7_0_beta.tar.gz
    72f1cd7dc2015ec70902b3b18a5248651a6e3091ba12ebf8d2767834b057c17f  openfire_4_7_0_beta_x64.exe
    76e4cf8da32b062ed719812792849bcfc475a22208e721a5f1b7dcddda45a4a4  openfire_4_7_0_beta.zip
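A downloaded artifact can be checked against the published values before installation. A minimal Python sketch follows (the filename and expected value refer to the tar.gz artifact above; adjust for whichever file you downloaded):

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Stream the file in chunks so large artifacts need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


expected = "44bebac3da5f5d5d00270dcc303b34b6efce4c7c2bd1bfd4520827f6e8a58549"
# After downloading the artifact, uncomment to verify:
# assert sha256_of_file("openfire_4_7_0_beta.tar.gz") == expected
```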

    Please note that starting with this release, a Java Runtime Environment is no longer distributed with Openfire.

    Thank you for using Openfire!

    For other release announcements and news follow us on Twitter
