

      Erlang Solutions: How to Manage Your RabbitMQ Logs: Tips and Best Practices

      news.movim.eu / PlanetJabber • 4 July, 2023 • 12 minutes

    RabbitMQ is an open-source message broker software that allows you to build distributed systems and implement message-based architectures. It’s a reliable and scalable messaging system that enables efficient communication between different parts of your application. However, managing RabbitMQ logs can be a challenging task, especially when RabbitMQ is deployed on a large cluster. In this article, we’ll take you through some basic know-how when it comes to logging in RabbitMQ, including how to configure where RabbitMQ stores its logs, how to change the log format, how to rotate log files, and how to monitor and troubleshoot RabbitMQ logs.


    Understanding RabbitMQ Logs

    RabbitMQ logs are a form of recorded information generated by the RabbitMQ message broker system. These log messages provide a lot of information on events taking place in the RabbitMQ cluster. For example, a log message is generated whenever a new message is received in a queue, when that message is replicated in another queue, if a node goes down or comes up, or if any error is encountered.

    The log entries typically include timestamps, severity levels, and descriptive messages about the events that occurred. These logs can help administrators and developers diagnose problems, identify bottlenecks, analyse system performance, track message flows, and detect any anomalies or misconfigurations within the RabbitMQ infrastructure.

    Before managing your RabbitMQ logs, you should first understand the different types of logs that RabbitMQ produces. There are several types of RabbitMQ logs:

    • Crash Logs: produced when RabbitMQ crashes or encounters a fatal error.
    • Error Logs: contain information about errors that occur during RabbitMQ’s runtime.
    • Warning Logs: contain warnings about issues that may affect RabbitMQ’s performance.
    • Info Logs: contain general information about RabbitMQ’s runtime.
    • Debug Logs: contain detailed information about the RabbitMQ server’s internal processes and operations, such as message routing decisions, channel operations, and internal state changes.

    Not having these log messages can make debugging issues very difficult, as they help us understand what RabbitMQ is doing at any given point in time. So enabling RabbitMQ logging and storing these log messages for a certain period is crucial for troubleshooting and fixing problems fast.

    RabbitMQ logging location

    Use the RabbitMQ management UI or rabbitmq-diagnostics -q log_location to find where a node stores its log file(s).
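
    For example, on a default installation the command returns the full path of the node’s main log file, something like this (the hostname part will differ):

    rabbitmq-diagnostics -q log_location
    /var/log/rabbitmq/rabbit@myhost.log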

    By default, the logging directory is: /var/log/rabbitmq

    The default log location is controlled by an environment variable named RABBITMQ_LOGS. Changing this environment variable changes where RabbitMQ writes its logs.
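
    For example, to point a node at a dedicated log volume before starting it (the path here is illustrative):

    export RABBITMQ_LOGS=/data/rabbitmq/logs/rabbit.log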

    More information about RabbitMQ environment variables is available in the official documentation.

    Alternatively, you can use the configuration file to change the logging directory. By default, this parameter is commented out, because the environment variable is set automatically when RabbitMQ is installed or deployed. The default value of the log directory setting in the rabbitmq.conf file is as follows:

    log.dir = /var/log/rabbitmq

    Configuring RabbitMQ Logging

    RabbitMQ offers several log levels, each with a numeric severity. This number is important because, together with other factors, it determines whether a given message is logged. In decreasing order of verbosity, the supported levels are debug, info, warning, error, and critical; the special value none disables logging entirely.

    By default, RabbitMQ logs at the info level. However, you can adjust the log levels based on your needs.

    To make the default category log only errors or higher-severity messages, use:

    log.default.level = error

    Each output can use its own log level. If a message has lower severity than the output level, the message will not be logged.

    For example, if no outputs are configured to log debug messages, then even if the category level is set to debug, the debug messages will not be logged.

    On the other hand, if an output is configured to log debug messages, it will get them from all categories, unless a category is configured with a less verbose level.
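
    As a brief sketch of how output levels and category levels interact (values are illustrative):

    ## Let the file output accept debug and above...
    log.file.level = debug
    ## ...but keep the connection category at info, so its debug messages are dropped
    log.connection.level = info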

    Changing RabbitMQ log level

    RabbitMQ provides two methods of changing the log level: via the configuration files or running a command using CLI tools.

    Changing the log level in the configuration file requires a node restart for the change to take effect. But this change is permanent since it’s made directly in the configuration file. The configuration parameter for changing the log level is as follows:

    log.default.level = info

    You can change this value to any supported log level and restart the node, and the change will be reflected.

    The second method, using CLI tools, is transient. This change will be effective only until a node restarts. After a restart, the log level will be read from the configuration file again. This method comes in handy when you want to temporarily enable the RabbitMQ debug log level to perform some quick debugging:

    rabbitmqctl -n rabbit@target-host set_log_level debug

    The last option passed to this command is the desired log level, and you can pass any of the supported log levels. Changing the log level this way does not require a node restart, and the changes take effect immediately.
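
    When you’re done debugging, you can drop the level back down the same way:

    rabbitmqctl -n rabbit@target-host set_log_level info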

    Log Messages Category

    Along with log levels, RabbitMQ offers log categories to better fine-tune logging and debugging. Log categories group messages by the part of RabbitMQ that emits them, so you can think of them as logs coming from the broker’s various modules.

    This is the list of log categories available in RabbitMQ:

    • Connection: Connection lifecycle event logs for certain messaging protocols
    • Channel: Errors and warning logs for certain messaging protocol channels
    • Queue: Mostly debug logs from message queues
    • Mirroring: Logs related to queue mirroring
    • Federation: Federation plugin logs
    • Upgrade: Verbose upgrade logs
    • Default: Generic log category; no ability to override file location for this category

    You can use these to override the default log levels configured at the global level. For example, suppose you’re trying to debug an issue and have configured the global log level as debug, but you want to suppress debug messages, or indeed all messages, from the upgrade category.

    For this, configure the upgrade log category to not send any log messages:

    log.upgrade.level = none

    Similarly, you can send all messages from a certain category to a separate file, using the following configuration (replace <category> with the category name):

    log.<category>.file = /path/to/category.log

    This way, RabbitMQ makes it easy to segregate log messages based on both the log category and the log level.
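
    Putting these together, a configuration like the following (values are illustrative) debugs everything except the upgrade category and keeps connection logs in their own file:

    log.default.level = debug
    log.upgrade.level = none
    log.connection.file = rabbit_connections.log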

    Rotate Logs

    RabbitMQ logs can quickly fill up your disk space, so it’s essential to rotate the logs regularly. By rotating the logs, you can ensure that you have enough disk space for your RabbitMQ logs.

    Based on Date

    You can use a configuration parameter to automatically rotate log files based on the date or a given hour of the day:

    log.file.rotation.date = $D0

    The $D0 option will rotate a log file every 24 hours at midnight. But if you want to instead rotate the log file every day at 11 p.m., use the following configuration:

    log.file.rotation.date = $D23

    Based on File Size

    RabbitMQ also allows you to rotate log files based on their size. So, whenever the log file hits the configured file size, a new file is created:

    log.file.rotation.size = 10485760

    The configuration above rotates log files every time the file size hits 10 MB.

    Number of Archived Log Files to Retain

    When log file rotation is enabled, it’s also important to make sure you configure how many log files to retain. Otherwise, all older log files will be retained on disk, consuming unnecessary storage. RabbitMQ has a built-in configuration for this as well, and you can control the number of files to retain using the following configuration:

    log.file.rotation.count = 5

    In this example, five log files will be retained on the storage disk.
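
    Combining the options above, a complete rotation setup might look like this:

    ## Rotate at midnight, or whenever the file reaches 10 MB, keeping five archives
    log.file.rotation.date = $D0
    log.file.rotation.size = 10485760
    log.file.rotation.count = 5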

    Rotation Using Logrotate

    On Linux, BSD and other UNIX-like systems, logrotate is an alternative way of rotating and compressing log files. The RabbitMQ Debian and RPM packages set up logrotate to run weekly on files located in the default /var/log/rabbitmq directory. The rotation configuration can be found in /etc/logrotate.d/rabbitmq-server.
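
    The shipped configuration varies between versions, but a typical logrotate stanza for RabbitMQ looks roughly like this (illustrative, not the exact packaged file):

    /var/log/rabbitmq/*.log {
        weekly
        missingok
        compress
        delaycompress
        notifempty
        rotate 20
    }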

    How to Format RabbitMQ Logs

    Properly formatting logs is crucial for effective log management in RabbitMQ. It helps improve readability, enables efficient searching, and allows for easier analysis. In most production environments, you would want to use a third-party tool to analyse all of your logs. This becomes easy when the logs are formatted in a way that’s easy to understand by an outside tool. In most cases, this is the JSON format.

    You can use a parameter in the configuration file to change the format of the RabbitMQ log messages to JSON. This takes effect irrespective of the log level you have configured or the category of logs you’re logging.

    To change the log format to JSON, you just have to add or uncomment the following configuration in the configuration file:

    log.file.formatter = json

    This will convert all log messages written to a file to JSON format. Or, change the format of the logs written to the console or syslog to JSON using the following configurations, respectively:

    log.console.formatter = json

    log.syslog.formatter = json

    If you want to revert these changes, that is, if you don’t want the log messages to be in JSON format, then simply comment out these configuration parameters.
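
    With JSON formatting enabled, each entry is emitted as a single JSON object. The exact fields depend on the RabbitMQ version, but entries take a shape along these lines (illustrative):

    {"time":"2023-07-04 10:15:30.123000+00:00","level":"info","msg":"Ready to start client connection listeners","pid":"<0.604.0>"}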

    RabbitMQ Logging Configuration

    RabbitMQ offers three target options for logging messages: a file, the console, or syslog. By default, RabbitMQ directs all log messages to files. You can use the RabbitMQ configuration file to control this.

    Logging to File

    The following configuration will enable or disable RabbitMQ logging to a file:

    log.file = true

    There are numerous other configuration parameters for tuning RabbitMQ logs to files. For example, you can control the log level to files, create separate files for each log category, use the built-in log rotation feature, or even integrate with a third-party log rotation package, as discussed above. When you want to stop logging messages this way, all you have to do is change the configuration value to false.

    Logging to Console

    Logging to the console is similar to logging to a file. You have a configuration in the RabbitMQ configuration file to either enable or disable logging to the console. The configuration is as follows:

    log.console = true

    As with logging to a file, you can change the value to false to stop the logging. Also, there are numerous options for formatting log messages sent to the console.

    Logging to Syslog

    Logging to syslog takes a bit more configuration than logging to a file or the console. Once you enable logging to syslog, you also have to provide other related configuration, such as the transport mechanism (TCP or UDP), the syslog protocol (RFC 3164 or RFC 5424), the syslog service’s IP address or hostname, the port, TLS settings if any, and more.

    To enable logging to syslog, use the following configuration:

    log.syslog = true

    By default, RabbitMQ uses the UDP transport mechanism on port 514 and the RFC 3164 protocol. To change the transport and the protocol, use the following two configurations:

    log.syslog.transport = tcp

    log.syslog.protocol = rfc5424

    If you want to use TLS, do so using the following configuration:

    log.syslog.transport = tls

    log.syslog.protocol = rfc5424

    log.syslog.ssl_options.cacertfile = /path/to/ca_certificate.pem

    log.syslog.ssl_options.certfile = /path/to/client_certificate.pem

    log.syslog.ssl_options.keyfile = /path/to/client_key.pem

    Next, you need to configure the syslog IP address or hostname and the port using the following configuration:

    log.syslog.ip = 10.10.10.10

    or

    log.syslog.host = my.syslog-server.local

    and

    log.syslog.port = 1514

    Along with these, you can also change the format of the log messages to JSON as we discussed earlier.
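
    Putting the pieces together, a complete syslog configuration might look like this (hostname and port are illustrative):

    log.syslog = true
    log.syslog.transport = tcp
    log.syslog.protocol = rfc5424
    log.syslog.host = my.syslog-server.local
    log.syslog.port = 1514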

    Sample Logging Configuration

    ## See https://rabbitmq.com/logging.html for details.
    ## Log directory, taken from the RABBITMQ_LOG_BASE env variable by default.
    # log.dir = /var/log/rabbitmq
    
    
    ## Logging to file. Can be false or a filename.
    ## Default:
    # log.file = rabbit.log
    
    ## To disable logging to a file
    # log.file = false
    
    ## Log level for file logging
    ## See https://www.rabbitmq.com/logging.html#log-levels for details
    ##
    # log.file.level = info
    ## To format messages as JSON:
    # log.file.formatter = json
    
    ## File rotation config. No rotation by default.
    ## See https://www.rabbitmq.com/logging.html#log-rotation for details
    ## DO NOT SET rotation date to ''. Leave the value unset if "" is the desired value
    # log.file.rotation.date = $D0
    # log.file.rotation.size = 0
    
    ## Logging to console (can be true or false)
    # log.console = false
    
    ## Log level for console logging
    # log.console.level = info
    
    ## Logging to the amq.rabbitmq.log exchange (can be true or false)
    ## See https://www.rabbitmq.com/logging.html#log-exchange for details
    # log.exchange = false
    
    ## Log level to use when logging to the amq.rabbitmq.log exchange
    # log.exchange.level = info
    

    Monitoring RabbitMQ Logs

    Monitoring your RabbitMQ logs in real time can help you identify issues before they become critical. There are several monitoring tools available, such as AppDynamics, DataDog, Prometheus, and Sematext.


    Monitoring RabbitMQ with Sematext

    Troubleshooting RabbitMQ Logs

    Managing RabbitMQ logs is an essential part of maintaining a reliable and efficient RabbitMQ setup. However, even with the best logging practices in place, issues can still arise.

    When working with RabbitMQ logs, I usually follow these steps:

    • Check the Log Files Regularly: Checking the logs on a regular basis helps you identify issues as they arise so you can address them promptly, and it also helps you track trends and spot patterns of behaviour that may indicate potential issues.
    • Look for Error Messages: When reviewing your RabbitMQ logs, pay close attention to any error messages that appear. Error messages can provide valuable clues about potential issues with your RabbitMQ setup.
    • Check RabbitMQ Status: Using the RabbitMQ management console or command-line tools, check the status of your RabbitMQ instance by running rabbitmqctl status (see the example commands after this list).
      Look for any warnings or errors that may indicate potential issues. Address any warnings or errors promptly to prevent them from becoming larger issues down the road.
    • Check Network Connections: Check network connections and firewall settings to ensure that they are properly configured. Network issues can cause a wide range of issues with your RabbitMQ setup, so it’s essential to make sure that your network is configured correctly.
    • Monitor Resource Usage: Monitor resource usage on your RabbitMQ server, including CPU, memory, and disk space. If any of these resources are running low, it can cause issues with your RabbitMQ instance.
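
    As a starting point, these commands cover the checks above (standard RabbitMQ CLI tools plus ordinary shell utilities; the log path is illustrative):

    ## Overall node status: versions, memory, alarms, listeners
    rabbitmqctl status
    ## Built-in health check that verifies the node is up
    rabbitmq-diagnostics check_running
    ## Follow the live log and surface errors and warnings as they happen
    tail -f /var/log/rabbitmq/rabbit@$(hostname).log | grep -iE "error|warning"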

    Conclusion

    Remember to set appropriate log levels, use a monitoring tool to keep track of your logs, and be proactive in identifying and resolving issues. With these practices in place, you can make sure that your RabbitMQ system is reliable and performing at its best.

    If you have further questions, our team of expert consultants are on hand and ready to chat. Just head to our contact page.

    The post How to Manage Your RabbitMQ Logs: Tips and Best Practices appeared first on Erlang Solutions .

    This post is public: www.erlang-solutions.com/blog/how-to-manage-your-rabbitmq-logs-tips-and-best-practices/


      Erlang Solutions: The business value behind green coding

      news.movim.eu / PlanetJabber • 29 June, 2023 • 5 minutes

    Most large businesses – and many smaller ones – now have a sustainability strategy. Measuring Environmental, Social and Governance (ESG) impact has been transformed from a fringe activity into a fundamental differentiator.

    That’s partly because organisations want to do the right thing – climate change affects everyone, after all – and also because sustainability makes sound business sense.

    B2C businesses are dealing with increasingly environmentally aware consumers. B2B businesses are selling to companies with their own ESG and sustainability targets. Large organisations of all kinds need to green their supply chains as well as themselves.

    On top of it all, regulation around ESG is only going one way. Progressive companies are fighting to get ahead of the curve. As far as non-compliance is concerned, prevention is better than cure.

    In which case, how might organisations achieve greater sustainability? There are many ways, but in a digital world green coding is an increasingly essential ingredient in any corporate sustainability strategy. It can also provide a range of other business benefits.

    IT energy consumption: the elephant in the room

    According to the Association for Computing Machinery, energy consumption at data centres has doubled in the last decade. The ICT sector is responsible for between 1.8% and 3.9% of greenhouse gas emissions globally.

    That should come as no surprise. The expansion of our digital horizons in the last ten years or so has been remarkable. Every business is, to some degree or other, a digital business.

    Digital-first organisations create vast amounts of data, and then set powerful new technologies to work, analysing it for insight. AI, analytics, the IoT, VR…they’re all changing the world, and adding significantly to its energy consumption.

    The need for green coding in a data-hungry world

    The direction of travel here is only one way. The explosion in data is ramping up energy use in data centres, raising costs and making sustainability targets more difficult to achieve.

    Source: Statista

    At the same time, the vast expansion of computing power over the last couple of decades has created a sense of freedom among software engineers. As technology constraints have diminished, code has become longer and more complex. Coders have worried less about the process and more about the outcome.

    To take one example, open-source code libraries have allowed programmers to reuse pre-produced pieces of code in their projects. This can certainly be useful, accelerating time to market for under-pressure projects. And as we all know, recycling can also be sustainable, when it’s the right code for the task.

    But often it isn’t, or at least not quite. An ‘off-the-shelf’ approach to software engineering – when code is reused simply to cut development time – can result in more code than necessary. That in turn requires more processing power and demands more energy use.

    In addition, the almost limitless computing capabilities that modern developers have at their fingertips have led to major leaps in file size. Why reduce image or video quality, even fractionally, if you really don’t have to?

    This is all in stark contrast to the coding practices of the early years of the 21st century when technological limitations demanded greater coding discipline.

    Green coding is many things but one of them is a return to coding with stricter limits and a focus on taking the shortest possible path to a desired outcome.

    Source: OurWorldinData.org

    Lean coding is green coding

    In other words, to reduce energy use in software, applications and websites, programmers are adopting the principles of lean coding.

    In short, this means writing code that requires less processing power without compromising quality. This is a win-win, because when it’s done well, lean coding can have business benefits beyond contributing to sustainability targets.

    Efficient, purpose-built code can accelerate applications and improve user experiences. And because it uses less energy, it can reduce energy bills.

    There are some easy wins in this. Many – perhaps most – file sizes can be reduced. Coders need to recognise where high-quality media is necessary, and where lower-quality alternatives will do just as well. There is a balance to be struck.

    Similarly, writing tight, tailored code will increase efficiency and lower energy consumption. And because efficient code takes less time to process, it’s also likely to produce better results.

    Writing good code leads to better user experiences because applications are faster and more efficient. And focusing on user-friendliness is also a principle of green coding.

    Applications that are more intuitive mean users spend less time doing what they need to do. The more user-friendly software is, the more energy efficient it tends to be, too.

    The importance of language in green coding

    In all this, your choice of coding tools takes on new significance. Some languages – Elixir is one – are designed with efficiency in mind. That tends to mean the software they produce uses less energy as well.

    Elixir, which is based on Erlang, is a lightweight language, consuming relatively little in terms of memory or processor power. It’s often the most sustainable option in places where many processes happen at the same time, such as a web server.

    Because it utilises the Erlang Virtual Machine (VM), Elixir is ideal for running low-resource, high-availability processes. It’s also simple and easy to understand, requiring less code overall than many counterparts. That makes the end application more sustainable, and it also reduces the number of computing hours needed in its development.

    Elixir is written in clear, easy-to-understand syntax, so it’s easy to maintain and update as circumstances demand. In addition, comprehensive in-built testing tools make it easier to write code that works first time – another sustainability win.

    Sustainability and business value

    In other words, in the right circumstances, Elixir is an excellent tool for writing lean, well-tested and readable code, which is as energy efficient as code gets.

    And as we’ve seen, the business value of green code goes beyond ESG compliance. It can lower bills. It also shows customers, prospects and potential employees how seriously your organisation takes environmental responsibility.

    Lean programming tells the world that your green commitment is far from just skin deep. It reaches the core of your infrastructure.

    Sustainability targets will only become more stringent as net zero deadlines inch closer. The code that runs our digital world will play a significant role in creating a leaner, greener future. Businesses that adopt green coding now can gain a competitive advantage.
    For more information on Elixir and how it can help green your development processes, please see our website or drop us a line.

    The post The business value behind green coding appeared first on Erlang Solutions .


      Ignite Realtime Blog: Openfire inVerse plugin v10.1.4-1 release!

      news.movim.eu / PlanetJabber • 27 June, 2023

    The Ignite Realtime community is happy to announce the immediate release of version “10.1.4 release 1” of the inVerse plugin for Openfire!

    The inVerse plugin adds a Converse-based web client to Openfire (Converse is a third-party implementation). With this plugin, you’ll be able to set up a fully functional Converse-based chat client with just a few mouse clicks!

    This update includes an update of the converse.js library (to version 10.1.4), which brings various improvements.

    Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the inVerse plugin archive page.

    If you have any questions, please stop by our community forum or our live groupchat.

    For other release announcements and news, follow us on Twitter and Mastodon.

    1 post - 1 participant

    Read full topic


      Isode: Harrier 3.3 – New Capabilities

      news.movim.eu / PlanetJabber • 27 June, 2023 • 1 minute

    Harrier is our Military Messaging client. It provides a modern, secure web UI that supports SMTP, STANAG 4406 and ACP 127. Harrier allows authorised users to access role-based mailboxes and respond as a role within an organisation rather than as an individual.

    Harrier Inbox view (behind) showing Military Messaging security label and priority parameters; and Message view (in front).

    The following changes have been made with the 3.3 release:

    Integration with IRIS WebForms

    Harrier’s generic support for MTF (Message Text Format) has been extended by provision of a close integration with Systematic IRIS WebForms . This provides convenient creation and display of MTFs using the IRIS WebForms UI within Harrier.

    IRIS Forms message attachment in the Harrier Military Messaging client.

    Further examples and an in-depth description can be found in the Isode white paper C2 Systems using MTF and Messaging.

    Browser Support Enhancements

    New session handling allows a user to open multiple sessions per browser and multiple views. This enables a user to easily access multiple mailboxes at the same time.

    PKCS#11 HSM Support

    PKCS#11 HSM (Hardware Security Module) support is added. This has been tested with HSMs from Nitrokey, Yubico, Gemalto and the SoftHSM software. This provides two capabilities, which can be managed using Cobalt 1.4.

    1. The private key for the server, protecting HTTPS access.
    2. Private keys for Users, Roles and Organizations, supporting message signing and encryption.

    Other Enhancements

    • Audit logging when user prints a message
    • Option to enforce security label access control checks.  By default, these are advisory, with enforcement generally provided by M-Switch.
    • Default security label in forward and reply to the label of the message being replied to or forwarded.
    • Option to configure backup servers for IMAP, SMTP and LDAP to provide resilience in the event of the primary server failing.
    • Option to use local timezone instead of Zulu for DTG, Filing Time and Scan Listing.
    • When using Zulu timezone, show local time in tool tip.

    This post is public: www.isode.com/company/wordpress/harrier-3-3-new-capabilities/


      Erlang Solutions: IoT Complexity Made Simple with the Versatility of Erlang and Elixir

      news.movim.eu / PlanetJabber • 27 June, 2023 • 16 minutes

    Part A: Current Context and Challenges

    The world is on the brink of a transformative industrial revolution known as Industry 4.0. This fourth industrial revolution is revolutionising our lives, work, and interactions on an unprecedented scale. The convergence of technology, Artificial Intelligence (AI), and the Internet of Things (IoT) has enabled highly sophisticated and interconnected systems. These systems and devices can communicate with each other, automate complex tasks, and even reshape our future.

    Automation and systematisation are particularly influential within this revolution, profoundly impacting our society’s future. From factories to smart cities, and autonomous vehicles to health monitoring devices, automation has taken a prominent role in business, industry, and education.

    For instance, AI automates tasks in factories, such as quality control and predictive maintenance. IoT powers smart cities that leverage data and technology to enhance residents’ lives. Big data improves healthcare by predicting diseases and developing new treatments.

    Despite the potential for new job creation and improved lives, Industry 4.0 also presents challenges. Automation will replace certain jobs, necessitating worker reskilling and retraining. Additionally, this paradigm shift demands new infrastructure like high-speed internet and data centres.

    Overall, Industry 4.0 is a transformative force with profound societal implications. It is crucial to understand its benefits and challenges in order to adequately prepare for the future.

    Most significant challenges to large-scale IoT adoption

    Interoperability and integration

    IoT devices and systems from various manufacturers often utilise different protocols, leading to connectivity and communication issues. This lack of interoperability presents challenges and fragmented solutions within the IoT landscape, restricting seamless device compatibility. Consequently, the absence of interoperability can impede the development and deployment of IoT applications, diminish the value of IoT data, and heighten the risk of security breaches.

    Erlang/Elixir provides all the tools to tackle new and existing protocols with ease, thanks to the BEAM’s ability to handle binary data without the friction found in languages traditionally used for IoT, such as C, .NET and Rust. This significantly speeds up development cycles and integration efforts.

    For example, the BEAM can be used to easily decode and encode binary data from different protocols, such as Zigbee, Z-Wave, and Bluetooth Low Energy. The BEAM can also be used to easily create and manage distributed systems, which is often required for IoT applications.
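
    As a minimal sketch of what this looks like in practice, the following Elixir module decodes a hypothetical sensor frame (the field layout is invented for illustration) using binary pattern matching:

    defmodule SensorFrame do
      # Assumed layout: 1-byte version, 2-byte big-endian device id,
      # 1-byte message type, 2-byte payload length, then the payload itself.
      def decode(<<version::8, device_id::16, msg_type::8, len::16,
                   payload::binary-size(len), _rest::binary>>) do
        {:ok, %{version: version, device_id: device_id, type: msg_type, payload: payload}}
      end

      def decode(_other), do: {:error, :malformed_frame}
    end

    # Example: SensorFrame.decode(<<1, 0, 42, 7, 0, 3, "abc">>)
    # #=> {:ok, %{version: 1, device_id: 42, type: 7, payload: "abc"}}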

    The BEAM’s ability to handle binary data and create and manage distributed systems makes it a powerful tool for IoT development. By using Erlang/Elixir, developers can significantly speed up development cycles and integration efforts, and they can create more reliable and scalable IoT applications.

    In addition to this, it is worth highlighting that Erlang also plays a crucial role in the message broker ecosystem, due to its unique characteristics. Erlang’s inherent traits, such as low latency, fault tolerance, and massive concurrency, make it well-suited for deploying IoT solutions at an industrial scale. Low latency ensures that messages are delivered swiftly, enabling real-time responsiveness in IoT applications. Fault tolerance ensures that the system can continue operating even in the presence of failures or errors, crucial for maintaining uninterrupted connectivity in IoT environments. Furthermore, Erlang’s ability to handle massive concurrency allows it to efficiently manage the high volume of messages exchanged in IoT systems, ensuring seamless communication across a vast network of connected devices. These desired traits collectively emphasise the vital role a robust message broker backbone, built on Erlang’s foundations, holds in powering the IoT landscape.

    Security and privacy

    As the number of interconnected devices increases, so does the risk of cyberattacks, even when the devices’ very purpose is to provide security. Such devices are often vulnerable to numerous types of security threats due to poor security practices, unpatched software, or immature components being deployed in unsecured environments.

    IoT privacy

    For example, IoT devices can be used to launch denial-of-service attacks, steal sensitive data, or even control physical devices. The security risks associated with IoT devices are often exacerbated by the fact that IoT devices are often deployed in unsecured environments, such as home networks.

    Some of the most common security threats associated with IoT devices include:

    • Denial-of-service attacks: IoT devices can be used to launch denial-of-service attacks by flooding a network with traffic. This can make it difficult or impossible for legitimate users to access the network.
    • Data theft: IoT devices can be used to steal sensitive data, such as passwords, credit card numbers, or personal information. This data can then be used for identity theft, fraud, or other crimes.
    • Physical control: IoT devices can be used to control physical devices, such as thermostats, lights, or even cars. This could be used to cause damage, steal property, or even harm people.

    Erlang/Elixir provides programming facilities that render whole classes of errors and security threats obsolete, together with the option of running the Erlang virtual machine under different security models: an ultra-secure operating system running on seL4, custom Linux-based firmware via Nerves, or constrained re-implementations of the BEAM, such as AtomVM.

    Robust security and privacy policies are essential for safeguarding sensitive data collected and transmitted by IoT devices, which may include personal and confidential information and/or metadata.

    Some of the security features of the BEAM that can benefit IoT development include:

    • Fault tolerance: The BEAM is a fault-tolerant runtime, which means applications can continue to run even if some of their processes fail. This can help to mitigate denial-of-service attacks.
    • Hot code reloading: The BEAM can reload code without restarting a process. This can help to patch security vulnerabilities discovered in running code.
    • Distributed systems: The BEAM is built for distribution, which means it can be used to create applications that run on multiple machines. This can help to protect sensitive data from unauthorised access.

    Erlang/Elixir innovations allow developers to build systems and applications using mathematically verified kernels, rather than relying on traditional Linux solutions. This makes Erlang/Elixir ideal for high-security and mission-critical environments, including manufacturing, military, aerospace, and medical devices. In such industries, a failure or security breach could have devastating consequences.

    Developers can leverage Erlang/Elixir to build highly secure and reliable systems, enhancing mission safety and reducing susceptibility to attacks and failures.

    Scalability

    IoT networks need to handle numerous interconnected devices and facilitate the exchange of vast data volumes. The number of interconnected devices is steadily increasing, enabling specialised tasks and coordinated outcomes among devices. With the growing device connections, the network infrastructure must accommodate higher traffic, ensuring sufficient bandwidth and processing power. These situations demand near-real-time responses and low latency to gain a competitive edge.

    Erlang/Elixir have been designed to excel in such scenarios, where OTP provides all the necessary functionality to handle such loads in a predictable and scalable manner.

    Some of the benefits of using Erlang/Elixir for IoT development include:

    • Fault tolerance: This can help to prevent network outages.
    • Scalability: Erlang/Elixir is a scalable language, meaning it can be used to create applications that can handle a large number of devices. This can help to handle the increased traffic and processing power requirements of IoT networks.
    • (soft) Real-time performance: Erlang/Elixir is a soft real-time language, which means that it can be used to create applications that can respond to events in a timely manner. This can help to meet the low latency requirements of IoT networks.

    Data management

    With the huge volumes of data generated by IoT devices, it is essential to have effective data management systems to store, process and analyse the data. Identifying the most valuable data sets and using them effectively to create insights that support decision-making and performable actions is crucial. The Internet of Things (IoT) is a network of physical devices, vehicles, home appliances, and other objects that are embedded with sensors, software, and network connectivity to collect and exchange data. IoT devices are generating an unprecedented amount of data, which is being used to create new insights and applications.

    Erlang/Elixir provides not only the facilities to handle such volumes of data with ease but also the opportunity to identify meaningful data using machine-learning solutions such as Nx, Axon and Bumblebee in production-ready environments.

    Some of the benefits of using Erlang/Elixir for data management include:

    • Scalability: Erlang/Elixir is a scalable language, which means that it can be used to create applications that can handle large volumes of data.
    • Fault tolerance: Erlang/Elixir is a fault-tolerant language, which means that it can continue to run even if some of its processes fail. This can help to prevent data loss.
    • Machine learning: Erlang/Elixir has a number of machine learning libraries, such as Nx, Axon, and Bumblebee. These libraries can be used to identify meaningful data sets and create insights that support decision-making.

    By processing data, IoT devices and backbones can be made more intelligent and capable of autonomous decision-making. This can lead to improved efficiency, productivity, and safety.

    Energy consumption

    IoT requires energy to operate devices, and with the increasing number of devices, consumption can become a significant challenge. Finding ways to optimise energy usage without sacrificing performance is crucial for large-scale IoT architectures, and work in this area has led the industry to create ever smaller chipsets and boards. Many IoT devices are battery-powered or otherwise constrained in their energy supply, which limits their operating time and makes them less practical in some scenarios. This is one of the reasons why hardware manufacturers are developing more efficient components that consume less energy.

    On the software side, specialised development frameworks and methodologies are being developed to enable more efficient and optimised code for IoT devices. Erlang/Elixir addresses such scenarios through alternative BEAM implementations such as AtomVM. AtomVM was created specifically for constrained devices and retains the high-level development facilities that Erlang/Elixir offers, including fault tolerance, without increasing energy consumption.

    IoT energy

    Some of the benefits of using Erlang/Elixir and AtomVM for energy-efficient IoT development include:

    • Fault tolerance: This can help to prevent devices from crashing, which can lead to energy savings.
    • Energy efficiency: AtomVM is an energy-efficient BEAM implementation, which means that it uses less power than other BEAM implementations. This can help to extend the battery life of IoT devices.
    • High-level development: Erlang/Elixir is a high-level language, meaning that it is easy to learn and use. This can help to reduce the development time and cost of IoT applications.

    Deployment

    Deploying IoT devices at scale requires careful planning and coordination. This is a complex process, especially when deploying a large number of devices across multiple locations. It requires expertise in networking, security and device management. In addition, once devices are deployed they need to be maintained and updated on a regular basis. To address these challenges, organisations need to invest in the right tools and processes for managing their IoT deployments. This can include device management platforms that provide centralised management and monitoring of devices, security frameworks that ensure the devices are secure and protected, and data analysis tools that help to make sense of the vast amount of data generated by the devices.

    Erlang/Elixir are platforms that have been designed from the ground up to handle massive numbers of concurrent processes, and they excel at handling vast amounts of data in a predictable fashion. Having your centralised solutions written in Erlang/Elixir will help you tackle the increasing complexity of managing millions of deployed devices at scale.

    By using these, developers can create more reliable, scalable, and real-time IoT applications that can meet the challenges of the IoT ecosystem. In addition to the benefits mentioned above, Erlang/Elixir also offers a number of other advantages for IoT development, including:

    • Strong community: A strong community of developers who are actively working on new features and libraries. This can be a valuable resource for developers who are new to the language.
    • Open-source: As an open-source language, it is free to use and modify. This can save developers money on licensing costs.

    Erlang/Elixir harnesses the power of these languages to create innovative, efficient, and resilient IoT applications.

    Use cases

    Smart Sensors

    Smart sensors are crucial components within large-scale IoT systems, serving a wide range of industries such as manufacturing, food processing, smart farming, retail, storage, healthcare, and Smart City applications. These sensors play a vital role in collecting and transmitting valuable data. To meet the diverse needs of these industries, smart sensors must be developed to support specific requirements and ensure reliable real-time operation. The design and implementation of these sensors are critical in enabling seamless data collection and facilitating efficient decision-making processes within IoT ecosystems.

    Security

    Implementing security upgrades and updates is crucial for ensuring the security of IoT devices and systems. Typically, these enhancements involve incorporating additional security patches into the operating system. It is imperative that IoT devices and systems are designed to support future upgrades and updates. Insufficient security measures can lead to severe security failures, especially in the face of intense competition within the industry. Due to the lack of standardisation in the IoT technology landscape, numerous SMEs and startups enter the IoT market without proper security measures, thereby heightening the risk of potential security vulnerabilities.

    Privacy

    Preserving personal privacy is a critical consideration in any system that involves the collection of user information. In the realm of IoT, devices gather personal data, which is then transmitted through wired or wireless networks. This information is often stored in a centralised database for future use.

    To safeguard the personal information of users, it is essential to prevent unauthorised access and hacking attempts. Sensitive details concerning a user’s habits, lifestyle, health, and more could be exploited if unauthorised access to data is possible. Therefore, it is imperative for IoT systems to adhere to privacy regulations and prioritise the protection of user privacy.

    By implementing robust security measures, such as encryption, access controls, and data anonymisation, IoT systems can ensure compliance with regulatory privacy requirements. These measures help mitigate the risks associated with unauthorised access and provide users with the confidence that their personal information is being handled securely.

    Connectivity

    The majority of IoT devices are connected to wireless networks, offering specific requirements and convenience for seamless connectivity. IoT systems employ various wireless transmission technologies, such as Wi-Fi, Bluetooth, LoRa WAN, SigFox, Zigbee, and more. Each of these systems possesses its own advantages and specifications, catering to different use cases.

    However, the implementation of multiple technology platforms poses challenges for developers, particularly in the context of information sharing between devices and applications. Interoperability and data exchange become crucial considerations when integrating diverse wireless technologies within an IoT ecosystem.

    Developers must navigate the complexities of integrating and harmonising data flows between devices, utilising different wireless transmission technologies. This involves designing robust communication protocols and ensuring compatibility across the network infrastructure. By addressing these challenges, developers can optimise data exchange and foster seamless interoperability, enabling efficient communication and collaboration among IoT devices and applications.

    Furthermore, careful consideration should be given to factors such as network coverage, data transmission range, power consumption, and security requirements when selecting the appropriate wireless technology for specific IoT use cases. By leveraging the advantages and specifications of each wireless transmission technology, developers can tailor their IoT systems to meet the unique demands of their applications.

    Efficiently managing the diversity of wireless transmission technologies in IoT systems is a complex but necessary task. Through effective integration and collaboration, developers can harness the benefits of multiple wireless technologies and create interconnected IoT ecosystems that drive innovation and deliver valuable experiences to users.

    Compatibility

    As with any emerging technology, IoT is rapidly expanding across various domains to support a wide array of applications. With the rapid growth of the IoT landscape, competition among companies striving to establish their technologies as industry standards also increases. Introducing additional hardware requirements for compatibility can result in increased investments for both service providers and consumers.

    Ensuring the compatibility of IoT devices and networks with conventional transmission technologies becomes a crucial and challenging task. The sheer diversity of devices, applications, transmission technologies, and gateways presents a complex landscape to navigate. Achieving compatibility and seamless interoperability becomes essential for establishing an efficient IoT infrastructure.

    To create an effective and scalable IoT ecosystem, it becomes imperative to standardise the underlying technologies. This includes defining and monitoring network protocols, transmission bands, data rates, and processing requirements. Standardisation enables consistent and harmonious communication between IoT devices, facilitating seamless integration, and enhancing the overall interoperability of the system.

    Other challenges

    Complexity

    The complexity of IoT necessitates careful design, testing, and implementation to meet specific requirements. Erlang and Elixir offer powerful solutions that significantly reduce development efforts while effectively managing complexity.

    These languages offer developers high-level abstractions to streamline IoT development, making it more efficient to address challenges. They provide features such as concurrency models, fault-tolerant systems, and distributed computing, empowering developers to create robust and scalable solutions.

    Erlang and Elixir excel in handling low-level binary data through declarative approaches like pattern matching. This simplifies device and protocol interoperability, reducing complexity and facilitating smoother communication within the IoT ecosystem.

    By utilising Erlang and Elixir, developers gain access to a powerful toolset that eases complexity in IoT systems. High-level abstractions and declarative programming allow developers to focus on problem-solving rather than low-level implementation details, leading to more efficient development processes and increased productivity.

    The language ecosystem of Erlang and Elixir is specifically designed to tackle complexity in IoT applications. Developers can effortlessly navigate the complexities of IoT and deliver reliable, scalable solutions by utilising its advanced features and abstractions.

    Erlang and Elixir are invaluable assets in the IoT landscape, providing the necessary tools and frameworks to simplify development, enhance interoperability, and unlock the potential of complex IoT systems.

    Power

    IoT, like other smart systems, relies on power for its sensors, wireless devices, and gadgets. Efficiency in power consumption is essential. The community is developing embedded Erlang systems to address these concerns and expand the platform into new domains.

    Cloud access

    Most IoT deployments use a centralised, cloud-based process control system. Reducing latency and maintaining real-time connectivity in a cloud-based system is challenging.

    IoT cloud

    Part B: Conclusion

    Erlang and Elixir have emerged as robust and scalable alternatives for IoT, automation, and Artificial Intelligence solutions, showcasing their adaptability to evolving paradigms. These programming languages excel in constructing highly scalable and fault-tolerant systems. Erlang, developed for the telecommunications industry, supports 24/7 concurrent operations and error recovery.

    Its powerful, robust message brokers serve as the backbone for IoT integration. With low latency, fault tolerance, and massive concurrency, Erlang is an ideal choice for building message brokers that meet IoT demands. This emphasises Erlang’s and the BEAM’s relevance for use with IoT systems.

    These can be hosted in different BEAM implementations to meet specific needs like power consumption and formal verification. They are versatile in low-power, aerospace, military, and healthcare applications. Specialised BEAM implementations are available for resource-limited environments, and formal verification ensures system accuracy.

    Erlang and Elixir are well-suited for building fault-tolerant systems in IoT and automation. They enable continuous operation under high demand or hardware failures. Elixir’s developer-friendly nature facilitates rapid software development without compromising quality.

    We look forward to taking you on this exciting journey as we explore even more of the powerful capabilities of Erlang and Elixir in revolutionising IoT and automation systems.

    The exploration doesn’t stop here. In our upcoming series, we’ll delve deeper into use cases that demonstrate how Erlang and Elixir effectively tackle the challenges posed by IoT.

    So stay tuned for more thought-provoking posts as we unlock the immense potential of these languages. Let’s continue the conversation!

    The post IoT Complexity Made Simple with the Versatility of Erlang and Elixir appeared first on Erlang Solutions .

    This post is public: www.erlang-solutions.com/blog/iot-complexity-made-simple-erlang-elixir/


      Isode: M-Link 19.4 Limited Release

      news.movim.eu / PlanetJabber • 22 June, 2023 • 9 minutes

    M-Link 19.4 provides a very significant update and in particular provides the M-Link User Server product. It also provides M-Link MU Server, M-Link MU Gateway and M-Link Edge.   It does not provide M-Link IRC Gateway, which remains M-Link 17.0 only.

    M-Link 19.4 Limited Release is provided ahead of the full M-Link 19.4 release. M-Link 19.4 Limited Release is fully supported by Isode for production deployment. There is one significant difference with a standard Isode release:

    • Updates to M-Link 19.4 Limited Release will include additional functionality. This contrasts to standard Isode releases where updates are “bug fix only”. There will be a series of updates which will culminate in the full M-Link 19.4 release.

    Goals

    There are three reasons that this approach is being taken:

    1. To provide a preview for those interested to look at the new capabilities of M-Link 19.4.
    2. To enable production deployment of M-Link 19.4 ahead of full release for customers who do not need all of the features of the full M-Link 19.4 release.  M-Link 19.4 limited release provides ample functionality for a baseline XMPP user service.
    3. To enable customer review of what will be in M-Link 19.4 full release. We are planning to not provide all M-Link 17.0 capabilities in M-Link 19.4 full release. A list is provided below of the current plan. Based on feedback, we may bring more functionality into M-Link 19.4 full release. There is a trade-off between functionality and shipping date, which we will review with customers.

    Benefits

    M-Link 19.4 User Server and M-Link 19.4 MU Server offer significant benefits over M-Link 17.0:

    • M-Link 19.4 is fully Web managed, and M-Link Console is no longer used. This is the most visible difference relative to M-Link 17.0. It enables management without installing anything on the management client, and is helpful for deployments also using Web management in M-Link Edge and M-Link MU Gateway (using either 19.3 or 19.4 versions).
    • Flexible link handling, as provided previously in M-Link 19.3
    1. Multiple links may be established with a peer.  These links may be prioritized, so that for example a SATCOM link will be used by default with fall back to HF in the event of primary link failure.  Fall forward is also supported, so that the SATCOM link is monitored and traffic will revert when it becomes available again.
    2. Automatic closure of idle remote peer sessions after configurable period.
    3. Support for inbound only links, primarily to support Icon-Topo.
    4. “Whitespace pings” to X2X (XEP-0361) sessions to improve failover after connectivity failures.
    • M-Link MU Server allows the HF Radio improvements of M-Link 19.3 MU Gateway to be used in a single server, rather than deploying M-Link 19.3 MU Gateway plus M-Link 17.0 User Server
    • The session monitoring improvements  previously provided in  M-Link 19.3
    1. Shows sessions of each type (S2S, X2X (XEP-0361), GCXP (M-Link Edge), and XEP-0365 (SLEP)) with information on direction and authentication
    2. Enable monitoring for selected sessions to show traffic, including ability to monitor session initialization.
    3. Statistics for sessions, including volume of data, and number of stanzas.
    4. Peer statistics, providing summary information and number of sessions for each peer.
    5. Statistics for the whole server, giving session information for the whole server.
    • Use of the capabilities previously provided in M-Link 19.3 to provide metrics on activity, feeding them into a Prometheus database using the statsd protocol. Prometheus is a widely used time series database for storing metrics: https://prometheus.io/. Grafana is a graphing front end often used with Prometheus: https://grafana.com/. Grafana provides dashboards to present information. Isode will make available sample Grafana dashboards on request to evaluators and customers. Metrics that can be presented include:
    1. Stanza count and rate for each peer
    2. Number of bytes sent and received for each link
    3. Number of sessions (C2S; S2S; GCXP; X2X; and XEP-0365 (SLEP))
    4. Message queue size for peers – important for low bandwidth links
    5. Message latency for each peer – important for high latency links

    M-Link 19.4 (Limited Release) Update Plan

    This section sets out the plan for providing updates to M-Link 19.4 (Limited Release).

    • User and Role provisioning (replacing Internet Mail View)
    • Special function mailboxes
    • Redirections
    • Standard SMTP distribution lists
    • Military Distribution Lists
    • Profiler Configuration
    • File Transfer by Email (FTBE) account provisioning

    Update 1: FMUC

    Federated Multi-User Chat (FMUC) is being added as the first update, as we are aware of customers who would be in a position to deploy 19.4 once this capability is added.

    Update 2: Archive Administration

    The initial archive capability is fully functional. Administration adds a number of functions, including the ability to export, back up and truncate the archive. These capabilities are seen as important for operational deployment of archiving.

    Update 3: Clustering

    Clustering is the largest piece of work and the most significant omission from the limited release. It is expected to take a number of months’ work to complete, based on core work already done.

    Update 4: Miscellaneous

    There are a number of smaller tasks that are seen as essential for the R19.4 final release, which will likely be provided incrementally. If any are seen as high priority for the preview, they could be addressed prior to the clustering update.

    • Server-side XEP-0346 Form Discovery and Publishing (FDP). This will enable third-party clients to use FDP.
    • Certificate checking using CRL (Certificate Revocation List) and OCSP (Online Certificate Status Protocol).
    • Complete implementation of XEP-0163 Personal Eventing Protocol (PEP). This is mostly complete in the initial preview.
    • Administration. The initial preview supports a single administrator with a password managed by M-Link.
      • Option for multiple administrators
      • Option for administrators specified and authenticated by LDAP
      • Administrators with per-domain administration rights
    • Additional audit logging, so that 19.4 will provide all of the log messages that 17.0 does.
    • XEP-0198 Stream Management support for C2S (initial preview supports it in S2S and XEP-0361)
    • Web monitoring of C2S connections
    • XEP-0237 Roster versioning
    • C2S SASL EXTERNAL to provide client strong authentication

    Update 5: Upgrade

    To provide an upgrade from M-Link 17.0. This capability is best developed last.

    Items Under Consideration for M-Link 19.4 Final Release

    There is a trade-off between functionality included and ship date. The following capabilities supported in 17.0 are under consideration for inclusion in M-Link 19.4. We ask for customer review of these items.

    Unless we get clear feedback requesting inclusion of these features, we will not include them in 19.4 and will consider them as desirable features for a subsequent release.

    • XEP-0114 Jabber Component Protocol that allows use of third party components.
    • SASL GSSAPI support to enable client authentication using Windows SSO
    • Archiving PubSub events (on user and pubsub domains)
    • Configuring what to archive per domain (R17 supports: nothing, events only (create, destroy, join, etc), events and messages)
    • Assigning MUC affiliations to groups, to simplify MUC rights administration
    • XEP-0227 configuration support to facilitate server migration
    • “Send Announcement” to broadcast information to all users
    • PDF/A archiving to provide a simple and stable long term archive

    Features post 19.4

    After 19.4 Final Release is made, future releases will be provided on the normal Isode model of major and minor releases with updates as bug fix only.

    Customer feedback is requested to help us set priorities for these subsequent releases.

    M-Link IRC Gateway

    M-Link IRC Gateway is the only M-Link product not provided in M-Link 19.4. M-Link 17.0 IRC Gateway works well as an independent product.

    When we do a new version, we plan to provide important new functionality and not simply move the 17.0 functionality into a new release.

    New Capabilities

    The R19.4 User Server focus has been to deliver functionality equivalent to 17.0 on the R19.3 base. After 19.4 we are considering adding new features. Customers are invited to provide requirements and to comment on the priority of identified potential new capabilities set out here:

    • FMUC Clustering. M-Link 19.4 (and 17.0) FMUC nodes cannot be clustered.
    • FMUC use with M-Link IRC Gateway. Currently, IRC cannot be used with FMUC. This would be helpful for IRC deployment.
    • STANAG 4774/4778 Confidentiality Labels.
    • RFC 7395 Websocket support as an alternative to BOSH.
    • OAuth (RFC 6749) support
    • Support of PKCS#11 Hardware Security Modules

    M-Link 17.0 User Server Features not in R19.4

    This section sets out a number of 17.0 features that are not planned for R19.4. If there is a clear customer requirement, we could include them in R19.4. We are interested in customer input on the priority of these features relative to each other and to other potential work.

    The following capabilities are seen to have clear benefit and Isode expects to add them.

    • Security Label and related configuration for individual MUC Rooms. In 19.4 this can be configured per MUC domain, so an equivalent capability can be obtained by using a MUC domain for each security setting required.
    • XEP-0012 Last Activity
    • Option to limit the number of concurrent sessions for a user
    • Management of PKI identities and certificates in R19.4 is done with PEM files, which is pragmatic. Use of PKCS#10 Certificate Signing Requests would be a more elegant approach.
    • XMPPS (port 5223) has clear security benefits. The 17.0 implementation has limited management, which means it is not generally useful in practice.

    The following capabilities are potentially desirable. Customer feedback is sought.

    • XEP-0346 Form Discovery and Publishing (FDP)
      • WebApp viewer. We believe this would be better done in a client (e.g., Swift).
      • Gateway Java app, which converted new FDP forms to text and submitted them to a MUC.
      • Administration of FDP data on Server.
    • Per-Domain Search Settings, so that users can be constrained as to which domains can be searched
    • Internal Access Control Lists, for example to permit M-Link Administrators to edit user rosters.
    • Generic PubSub administration

    Features in M-Link 17.0 that Isode plans to drop

    There are a number of features provided in M-Link 17.0 that Isode has no current plans to provide going forward, either because they are provided by other mechanisms or because they are not seen to add value. These are listed here primarily to validate that no customers need these functions.

    • Schematron blocking rules
      • These have been replaced with XSLT transform rules
    • IQ delegation that enables selected stanzas sent to users to be instead processed by a component
    • XEP-0050 user preferences
      • This ad-hoc command allowed users to set preferences overriding server defaults: which types of stanzas to store in offline storage, and whether to auto-accept or auto-subscribe presence.
    • Management of XEP-0191 block lists by XEP-0050 ad hoc
      • Management of block lists, where desired, is expected to be performed by XEP-0191
    • XEP-0114 Component permissions
    • Pubsub presence, apart from that provided by PEP
    • XEP-0078 (non-SASL authentication)
      • This is obsolete.
    • Some internal APIs that are no longer needed
    • Support for a security label protocol (reverse engineered by Isode) used in the obsolete CDCIE product
    • Security Checklist
      • M-Link Console had a security checklist which checked the configuration to see if there was anything insecure
      • This does not make sense in the context of the Web interface, which aims to flag security issues in the appropriate part of the UI.
    • Conversion of file-based archive to Wabac
      • M-Link Console had an option to “Convert and import file-based archive…” in the “Archive” menu
      • This was needed to support archive migration from older versions of M-Link
    • Pubsub-based statistics. M-Link 17.0 recorded statistics using PubSub. M-Link 19.4 does this using Prometheus, which can be integrated with Grafana dashboards.
    • XMPP-based group discovery – the ability to use XMPP discovery on an object and get a list of groups back.
    • XML-file archives
      • This was a write-only archive format used by older versions of M-Link before the introduction of the current archive database. M-Link 17.0 continued to support this option.

    This post is public: www.isode.com/company/wordpress/m-link-19-4-limited-release/

    • chevron_right

      Erlang Solutions: Lifting Your Loads for Maintainable Elixir Applications

      news.movim.eu / PlanetJabber • 14 June, 2023 • 9 minutes

    This post will discuss one particular aspect of designing Elixir applications using the Ecto library: separating data loading from using the data which is loaded. I will lay out the situation and present some solutions, including a new library called ecto_require_associations.

    Applications will differ, but let’s look at this example from the Plausible Analytics repo[1]:

    def plans_for(user) do
      user = Repo.preload(user, :subscription)

      # … other code …

      raw_plans =
        cond do
          contains?(v1_plans, user.subscription) -> v1_plans
          contains?(v2_plans, user.subscription) -> v2_plans
          contains?(v3_plans, user.subscription) -> v3_plans
          contains?(sandbox_plans, user.subscription) -> sandbox_plans
          true ->
            # … other code …
        end

      # … other code …
    end

    Here the subscription association is preloaded for the given user, which is then immediately used as part of the logic of plans_for/1 . I’d like to try to convince you that in cases like this, it would be better to “lift” that loading of data out of the function.

    The Ecto Library

    Ecto uses the “repository pattern”. With Elixir being a language drawing a lot of people from the Ruby community, this was a departure from the “active record pattern” used in the “ActiveRecord” library made popular by Ruby on Rails. That pattern uses classes to fetch data (e.g. User.find(id) ) and object methods to update data (e.g. user.save ). Since “encapsulation” is a fundamental aspect of object-oriented programming, it seems logical at first to hide away the database access in this way. Many found, however, that while encapsulation of business logic makes sense, also encapsulating the database logic often led to complexity when trying to control the lifecycle of a record.

    The repository pattern – as implemented by Ecto – requires explicit use of a “Repo” module for all queries to and from the database. With this separation of database access, Ecto.Schema modules can focus on defining application data and Ecto.Changeset modules focus on validating and transforming application data.
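    To make the contrast concrete, here is a minimal sketch (the module and field names are assumptions, not code from any particular codebase) of what the explicit Repo usage looks like:

    # Active record style (Ruby on Rails): user = User.find(id); user.save
    # Repository style (Ecto): every database round-trip goes through the Repo.
    user = MyApp.Repo.get(User, id)

    user
    |> Ecto.Changeset.change(%{name: "New name"})
    |> MyApp.Repo.update()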

    This separation is made very clear in Ecto’s README which states: “Ecto is also commonly used to map data from any source into Elixir structs, whether they are backed by a database or not.”

    Even if you’re fairly familiar with Ecto, I highly recommend watching Darin Wilson’s “Thinking in Ecto” presentation for a really good overview of the hows and whys of Ecto.

    Ecto’s separation of a repository layer makes even more sense in the context of functional programming. To discuss solutions to the problem presented above, it is important to first understand a couple of ideas from functional programming.

    One big idea in functional programming (or specifically “purely functional programming”) is the notion of minimising and isolating side-effects. A function with side-effects is one which interacts with some global state such as memory, a file, or a database. Side-effects are commonly thought of as changing global state, but reading global state also counts: it means the output of the function can change depending on when it is run.
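    As a minimal sketch of the difference (the Plan schema and module names here are assumptions):

    # Side-effecting: the result depends on the database at the moment of the call.
    def plan_names do
      MyApp.Repo.all(Plan) |> Enum.map(& &1.name)
    end

    # Pure: the result depends only on the input, so it is trivial to test.
    def plan_names(plans) do
      Enum.map(plans, & &1.name)
    end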

    What benefits does avoiding side-effects give us?

    • Separating database access from operations on data is a great separation of concerns which can lead to more modular and therefore more maintainable code.
    • Defining a function where the output depends completely on the input makes the function easier to understand and therefore easier to use and maintain.
    • Automated tests for functions without side-effects can be much simpler, because you don’t need to set up external state.

    A Solution

    A first approach to lifting the Repo.preload in the example above would be to do just that. That would suddenly make the function “pure” and dependent only on the caller passing in a value for the user’s subscription field. The problem comes when the person writing code which calls plans_for/1 forgets to preload. Since Ecto defaults associations on schema structs to have an %Ecto.Association.NotLoaded{} value, this approach would lead to a confusing error message like:

    (KeyError) key :paddle_plan_id not found in: #Ecto.Association.NotLoaded<association :subscription is not loaded>

    This is because the contains?/2 function accesses subscription.paddle_plan_id .
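    To make the failure mode concrete, here is a hedged IEx sketch (the id and the paddle_plan_id value are hypothetical):

    user = MyApp.Repo.get(User, 1)
    user.subscription
    #=> #Ecto.Association.NotLoaded<association :subscription is not loaded>

    user = MyApp.Repo.preload(user, :subscription)
    user.subscription.paddle_plan_id
    #=> "654321" (hypothetical value)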

    So, it would probably be better to explicitly look to see if the subscription is loaded. We could do this with pattern matching in an additional function definition:

    def plans_for(%{subscription: %Ecto.Association.NotLoaded{}}) do
      raise "Expected subscription to be preloaded"
    end

    Or, if we want to avoid referencing the Ecto.Association.NotLoaded module in your application’s code, there’s even a function Ecto provides to allow you to check at runtime:

    def plans_for(user) do
      if !Ecto.assoc_loaded?(user.subscription) do
        raise "Expected subscription to be preloaded"
      end

      # …

    This can get repetitive and potentially error-prone if you have a larger set of associations that your function depends on. I’ve created a small library called ecto_require_associations to take care of the details for you. If you’d like to require multiple associations, you can use the same syntax as Ecto’s preload function:

    def plans_for(user) do
      EctoRequireAssociations.ensure!(user, [:subscriptions, site_memberships: :site])
      # …

    The above call would check:

    • If the subscriptions association has been loaded for the user
    • If the site_memberships association has been loaded for the user
    • If the site association has been loaded on each site membership

    If, for example, one or more of the site memberships hasn’t been loaded then an exception is raised like:

    (ArgumentError) Expected association to be set: `site_memberships.site`

    It can even work on a list of records given to it, just like Repo.preload.
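    For example, a short sketch (the query and association names are assumed) of checking a whole list at once:

    users = MyApp.Repo.all(User) |> MyApp.Repo.preload(:subscriptions)

    # A list argument checks every record in it, mirroring Repo.preload/2.
    EctoRequireAssociations.ensure!(users, [:subscriptions])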

    Going Too Far The Other Way

    Hopefully I’ve convinced you that the above approach can be helpful for creating more maintainable code. At the same time, I want to caution against another potential problem on the “other side”. Let’s say we have a function to get a user like this:

    defmodule MyApp.Users do
      import Ecto.Query

      def get_user(id) do
        MyApp.Repo.get(User, id)
        |> MyApp.Repo.preload([:api_keys, :subscriptions, site_memberships: :site])
      end

      def get_admins do
        from(user in User, where: user.is_admin)
        |> MyApp.Repo.all()
        |> MyApp.Repo.preload([:api_keys, :subscriptions, site_memberships: :site])
      end
    end

    When loading data to support pure functions, it can be tempting to load everything that might be needed by every function which takes a user argument. The risk then becomes one of loading too much data. Functions like get_user and get_admins are likely to be used all over your application, and generally you won’t need all of the associations loaded. This is a scaling problem that isn’t a problem until your application gets popular. One common pattern to solve this is to simply have a preloads argument:

    defmodule MyApp.Users do
      import Ecto.Query

      def get_user(id, preloads \\ []) do
        MyApp.Repo.get(User, id)
        |> MyApp.Repo.preload(preloads)
      end

      def get_admins(preloads \\ []) do
        from(user in User, where: user.is_admin)
        |> MyApp.Repo.all()
        |> MyApp.Repo.preload(preloads)
      end
    end

    # Usage
    MyApp.Users.get_admins([:api_keys])

    This does solve the problem and allows you to load associations only where you need them. I would say, however, that this code falls into the same trap as the Active Record library by intertwining database and application logic.

    The Ecto library, your schemas, and your associations aren’t secrets. You absolutely should encapsulate things like your complex query logic, the details for how you calculate numbers, or the decisions you make based on data. But it’s fine to ask Ecto to preload the associations and let your query functions just do querying. This can give you a clean separation of concerns:

    defmodule MyApp.Users do
      import Ecto.Query

      def get_user(id), do: MyApp.Repo.get(User, id)

      def get_admins do
        from(user in User, where: user.is_admin)
        |> MyApp.Repo.all()
      end
    end

    # Usage
    MyApp.Users.get_admins() |> MyApp.Repo.preload(:api_keys)

    That said, if you have associations you need for specific functions, you may want to create functions which can preload without the caller knowing the details. This saves repetition and helps clarify overlapping dependencies:

    defmodule MyApp.Users do
      # You can call `preload_for_access_check(user)` to load the required data
      def can_access?(user, resource) do
        EctoRequireAssociations.ensure!(user, [:roles, :permissions])
        # …
      end

      def preload_for_access_check(user) do
        MyApp.Repo.preload(user, [:roles, :permissions])
      end

      def preload_for_something_else(user) do
        MyApp.Repo.preload(user, [:roles, :regions])
      end
    end

    Remember that Repo.preload won’t load data if it’s already been set on the struct unless you specify force: true!
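    A small sketch of that behaviour (assuming a user struct with a roles association):

    user = MyApp.Repo.preload(user, :roles)               # hits the database
    user = MyApp.Repo.preload(user, :roles)               # no-op: :roles is already set
    user = MyApp.Repo.preload(user, :roles, force: true)  # reloads from the database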

    So if we shouldn’t put our data loading along with our business logic or with our queries, where should it go? The answer to this is fuzzier, but it makes sense to think about what parts of your code should exist as “coordination” points. Let’s talk about some good options:

    Phoenix Controllers, Channels, LiveViews, and Email handlers

    Controllers, LiveViews, and email handlers are all places where we render a template, and generally we are loading some sort of data to be able to do that. Channels and LiveViews are dealing with events which are also often a place where we’ll need to load data to provide some sort of update to a client. In both cases, this is the place where we know what data will be needed, so it makes sense to keep the responsibility of choosing what data to load in this code.
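    As a minimal sketch (the controller, context module, and assigns are assumptions, not code from the original post), a Phoenix controller might coordinate the load and then call the pure function:

    defmodule MyAppWeb.PlanController do
      use MyAppWeb, :controller

      def index(conn, _params) do
        # The controller knows what the template needs, so it owns the preload…
        user = MyApp.Repo.preload(conn.assigns.current_user, :subscription)

        # …while the domain function stays pure, operating only on its input.
        render(conn, :index, plans: MyApp.Plans.plans_for(user))
      end
    end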

    Absinthe Resolvers

    Absinthe is a library for implementing a GraphQL API. Not only will you need to preload data; sometimes you may use Dataloader to load data efficiently outside of manual preloading. This highlights how loading of associated data is a separate concern from evaluating it.
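    As a rough sketch (the source and field names are assumptions), Dataloader is typically wired into an Absinthe schema like this:

    import Absinthe.Resolution.Helpers, only: [dataloader: 1]

    object :user do
      # Associations are batch-loaded by Dataloader rather than preloaded by hand.
      field :site_memberships, list_of(:site_membership), resolve: dataloader(Sites)
    end

    This also assumes a Dataloader source named Sites has been registered in the schema’s context/1 callback, with plugins/1 returning the Dataloader plugin.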

    Scripts and Tasks

    Scripts and mix tasks are another place for coordinating loading and logic. This code might even be one-off and/or short-lived, so it may not even make sense to define functions inside of context modules. Depending on the importance and lifecycle of a script/task it could be that none of the above discussion is applicable.

    High Level Context Functions

    The discussion above suggests pushing loading logic up and out of context-type modules. However, if you have a high-level function which is an entrypoint into some complex code, then it may make sense to coordinate your loading and logic there. This is especially true if the function is used from multiple places in your application.
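    For example, a hedged sketch (all names hypothetical) of such an entrypoint:

    # A high-level entrypoint: it coordinates the loading, then delegates
    # to pure functions for the actual decision-making.
    def onboard!(user_id) do
      user =
        MyApp.Users.get_user(user_id)
        |> MyApp.Repo.preload([:subscriptions, site_memberships: :site])

      user
      |> MyApp.Billing.choose_plan()
      |> MyApp.Billing.start_subscription!()
    end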

    Conclusion

    It seems like a small detail, but making more functions purely functional and isolating your database access can have compounding effects on the maintainability of your code. This can be especially true when the codebase is maintained by more than one person, making it easier for everybody to change the code without worrying about side-effects. Try it out and see!

    [1]: Please don’t see this as me picking on the Plausible Analytics folks in any way. I think that their project is great and the fact that they open-sourced it makes it a great resource for real-world examples like this one!

    The post Lifting Your Loads for Maintainable Elixir Applications appeared first on Erlang Solutions.

    This post is public: www.erlang-solutions.com/blog/lifting-your-loads-for-maintainable-elixir-applications/

    • chevron_right

      JMP: JMP is Launched and Out of Beta

      news.movim.eu / PlanetJabber • 12 June, 2023 • 2 minutes

    JMP has been in beta for over six years, and today we are finally launching! With feedback and testing from thousands of users, our team has made improvements to billing and phone network compatibility, and has helped develop the Cheogram Android app, which provides a smooth onboarding process, good Android integration, and phone-like UX for users of that platform. There is still a long road ahead of us, but with so much behind us we’re comfortable saying JMP is ready for launch, and we know we can continue to work with our customers and our community for even better things in the future. Check out our launch on Product Hunt today as well!

    JMP’s pricing has always been “while in beta”, so the first question this raises for many is: what will the price be now? The new monthly price for a customer’s first JMP phone number is $4.99 USD / month ($6.59 CAD), but we are running a sale so that all customers will continue to pay the beta price of $2.99 USD / month ($3.59 CAD) until the end of August. We are extending until that time the option for anyone who wishes to prepay for up to 3 years and lock in beta pricing; contact support if you are interested in the prepay option. After August, all accounts that have not prepaid will be put on the new plan with the new pricing. Those who do prepay won’t see their price increase until the end of the period they prepaid for. The new plan will also include a multi-line discount, so second, third, etc. JMP phone numbers will be $2.45 USD / month ($3.45 CAD) when they are set to all bill from the same balance. The new plan will also finally have zero-rated toll-free calling. All other costs (per-minute costs, etc.) remain the same; see the pricing page for details.

    The account settings bot now has an option “Create a new phone number linked to this balance” so that you can get new numbers using your existing account credit and linked for billing to the same balance without support intervention.

    Thanks so much to all of you who have helped us get this far. There is lots more exciting stuff coming this year, and we are so thankful to have such a supportive community along the way with us. Don’t forget we’ll be at FOSSY in July, and be sure to check out our launch on Product Hunt today as well.

    This post is public: blog.jmp.chat/b/launch-2023

    • chevron_right

      Erlang Solutions: Sign up for the RabbitMQ Summit Waiting List

      news.movim.eu / PlanetJabber • 12 June, 2023 • 1 minute

    Mark your calendars! Very Early Bird tickets for the RabbitMQ Summit open on 15th June 2023. By joining the waiting list, you will receive exclusive access to the conference’s best-priced tickets.

    This is your chance to secure your spot at the RabbitMQ Summit at a discounted rate, allowing you to make the most of this incredible learning and networking opportunity.

    Event date and location

    Join us on 20th October 2023 at the Estrel Hotel in Berlin, Germany.

    About the RabbitMQ Summit

    The RabbitMQ Summit 2023 is an eagerly anticipated event that brings together software developers, system architects, and industry experts from around the world.

    RabbitMQ is utilised by more than 35,000 companies globally and is considered a top choice for implementing the Advanced Message Queuing Protocol (AMQP).

    By joining us, you’ll have the opportunity to learn from industry experts, who will provide valuable insights and share best practices for successfully integrating RabbitMQ into your organisation.

    We’re looking forward to creative talks, great vibes and, most importantly, meeting fellow RabbitMQ enthusiasts!

    Ticket details

    You can join the waitlist and sign up for exclusive access here.

    For more information

    You can visit the RabbitMQ Summit website for more details and follow our Twitter account for the latest updates.

    The post Sign up for the RabbitMQ Summit Waiting List appeared first on Erlang Solutions.

    This post is public: www.erlang-solutions.com/blog/sign-up-for-the-rabbitmq-summit-waiting-list/