

      Erlang Solutions: What businesses should consider when adopting AI and machine learning

      news.movim.eu / PlanetJabber · Thursday, 31 August, 2023 - 09:29 · 5 minutes

    AI is everywhere. The chatter about chatbots has crossed from the technology press to the front pages of national newspapers. Worried workers in a wide range of industries are asking if AI will take their jobs.

    Away from the headlines, organisations of all sizes are getting on with the task of working out what AI can do for them. It will almost certainly do something. One survey puts AI’s potential boost to the global economy at an eye-watering US$15.7 trillion by 2030.

    Those major gains will come from productivity enhancements, better data-driven decision making and enhanced product development, among other AI benefits.

    It’s clear from all this that most businesses can’t afford to ignore the AI revolution. It has the very real potential to cut costs and create better customer experiences.

    Quite simply, it can make businesses better. If you’re not at least thinking about AI right now, you should be aware that your competitors probably are.

    So the question is, what AI tools and processes are the right ones for you, and how do you implement groundbreaking technology that actually works, without disrupting day-to-day workflows? Here are a few things to consider.

    AI or machine learning?

    The first thing is to be clear about what you mean by AI.


    AI is an umbrella term for technologies that attempt to mimic human intelligence. Perhaps the most important, at least at the moment, is machine learning (ML).

    ML tools analyse existing data to create business-enhancing insight. The more data they’re exposed to, the more they ‘learn’ what to look out for. They find patterns and trends in mountains of information and do so at speed.

    As far as business is concerned, those patterns might pinpoint unusual sales trends, potential production bottlenecks, or hidden productivity issues. They might reveal a hundred other potential opportunities and challenges. The important thing to remember is that ML of this kind automates the ability to learn from what has gone before.

    ChatGPT and similar technologies, meanwhile, are part of a class of tools called generative AI. These applications also use machine learning techniques to mine huge datasets but do so for fundamentally different reasons.

    If ML looks back at existing materials, generative AI looks forward and creates new ones. One obvious role is in creating content, but generative tools can also produce code, business simulations and product designs.

    These two AI technologies can work together. For example, they might be tasked with automating the production of reports based on detailed analysis of a previous year’s data.

    Work out your goals for AI and machine learning

    Once you know a little about different AI tools, the next step is to understand what they can do for you. Begin with a business goal, not a technology.

    Start by identifying problems you need to solve, or opportunities you want to grasp. Maybe you want to create data-driven marketing campaigns for different customer segments. Maybe your website is crying out for a series of basic ‘how-to’ animations. Maybe you have processes that are ripe for automation.

    Whatever it is, the key is to identify the challenges you face and the opportunities you can take advantage of, and then mould an AI strategy that meets real business goals.

    Become a data-centric organisation

    As we’ve seen, AI and ML are dependent on data, and lots of it. The more data they can use, the more accurate and useful they tend to be.

    But data can be problematic. It is diverse, fragmented and often unstructured. It needs to be stored and moved securely and in line with relevant privacy regulations. All of this means that to make it valuable, you need to create a data management strategy.

    That strategy needs to address challenges related to data sourcing, storage, quality, governance, integration, analysis and culture.

    Corporate data is typically spread across an organisation and often found squirrelled away in the silos of legacy technology systems. It needs to be pooled, formatted and made accessible to the AI and ML tools of different departments and business units. Data is only useful when it’s available.

    AI and machine learning: Start small and keep it simple

    All of this makes implementing AI and ML sound like a highly time-consuming and complex undertaking. But it needn’t be, and especially not at first.

    The secret to most successful technology implementations is to start small and simple. That’s doubly true with something as potentially game-changing as AI and ML.

    For example, start applying ML tools to just a small section of your data, rather than trying to do too much too soon. Pick a specific challenge that you have, focus on it, and experiment with refining processes to achieve better results. Then increase AI use incrementally as the technology proves its worth.

    Bring your team with you

    Much of the recent publicity around AI has focused on doom-laden predictions of mass unemployment. When you talk about adopting AI and ML in your organisation, employee alarm bells may start ringing, which could have serious implications for staff morale and productivity.

    But in most organisations, AI is about augmenting human effort, not replacing it. AI and ML can automate the mundane tasks people don’t like doing, freeing them up for more creative activity. It can provide insight that improves human decision making, but humans still make the decisions. It is far from perfect, and human oversight of AI is required at every step.

    Your communications around the implementation of AI should emphasise these points. AI is a tool for your people to use, not a substitute for their efforts.

    Elixir and Erlang machine learning

    As businesses become familiar with AI and ML tools, they may start creating their own, tailored to their specific needs and circumstances. Organisations that develop and modify AI and ML tools increasingly do so using Elixir, a programming language that runs on the Erlang virtual machine (VM).

    Elixir is perfect for creating scalable AI applications for three core reasons:

    • Concurrency: Elixir is designed to handle lots of tasks simultaneously, which is ideal for AI applications that have to process large amounts of data from different sources.
    • Functional programming: Elixir focuses on function – reaching a desired goal as simply as possible. That’s perfect for AI because the simpler your AI algorithms, the more reliable they are likely to be.
    • Distributed computing: AI applications demand significant computational resources, which developers often spread across multiple machines. Elixir offers in-built distribution capabilities, making distributed computing straightforward.
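    To make the concurrency point concrete, here is a small, hypothetical sketch: the data chunks and the score/1 function are invented for illustration, but the fan-out pattern is idiomatic Elixir.

```elixir
# Hypothetical example: fan chunks of data out across lightweight
# processes and collect the results in input order. score/1 stands in
# for whatever per-chunk analysis an AI/ML pipeline would do.
defmodule ConcurrencyDemo do
  # A trivial per-chunk "analysis": just sum the numbers.
  def score(chunk), do: Enum.sum(chunk)

  # Process each chunk in its own process, bounded by the number of
  # scheduler threads available on this machine.
  def score_all(chunks) do
    chunks
    |> Task.async_stream(&score/1, max_concurrency: System.schedulers_online())
    |> Enum.map(fn {:ok, result} -> result end)
  end
end

IO.inspect(ConcurrencyDemo.score_all([[1, 2, 3], [4, 5], [6]]))
# => [6, 9, 6]
```

    Because Task.async_stream caps the number of simultaneous processes, the same code scales from three chunks to millions of chunks without change.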

    In addition, Elixir is supported by a wide range of libraries and tools, providing ready-made solutions to challenges and shortening the development journey.

    The result is AI applications that are efficient, scalable and reliable. That’s hugely important because as AI and ML become ever more crucial to business success, effective applications and processes will become a fundamental business differentiator. AI isn’t something you can ignore. If you aren’t already, start thinking about your own AI and ML strategy today.

    Want to know more about efficient, effective AI development with Elixir? Talk to us.

    The post What businesses should consider when adopting AI and machine learning appeared first on Erlang Solutions.


      www.erlang-solutions.com/blog/what-businesses-should-consider-adopting-ai-machine-learning/


      Ignite Realtime Blog: CVE-2023-32315: Openfire vulnerability (update)

      news.movim.eu / PlanetJabber · Monday, 28 August, 2023 - 08:21 · 2 minutes

    A few months ago, we published details about an important security vulnerability in Openfire that is identified as CVE-2023-32315.

    To summarize: Openfire’s administrative console (the Admin Console), a web-based application, was found to be vulnerable to a path traversal attack via the setup environment. This permitted an unauthenticated user to access restricted pages in the Openfire Admin Console reserved for administrative users.

    Leveraging this, a malicious actor can gain access to all of Openfire, and, by extension (through installing custom plugins), much of the infrastructure that is used to run Openfire. The Ignite Realtime community has made available new Openfire releases in which the issue is addressed, and published various mitigation strategies for those who cannot immediately apply an update. Details can be found in the security advisory that we released back in May.

    In the last few days, this issue has seen a considerable increase in exposure: there have been numerous articles and podcasts that discuss the vulnerability. Many of these seem to refer back to a recent blogpost by Jacob Baines at Vulncheck.com, and those that do not still seem to include very similar content.

    Many of these articles point out that there’s a “new way” to exploit the vulnerability. We do indeed see various methods being used in the wild to abuse this vulnerability. Some of these methods leave fewer traces than others, but the level of access that can be obtained through each of these methods is pretty similar (and, sadly, similarly severe).

    Given the renewed attention, we’d like to make clear that there is no new vulnerability in Openfire. The issue, solutions and mitigations that are documented in the original security advisory are still accurate and up to date.

    Malicious actors use a significant amount of automation. By now, it’s almost safe to assume that your instance has been compromised if you’re running an unpatched instance of Openfire with its administrative console exposed to the unrestricted internet. Tell-tale signs are high CPU loads (from crypto-miners being installed) and the appearance of new plugins (which carry the malicious code), but these signs are by no means present on every compromised system.

    We continue to urge everyone to update Openfire to its latest release, and to carefully review the security advisory that we released back in May, applying the mitigations where applicable.

    For other release announcements and news, follow us on Twitter and Mastodon.



      Erlang Solutions: Future-proofing legacy systems with Erlang

      news.movim.eu / PlanetJabber · Thursday, 24 August, 2023 - 07:20 · 4 minutes

    Relying on outdated legacy systems often serves as the biggest hindrance to both innovation and optimisation for businesses today. Since many of these systems have been used for years, if not multiple decades, the significant costs involved with replacing a system entirely are rarely within budgets, particularly in today’s business climate.

    But that doesn’t mean legacy systems should be left as is. Erlang is a resilient and proven high-level programming language that, when utilised effectively, can improve current legacy systems to ensure they remain both secure and efficient in the future.

    Why Should I Future-Proof My Legacy System Instead of Replacing It?

    The vast majority of companies are still relying on legacy systems due to the speed at which technology continues to advance. As an example, in 2021, 91% of UK financial institutions were still operating on at least some form of legacy infrastructure.


    Occasionally, the knee-jerk reaction to the idea of legacy systems is that they should be completely replaced, but this is rarely the ideal solution. Entirely replacing a legacy system often involves completely changing the way your business operates – for starters, a functional replacement can take years to both design and implement. After these lengthy phases are complete, all employees will still require training on the new system so it can be used appropriately.

    Choosing instead to future-proof your current system can therefore be a far more cost-effective solution. Updating a system using a language like Erlang also means that improvements can be incremental, which saves on employee training costs.

    As long as a legacy system is future-proofed properly, your company will also gain the same benefits it would have from an entirely new system. That typically means a more secure infrastructure, a more reliable network, and a more efficient way of managing your operations.

    Using Erlang to Effectively Future-Proof Legacy Systems

    Erlang offers a number of notable benefits when applied to legacy system improvements and future-proofing across multiple sectors. Below are some of the core ways in which Erlang serves as the ideal programming language for these applications.

    Restructuring with Erlang Can Increase Availability and Reduce Downtime

    Erlang can be utilised to establish and bolster core systems, as Erlang Solutions demonstrated with FinTech unicorn Klarna. Our team assisted Klarna by future-proofing their decade-old payment system.

    By using Erlang, Klarna gained the flexibility to adapt its system, incorporating additional tech stacks such as Scala, Clojure and Haskell. With this new combined approach, they were able to increase the overall availability of their system considerably whilst reducing their downtime to zero.

    You can find out more about this particular Erlang success story in our previous blog post here.

    Downtime reduction is essential for business continuity planning today. Outages, often caused by outdated legacy systems, have been on the rise over the past few years, alongside the costs incurred by these issues. By future-proofing your system with Erlang, your business can protect against these risks by building more resilient infrastructure.

    Erlang Allows Legacy Systems to Scale

    If a system isn’t able to appropriately scale, it will inevitably become obsolete once your business grows big enough, or technology advances long past its capabilities. As shown by the above Klarna success story, Erlang enables a system to become more scalable thanks to both its flexibility and reliability as a programming language.

    Erlang is also highly scalable because it can accommodate large data volumes, which is crucial in today’s data-driven economy. This makes it a particularly adept solution for legacy systems used in data-heavy industries, like the financial and healthcare sectors.

    Creating a Flexible and Adaptable Legacy System with Erlang

    By updating the flexibility and adaptability of legacy systems, Erlang enables greater tech integration whilst ensuring your business is fully prepared for whatever the future might present.

    This adaptability means that your legacy system can also access additional improvements through other technologies. A great example is the ability to integrate RabbitMQ to further scale your system, due to it being an open-source message-broker written in Erlang.

    Using Erlang to Increase Legacy System Security

    One of the main concerns with an outdated system is that it leaves business-critical data and applications vulnerable to cybercrime. In 2020, an estimated 78% of businesses were still using outdated legacy systems to support business-critical operations and to hold sensitive data. Because Erlang does not manipulate memory directly, modernising a legacy system with it closes off many traditional memory-safety vulnerabilities, such as buffer overflows, helping to keep that sensitive data secure.

    Erlang Allows Your Legacy System to Become More Innovative, Without Replacing it Entirely

    Innovation continues to be a desirable trend for many businesses, as a means of differentiating from competitors and securing business growth in increasingly tech-focused markets. As a versatile language, Erlang can facilitate improved innovation in your legacy system.

    Much of this can be achieved by adapting to new technologies dependent on your business needs. But by refactoring legacy systems, companies are also able to provide more innovative services for customers. Erlang Solutions achieved this with OTP Bank, utilising Luerl alongside an improved and future-proofed banking infrastructure to create a modern banking system for the largest commercial bank in Hungary.

    Future-Proofing Your Legacy System with Erlang Solutions

    In 2023 and beyond, it’s of paramount importance that your business considers how your legacy system can be improved, regardless of the sector you operate within.

    The Erlang Solutions team has decades of experience using Erlang to improve legacy system environments, whilst assessing the potential for further innovations with Elixir, RabbitMQ and other solutions.

    If you’d like us to assess how Erlang could improve your system, or you’d like to find out more about the process, please don’t hesitate to contact our team directly.

    The post Future-proofing legacy systems with Erlang appeared first on Erlang Solutions.


      www.erlang-solutions.com/blog/future-proofing-legacy-systems-with-erlang/


      Erlang Solutions: 5 ways Elixir programming can improve business performance

      news.movim.eu / PlanetJabber · Thursday, 10 August, 2023 - 07:00 · 5 minutes

    Elixir is a simple, lightweight programming language that is built on top of the Erlang virtual machine. It offers straightforward syntax, impressive performance and a raft of powerful features. It uses your digital resources in the most efficient way.

    This is all very well, but what does that mean in practice? Aside from impressing your web development team, what can Elixir do for your business?

    In this blog, we’ll look at how Elixir’s benefits translate into competitive advantage, and how it helps you reduce cost, time to market and process efficiency. Or to put it another way, here are five ways Elixir programming can improve business performance.

    The joys of simplicity

    In comparison to other programming languages, Elixir is relatively simple to master. It borrows from other languages, which means experienced programmers tend to pick it up quite quickly. It uses easy-to-grasp syntax, which helps developers work more quickly using less code.

    Essentially, Elixir focuses on function. It’s all about getting the desired result in the simplest possible way.

    What does that mean for your business? Most obviously, Elixir programming can speed up the development of new software or updates to existing applications. It allows developers to do more with less, which is a major advantage during a talent shortage.

    In addition, it tends to produce more reliable programs. Simple code is easier to debug, while complexity increases the chances of something going wrong.

    The virtues of concurrency

    Concurrency is the ability to handle lots of tasks at the same time, and Elixir is designed with concurrency at its core.

    It’s a lightweight language, which means it runs processes in a highly resource-efficient way. With processes that typically use less than 1KB of RAM, running lots of them simultaneously is no problem for Elixir-based applications.

    But why is this beneficial in wider business terms? Well, it means applications can handle large numbers of users and instructions without slowing down, creating better experiences. Concurrency also improves reliability. Concurrent processes run independently of each other, which means a problem with one will not affect the performance of the rest.
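    As a rough sketch of how cheap these processes are (the count and the payload below are arbitrary, invented for this example):

```elixir
# Rough sketch: spawn 100_000 lightweight processes, each parked on a
# receive, to show how inexpensive BEAM processes are.
parent = self()

pids =
  for i <- 1..100_000 do
    spawn(fn ->
      receive do
        :report -> send(parent, {:value, i})
      end
    end)
  end

# Processes run independently of one another; ask the 42nd for its value.
send(Enum.at(pids, 41), :report)

value =
  receive do
    {:value, v} -> v
  end

IO.puts("process 42 holds #{value}")
# prints "process 42 holds 42"
```

    On a typical laptop this runs in well under a second, which is the practical meaning of concurrency being at Elixir’s core.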

    Making the most of your resources

    As we’ve seen, Elixir helps you make the most of your human resources. At the same time, it also helps you to fully utilise your digital ones.

    Elixir automatically uses all of the processing capacity at its disposal. If that capacity increases, it will utilise the extra resources without requiring your developers to write lots of new code. Put simply, Elixir-based applications tend to run at the optimal performance for their hardware environment.

    This has a number of advantages for your business. Applications make full use of available processing power to provide faster, smoother user experiences. From an ROI point of view, it means none of your expensive IT kit goes to waste.

    By using resources in an optimal way, Elixir also makes it easier to quickly scale your software and applications. More about that below.

    Elixir means simple scalability

    Scalability is everything in a digital world. Can your applications serve 10,000 users as easily as 500, without any drop in performance? How about 100,000?

    In other words, can they grow as your business grows? And how easy is it to add capacity quickly, so that you can grasp new opportunities before they disappear?

    Elixir is designed for easy scalability. As we’ve seen, it automatically makes full use of your hardware resources, making it the perfect language for applications that experience frequent spikes in demand. The lightweight nature of Elixir processes also means you can grow to a significant size with limited processing power.

    But when you do reach the limits of your current hardware, Elixir’s adeptness with concurrency makes it easy to share workloads across a cluster of machines.

    An Elixir-based application can run multiple processes on a single machine. Alternatively, it can run millions of processes across lots of machines, creating a highly scalable environment. Elixir creates seamless communication channels between all the elements of a distributed system, which further encourages the efficient use of resources.
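    A minimal sketch of that distribution model, assuming a second named node has already been started beforehand (the node name below is a placeholder for this example):

```elixir
# Sketch only: assumes a second node was started separately, e.g.
#   iex --sname b
# The node name :"b@localhost" is a placeholder; real names depend on
# your host name. Run as a plain script, this falls through to the
# else branch.
if Node.alive?() do
  Node.connect(:"b@localhost")
  # Run a function on the remote node; message passing between nodes
  # is handled transparently by the BEAM.
  Node.spawn(:"b@localhost", fn ->
    IO.puts("hello from #{Node.self()}")
  end)
else
  IO.puts("start this script on a named node (elixir --sname a ...) to try it")
end
```

    The point is that the same send/receive primitives work unchanged whether the target process lives on the same machine or another one, which is why clustering needs no third-party tooling.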

    You can do a lot of this with other programming languages, but not generally so easily. You may need to integrate third-party tools, for example. That adds to the cost, development hours and time to market. The beauty of Elixir is that scalability is built in.

    Elixir in practice

    When you add all this together, it creates significant performance advantages. Everything we’ve discussed so far – simplicity, concurrency, optimal resource use and scalability – coalesce to create more robust, efficient and future-proof applications.

    This is highlighted by our work with Bleacher Report, the second-largest sports website in the world.


    At peak times, the site’s mobile app serves over 200,000 concurrent requests. Bleacher Report wanted to transition the app from Ruby to Elixir after it started presenting technical challenges at scale. That transition, supported by Erlang Solutions, created a much faster, more scalable application that was also significantly less resource-intensive. The system now easily handles over 200 million push notifications per day, and the number of servers it needs has been reduced from 150 to eight. If you’d like to know more about that transformation, and what we did to support it, you can read the full case study here.

    The Elixir business performance benefit

    As we’ve seen, programming language matters. It matters for performance, cost, speed to market and scalability. Elixir, based on the powerful Erlang engine, produces positive results in all these areas.

    Elixir is still a relatively young language – barely a decade or so old, in fact – but that is to its benefit. It was designed for the challenges that digital-first businesses face today, and not for an era when the mobile internet was still in its infancy. It is already used by scores of top-tier companies to create better user experiences – and to create them faster and at scale.

    Want to know more about efficient, effective development with Elixir, and how it can enhance your own business performance? Why not drop us a line?

    The post 5 ways Elixir programming can improve business performance appeared first on Erlang Solutions.


      www.erlang-solutions.com/blog/5-ways-elixir-can-improve-business-performance/


      Erlang Solutions: Blockchain in Sustainable Programming

      news.movim.eu / PlanetJabber · Thursday, 3 August, 2023 - 10:15 · 5 minutes

    The benefits of blockchain implementation across multiple sectors are well-documented, but how can this decentralised solution be used to achieve more sustainable programming?

    As the effects of the ongoing climate crisis continue to impact weather patterns and living conditions across the planet, we must continue to make every aspect of our lives, from transport and energy usage to all of our technology, greener and more sustainable.

    Sustainable programming and green coding practices play a crucial role in this transition. These concepts exist to help both coding and programming become as environmentally efficient as possible, by minimising the energy consumption of these processes. But sustainable programming also means ensuring that the solutions we program can be used to promote and achieve sustainable goals in the future.

    Blockchain still represents a relatively new technology, but as its capabilities continue to expand, so does its role in creating a greener industry for programmers as well as the wider tech sector. To better understand this, it’s important to first understand how blockchain solutions, supported by Erlang, are utilised today.

    How is Erlang Promoting Innovation in the Blockchain Space?

    Though a complex technology to define, blockchain is essentially a more secure, decentralised way for companies and organisations to record transactions. Both time-stamping and reference links, as well as the ability for anyone with access rights to track transactions but not alter them, provide an opportunity for blockchain to completely change the way organisations handle data. This is achieved through the 6 main blockchain principles.

    As a coding language, Erlang represents the ideal foundation for blockchain solutions thanks to several key benefits. Firstly, Erlang is a high-level, functional language that can be quickly deployed, which is often a necessity in the fast-moving, competitive markets where blockchain is usually deployed, such as fintech.

    Both Erlang and Elixir also avoid manipulating memory directly, which makes them resistant to many traditional memory-safety vulnerabilities. This enables them to offer safer, more secure blockchain solutions.

    Companies are also opting to use Erlang for blockchain due to its high availability, its resiliency, and the fact that it’s massively concurrent, among other benefits. You can learn more about each of these, as well as the benefits mentioned above, in our recent tech deep dive.

    Once established, Erlang blockchains can be used to better support sustainable programming initiatives in several key ways.

    Contemporary and Sustainable Uses for Blockchain Technology

    It’s important to first note that early blockchain implementations consumed a lot of energy to power their decentralised network, which raised concerns regarding its sustainability as a solution. However, continued innovations have allowed companies to limit this issue considerably.

    Ethereum, currently the world’s second-largest blockchain by market cap, was able to cut the energy consumption of its network by 99.9% late last year.

    Source: Statista

    This was achieved thanks to a switch from a Proof of Work (PoW) chain to a new Proof of Stake (PoS) approach, which Ethereum called “The Merge”. There are continued hopes that similar changes in the green coding behind the blockchain can access further energy consumption reductions in the future.

    PwC has created what’s known as its Blockchain Sustainability Framework to promote these improvements and to ensure future efforts can further support sustainable programming. In addition to these energy consumption improvements, blockchain can also work sustainably for businesses, governments and organisations in a myriad of other ways.

    Blockchain’s Role in Sustainable Infrastructure and Renewable Energy

    The OECD (Organisation for Economic Co-operation and Development) recently published a case study analysing how blockchain technologies can serve as a digital enabler for sustainable infrastructure.

    They confirmed that sustainable infrastructure services are already being impacted by blockchain technology and that its core competencies could be applied in several use cases, from monitoring infrastructural standards to optimising emissions certificate trading systems.

    The UN’s Environment Programme published a similar article last year, evidencing blockchain’s role in fighting the ongoing climate crisis. Several businesses worldwide have already used blockchain to support renewable energy projects and to reduce their future energy costs.

    Improved Monitoring of Supply Chains Through Blockchain Smart Contracts

    By implementing smart contracts stored on blockchain technology, companies gain opportunities to better track and automate their supply chain logistics. Smart contracts are programs that run automatically and securely on the blockchain once certain pre-determined conditions have been met.

    This often removes the need for intermediaries, considerably reducing the time taken on signing agreements. It also then provides an automated, optimised way to manage stock, conduct peer-to-peer transactions or manage a supply chain.

    But these contracts can also be used to achieve greener outcomes through the sustainable programming potential of blockchain. By creating smart contracts, companies can track the performance of supply chains, creating clear data on environmental impacts. This data can be monitored, and operational improvements can be made to reduce these emissions. Often, these changes can also cut costs in addition to creating a more sustainable supply chain.

    Tracking Sustainability Metrics Through Blockchain

    Further to supply chain impacts, blockchain technology allows companies to track a number of different sustainability metrics, such as carbon emissions, renewable energy credits, waste reduction and other variables.

    All of these metrics can be tracked in real-time, creating actionable data which companies can use to become more sustainable and further optimise their business practices.

    As blockchain technology continues to advance, new monitoring solutions, such as the ability to track plastic production or water usage, will enable both more detailed data and the capacity to implement even more sustainable changes in areas like manufacturing and product design. These improvements could benefit both companies and governments in the near future.

    Accessing Blockchain Benefits with Erlang Solutions

    Blockchain’s role in sustainable programming will only continue to grow as the technology develops. Companies should be looking to build the foundation of their blockchain efforts today, so they can continue to access these benefits in the years ahead.

    Our team at Erlang Solutions continues to work closely with Erlang as a programming language to unlock its potential in blockchain advancements for companies worldwide.

    To find out more about how Erlang Solutions blockchain support could help your company achieve more sustainable programming, contact our team today.

    The post Blockchain in Sustainable Programming appeared first on Erlang Solutions.


      www.erlang-solutions.com/blog/role-blockchain-in-sustainable-programming/


      Erlang Solutions: Ship RabbitMQ logs to Elasticsearch

      news.movim.eu / PlanetJabber · Thursday, 27 July, 2023 - 10:01 · 3 minutes

    RabbitMQ is a popular message broker that facilitates the exchange of data between applications. However, as with any system, it’s important to have visibility into the logs generated by RabbitMQ to identify issues and ensure smooth operation. In this blog post, we’ll walk you through the process of shipping RabbitMQ logs to Elasticsearch, a distributed search and analytics engine. By centralising and analysing RabbitMQ logs with Elasticsearch, you can gain valuable insights into your system and easily troubleshoot any issues that arise.

    Logs processing system architecture

    To build this architecture, we’re going to set up four components, each with its own role:

    • A logs publisher
    • A RabbitMQ server with a queue to publish data to and receive data from
    • A Logstash pipeline to process data from the RabbitMQ queue
    • An Elasticsearch index to store the processed logs

    Components of the log-processing system

    Installation

    1. Logs Publisher

    Logs can come from any software. It can be from a web server (Apache, Nginx), a monitoring system, an operating system, a web or mobile application, and so on. The logs give information about the working history of any software.

    If you don’t have a log source yet, you can use my simple publisher/consumer here: https://github.com/baoanh194/rabbitmq-simple-publisher-consumer

    2. RabbitMQ

    The logs publisher will be publishing the logs to a RabbitMQ queue.

    Instead of going through a very long RabbitMQ installation, we’re going to go with a RabbitMQ Docker instance to make things simple. You can find your preferred operating system here: https://docs.docker.com/engine/install/

    To start a RabbitMQ container, run the following command:

    RabbitMQ container command
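    The command in the screenshot above is not reproduced as text; a typical invocation looks like this (the container name and image tag are assumptions, so pick the version you need):

```shell
# Start RabbitMQ with the management plugin enabled.
# Port 5672 carries AMQP traffic; port 15672 serves the management UI.
docker run -d --name rabbitmq \
  -p 5672:5672 \
  -p 15672:15672 \
  rabbitmq:3-management
```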

    This command starts a RabbitMQ container with the management plugin enabled. Once the plugin is enabled, you can access the RabbitMQ management console at http://localhost:15672/ in your web browser. By default, the username/password is guest / guest .

    RabbitMQ container

    3. Elasticsearch

    Go and check this link to install and configure Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html

    To store RabbitMQ data for visualisation in Kibana, you need to start an Elasticsearch container. You can do this by running the following command (I’m using Docker to set up Elasticsearch):

    Elasticsearch command
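    The screenshot above is not legible as text; a single-node setup for local testing might look like this (the version tag is an assumption, and a production deployment needs proper security settings):

```shell
# Run a single-node Elasticsearch instance for local testing.
docker network create elastic
docker run -d --name elasticsearch --net elastic \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:8.8.0
```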

    When you start Elasticsearch for the first time, some security configuration is required.

    4. Logstash

    If you haven’t installed or worked with Logstash before, don’t worry. Have a look at the Elastic docs: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

    It’s very detailed and easy to read.

    I installed Logstash on macOS with Homebrew:

    Logstash on MacOS
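    For reference, the Homebrew installation boils down to a single command (Elastic also provides an official Homebrew tap if you prefer their packaging):

```shell
# Install Logstash via Homebrew.
brew install logstash
```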

    Once Logstash is installed on your machine, let’s create the pipeline that will process the data.

    Paste the code below into your pipelines.conf file (put the new config file under /opt/homebrew/etc/logstash ):

    Pipeline on Logstash
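    The pipeline itself appears only as an image above; a minimal sketch of such a pipeline, assuming the publisher sends to a queue named logs and Elasticsearch runs locally (the queue name, credentials, and index name are all assumptions to adapt), could look like:

```
input {
  rabbitmq {
    host => "localhost"
    port => 5672
    queue => "logs"
    durable => true
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "rabbitmq-logs-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "<your elastic password>"
  }
}
```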

    Run your pipeline with Logstash:

    Run pipeline in Logstash
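    On a Homebrew install, running the pipeline in the foreground looks something like this (the config path matches the location mentioned above; adjust it for your setup):

```shell
# Run Logstash with the pipeline config created above.
logstash -f /opt/homebrew/etc/logstash/pipelines.conf
```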

    Here is a screenshot of what you should see if your RabbitMQ Docker instance is running and your Logstash pipeline is working correctly:

    Logstash Pipeline

    Let’s ship some logs

    Now everything is ready. Go to the logs publisher’s root folder and run the send.js script:

    send.js script
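    With the sample publisher linked above, this amounts to the following (assuming Node.js is installed; the npm install step is an assumption about the repository's setup):

```shell
# From the publisher repository's root folder:
npm install     # fetch the project's dependencies
node send.js    # publish log messages to the RabbitMQ queue
```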

    You can check that the data was sent to Elasticsearch:

    curl -k -u elastic https://localhost:9200/_search?pretty

    If everything goes well, you will see a result like the screenshot below:

    Elastic

    Configure Kibana to Visualise RabbitMQ Data

    Additionally, you can configure Kibana to visualise the RabbitMQ data in Elasticsearch. By configuring Kibana, you can create visualisations such as charts, graphs, and tables that make it easy to understand the data and identify trends or anomalies. For example, you could create a chart that shows the number of messages processed by RabbitMQ over time, or a table that shows the top senders and receivers of messages.

    Kibana also allows you to build dashboards, which are collections of visualisations and other user interface elements arranged on a single screen. Dashboards can be shared with others in your organisation, making it easier for team members to collaborate and troubleshoot issues. You can refer to this link for how to set up Kibana: https://www.elastic.co/pdf/introduction-to-logging-with-the-elk-stack

    Conclusion

    In summary, shipping RabbitMQ logs to Elasticsearch offers benefits such as centralised log storage, fast search and analysis, and improved system troubleshooting. By following the steps outlined in this blog post, you can set up a system that handles large volumes of logs and gain real-time insights into your messaging system. Whether you’re running a small or large RabbitMQ instance, shipping logs to Elasticsearch can help you optimise and scale your system.

    The post Ship RabbitMQ logs to Elasticsearch appeared first on Erlang Solutions .


      Paul Schaub: PGPainless meets the Web-of-Trust

      news.movim.eu / PlanetJabber · Tuesday, 25 July, 2023 - 14:02 · 7 minutes

    We are very proud to announce the release of PGPainless-WOT , an implementation of the OpenPGP Web of Trust specification using PGPainless.

    The release is available on the Maven Central repository .

    Work on this project began a bit over a year ago as an NLnet project which received funding through the European Commission’s NGI Assure program . Unfortunately, somewhere along the way I lost motivation to work on the project, as I failed to see any concrete users. Other projects seemed more exciting at the time.

    Fast forward to the end of May, when Wiktor reached out and connected me with Heiko, who was interested in the project. The two of us decided to work together, and I quickly rebased my – at this point ancient and outdated – feature branch onto the latest PGPainless release. We started the joint work at the end of June, and roughly a month later, today, we can release a first version 🙂

    Big thanks to Heiko for his valuable contributions and the great boost in motivation working together gave me 🙂
    Also big thanks to NLnet for sponsoring this project in such a flexible way.
    Lastly, thanks to Wiktor for his talent to connect people 😀

    The Implementation

    We decided to write the implementation in Kotlin. I had attempted to learn Kotlin multiple times before, but had quickly given up each time without an actual project to work on. This time I stayed persistent, and now I’m a convinced Kotlin fan 😀 Rewriting the existing codebase was a breeze, the line count dropped drastically, and the amount of syntactic sugar that was suddenly available blew me away! Now I’m considering steadily porting PGPainless to Kotlin. But back to the Web-of-Trust.

    Our implementation is split into 4 modules:

    • pgpainless-wot parses OpenPGP certificates into a generalized form and builds a flow network by verifying third-party signatures. It also provides a plugin for pgpainless-core .
    • wot-dijkstra implements a query algorithm that finds paths on a network. This module has no OpenPGP dependencies whatsoever, so it could also be used for other protocols with similar requirements.
    • pgpainless-wot-cli provides a CLI frontend for pgpainless-wot .
    • wot-test-suite contains test vectors from Sequoia PGP’s WoT implementation.

    The code in pgpainless-wot can either be used standalone via a neat little API, or it can be used as a plugin for pgpainless-core to enhance the encryption / verification API:

    /* Standalone */
    Network network = PGPNetworkParser(store).buildNetwork();
    WebOfTrustAPI api = new WebOfTrustAPI(network, trustRoots, false, false, 120, refTime);
    
    // Authenticate a binding
    assertTrue(
        api.authenticate(fingerprint, userId, isEmail).isAcceptable());
    
    // Identify users of a certificate via the fingerprint
    assertEquals(
        "Alice <alice@example.org>",
        api.identify(fingerprint).get(0).getUserId());
    
    // Lookup certificates of users via userId
    LookupAPI.Result result = api.lookup(
        "Alice <alice@example.org>", isEmail);
    
    // Identify all authentic bindings (all trustworthy certificates)
    ListAPI.Result listResult = api.list();
    
    
    /* Or enhancing the PGPainless API */
    CertificateAuthorityImpl wot = CertificateAuthorityImpl
        .webOfTrustFromCertificateStore(store, trustRoots, refTime);
    
    // Encryption
    EncryptionStream encStream = PGPainless.encryptAndOrSign()
        [...]
        // Add only recipients we can authenticate
        .addAuthenticatableRecipients(userId, isEmail, wot)
        [...]
    
    // Verification
    DecryptionStream decStream = [...]
    [...]  // finish decryption
    MessageMetadata metadata = decStream.getMetadata();
    assertTrue(metadata.isAuthenticatablySignedBy(userId, isEmail, wot));

    The CLI application pgpainless-wot-cli mimics Sequoia PGP’s neat sq-wot tool, both in argument signature and output format. This has been done in an attempt to enable testing of both applications using the same test suite.

    pgpainless-wot-cli can read GnuPG’s keyring, can fetch certificates from the Shared OpenPGP Certificate Directory (using pgpainless-cert-d of course :P), and can ingest arbitrary .pgp keyring files.

    $ ./pgpainless-wot-cli help     
    Usage: pgpainless-wot [--certification-network] [--gossip] [--gpg-ownertrust]
                          [--time=TIMESTAMP] [--known-notation=NOTATION NAME]...
                          [-r=FINGERPRINT]... [-a=AMOUNT | --partial | --full |
                          --double] (-k=FILE [-k=FILE]... | --cert-d[=PATH] |
                          --gpg) [COMMAND]
      -a, --trust-amount=AMOUNT
                             The required amount of trust.
          --cert-d[=PATH]    Specify a pgp-cert-d base directory. Leave empty to
                               fallback to the default pgp-cert-d location.
          --certification-network
                             Treat the web of trust as a certification network
                               instead of an authentication network.
          --double           Equivalent to -a 240.
          --full             Equivalent to -a 120.
          --gossip           Find arbitrary paths by treating all certificates as
                               trust-roots with zero trust.
          --gpg              Read trust roots and keyring from GnuPG.
          --gpg-ownertrust   Read trust-roots from GnuPGs ownertrust.
      -k, --keyring=FILE     Specify a keyring file.
          --known-notation=NOTATION NAME
                             Add a notation to the list of known notations.
          --partial          Equivalent to -a 40.
      -r, --trust-root=FINGERPRINT
                             One or more certificates to use as trust-roots.
          --time=TIMESTAMP   Reference time.
    Commands:
      authenticate  Authenticate the binding between a certificate and user ID.
      identify      Identify a certificate via its fingerprint by determining the
                      authenticity of its user IDs.
      list          Find all bindings that can be authenticated for all
                      certificates.
      lookup        Lookup authentic certificates by finding bindings for a given
                      user ID.
      path          Verify and lint a path.
      help          Displays help information about the specified command
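    As an illustration (not taken from the original post), a hypothetical invocation might look like the following; the fingerprints and user ID are placeholders, and the argument shape is assumed to follow the sq-wot tool that the CLI mimics:

```shell
# Authenticate the binding between a certificate and a user ID,
# reading certificates from the shared certificate directory
# and using one certificate as trust root.
./pgpainless-wot-cli --cert-d \
    -r 0123456789ABCDEF0123456789ABCDEF01234567 \
    authenticate 89AB0123456789ABCDEF0123456789ABCDEF0123 "Alice <alice@example.org>"
```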

    The README file of the pgpainless-wot-cli module contains instructions on how to build the executable.

    Future Improvements

    The current implementation still has potential for improvements and optimizations. For one, the Network object, which contains the result of many costly signature verifications, is currently ephemeral and cannot be cached. In the future it would be desirable to change the network parsing code to be agnostic of reference time, including any verifiable signatures as edges of the network, even if those signatures are not yet, or no longer, valid. This would allow us to implement some caching logic that could write out the network to disk, ready for future web of trust operations.

    That way, the network would only need to be re-created whenever the underlying certificate store is updated with new or changed certificates (which could also be optimized to only update relevant parts of the network). The query algorithm would need to filter out any inactive edges with each query, depending on the query’s reference time. This would be far more efficient than re-creating the network with each application start.

    But why the Web of Trust?

    End-to-end encryption suffers from one major challenge: when sending a message to another user, how do you know that you are using the correct key? How can you prevent an active attacker from handing you fake recipient keys, impersonating your peer? Such a scenario is called a Machine-in-the-Middle (MitM) attack .

    On the web, the most common countermeasure against MitM attacks is the certificate authority, which certifies the TLS certificates of website owners, requiring them to first prove their identity to some extent. Let’s Encrypt, for example, first verifies that you control the machine that serves a domain before issuing a certificate for it. Browsers trust Let’s Encrypt, so users can authenticate your website by validating the certificate chain from the Let’s Encrypt CA key down to your website’s certificate.

    The Web-of-Trust follows a similar model, with the difference that you are your own trust-root and decide which CAs you want to trust (which in some sense makes you your own “meta-CA”). The Web-of-Trust is therefore far more decentralized than the fixed set of TLS trust-roots baked into web browsers. You can use your own key to issue trust signatures on keys of contacts that you know are authentic. For example, you might have met Bob in person and he handed you a business card containing his key’s fingerprint. Or you helped a friend set up their encrypted communications, and in the process you two exchanged fingerprints manually.

    In all these cases, in order to initiate a secure communication channel, you needed to exchange the fingerprint via an out-of-band channel. The real magic only happens, once you take into consideration that your close contacts could also do the same for their close contacts, which makes them CAs too. This way, you could authenticate Charlie via your friend Bob, of whom you know that he is trustworthy, because – come on, it’s Bob! Everybody loves Bob!

    An example OpenPGP Web-of-Trust network. Simply by delegating trust to the Neutron Mail CA and to Vincenzo, Aaron is able to authenticate a number of certificates.

    The Web-of-Trust becomes really useful if you work with people who share the same goal. Your workplace might be one of them, or your favorite Linux distribution’s maintainer team, or that non-profit organization/activist collective that is fighting for a better tomorrow. At work, for example, your employer’s IT department might use a local CA (such as an instance of the OpenPGP CA ) to help employees communicate safely. You trust your workplace’s CA, which then introduces you safely to your colleagues’ authentic key material. It even works across business boundaries, e.g. if your workplace has a cooperation with ACME and you need to establish a safe communication channel to an ACME employee. In this scenario, your company’s CA might delegate to the ACME CA, allowing you to authenticate ACME employees.

    As you can see, the Web-of-Trust becomes more useful the more people are using it. Providing accessible tooling is therefore essential to improve the overall ecosystem. In the future, I hope that OpenPGP clients such as MUAs (e.g. Thunderbird) will embrace the Web-of-Trust.



      Ignite Realtime Blog: Jabber Browsing Openfire Plugin 1.0.1 released

      news.movim.eu / PlanetJabber · Monday, 24 July, 2023 - 17:34

    The Ignite Realtime community is happy to announce a new release of the Jabber Browsing plugin for Openfire.

    This is a plugin for the Openfire Real-time Communications server. It provides an implementation for service discovery using the jabber:iq:browse namespace, as specified in XEP-0011: Jabber Browsing . Note that this feature is considered obsolete! The plugin should only be used by people that seek backwards compatibility with very old and very specific IM clients.

    This release is a maintenance release. It adds translations and fixes one bug. More details are available in the changelog .

    Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the Jabber Browsing plugin archive page .

    If you have any questions, please stop by our community forum or our live groupchat .

    For other release announcements and news follow us on Twitter and Mastodon .

    1 post - 1 participant

    Read full topic


      Ignite Realtime Blog: Agent Information plugin for Openfire release 1.0.1

      news.movim.eu / PlanetJabber · Monday, 24 July, 2023 - 09:25

    The Ignite Realtime community is happy to announce a new release of the Agent Information plugin for Openfire.

    This plugin implements the XEP-0094 ‘Agent Information’ specification for service discovery using the jabber:iq:agents namespace. Note that this feature is considered obsolete! The plugin should only be used by people that seek backwards compatibility with very old and very specific IM clients.

    This release is a maintenance release. It adds translations and fixes one bug. More details are available in the changelog .

    Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the Agent Information plugin archive page .

    If you have any questions, please stop by our community forum or our live groupchat .

    For other release announcements and news follow us on Twitter and Mastodon .

    1 post - 1 participant

    Read full topic