
      Isode: M-Guard 1.5 – New Capabilities

      news.movim.eu / PlanetJabber · Monday, 17 July, 2023 - 12:28 · 1 minute

    M-Guard is an XML guard that is used at a network boundary to control traffic. An M-Guard instance is an application-level data diode, with traffic flowing in one direction only. Commonly, M-Guard instances are deployed in pairs, one controlling flow in each direction. The following is a list of the new capabilities introduced in version 1.5.

    M-Guard Certificate Authority

    M-Guard uses X.509 certificates to verify peers. It does not use CRL checking or OCSP to check for certificate revocation, as the network connections required to do so would pose an unacceptable security risk. This means that in the event of a certificate compromise the whole PKI needs to be replaced, which in practice means that each M-Guard instance needs its own PKI.

    The M-Guard product has added a product-specific certificate authority (CA) to manage the certificates used to authenticate GCXP peers. This provides a convenient way to manage the PKI for each M-Guard deployment.

    The M-Guard CA functionality is provided in M-Guard Console.

    Guard Isolation Support

    Guard instances are now isolated from each other and from other processes on the M-Guard Appliance. This isolation is built on the FreeBSD “Jail” capability and increases the protection of each guard instance. Each guard now has independent IP addressing.

    System Integrity Verification

    The M-Guard Appliance system software now includes a manifest of system files with cryptographic hashes for each file. M-Guard Appliance verifies the current system against this manifest at boot and at regular intervals (hourly by default) to provide notice of any detected changes to the system files.

    M-Guard Console can be used to verify system integrity of an M-Guard Appliance against a separately distributed, signed manifest, which enables regular and more robust checking for changes to system files. Customers may implement additional checks against this signed manifest.
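    The boot-time check described above amounts to hashing every system file and comparing the result against a stored manifest. The short Python sketch below shows the idea; the manifest format and function names are illustrative only, not M-Guard's actual implementation (which additionally signs the manifest).

```python
import hashlib
from pathlib import Path

def build_manifest(root):
    """Map each file under root (relative path) to its SHA-256 hex digest."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            rel = path.relative_to(root).as_posix()
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def verify_manifest(root, manifest):
    """Return the relative paths that changed, appeared, or disappeared."""
    current = build_manifest(root)
    return sorted(p for p in set(manifest) | set(current)
                  if manifest.get(p) != current.get(p))
```

    A real integrity checker would run such a comparison periodically (M-Guard does so hourly by default) and raise a notification for any non-empty result.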

    Release Artifact Signatures and Signature Verification

    All M-Guard release artifacts are digitally signed. These include:

    • M-Guard Appliance full and update images;
    • M-Guard Appliance manifest, release information, and release notes;
    • M-Guard Console image;
    • M-Guard Console release information and release notes; and
    • M-Guard Admin Guide.

    M-Guard Console supports verification of release artifact signatures to ensure their integrity.

      www.isode.com/company/wordpress/m-guard-1-5-new-capabilities/

      Erlang Solutions: Effortlessly Extract Data from Websites with Crawly YML

      news.movim.eu / PlanetJabber · Friday, 14 July, 2023 - 08:24 · 5 minutes

    The workflow

    So in our ideal world scenario, it should work in the following way:

    1. Pull Crawly Docker image from DockerHub.
    2. Create a simple configuration file.
    3. Start it!
    4. Create a spider via the YML interface.

    The detailed documentation and the example can be found on HexDocs here: https://hexdocs.pm/crawly/spiders_in_yml.html#content

    This article will follow all the steps from scratch, so it is self-contained. But if you have any questions, please don’t hesitate to refer to the original docs or to ping us on the Discussions board.

    The steps

    1. First of all, we will pull Crawly from DockerHub:

    docker pull oltarasenko/crawly:0.15.0

    2. We will re-use the configuration from our previous article, since we need to extract the same data. So let’s create a file called `crawly.config` with the same content as before:

    [{crawly, [
        {closespider_itemcount, 100},
        {closespider_timeout, 5},
        {concurrent_requests_per_domain, 15},
    
        {middlewares, [
                'Elixir.Crawly.Middlewares.DomainFilter',
                'Elixir.Crawly.Middlewares.UniqueRequest',
                'Elixir.Crawly.Middlewares.RobotsTxt',
                {'Elixir.Crawly.Middlewares.UserAgent', [
                    {user_agents, [
                        <<"Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:42.0) Gecko/20100101 Firefox/42.0">>,
                        <<"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36">>
                        ]
                    }]
                }
            ]
        },
    
        {pipelines, [
                {'Elixir.Crawly.Pipelines.Validate', [{fields, [<<"title">>, <<"author">>, <<"publishing_date">>, <<"url">>, <<"article_body">>]}]},
                {'Elixir.Crawly.Pipelines.DuplicatesFilter', [{item_id, <<"title">>}]},
                {'Elixir.Crawly.Pipelines.JSONEncoder'},
                {'Elixir.Crawly.Pipelines.WriteToFile', [{folder, <<"/tmp">>}, {extension, <<"jl">>}]}
            ]
        }]
    }].

    3. Start the container with the following command:

    docker run --name yml_spiders_example \
     -it -p 4001:4001 \
     -v $(pwd)/crawly.config:/app/config/crawly.config \
     oltarasenko/crawly:0.15.0
    

    Once started, you should see debug messages in your console. That is a good sign: it means it worked!

    Now you can open `localhost:4001` (the port mapped in the command above) in your browser, and your journey starts here!

    4. Building a spider

    Once you click Create New Spider, you will see the following basic page that allows you to input your spider code:

    One might say that these interfaces are quite basic for something from 2023, and that is fair. We are backend developers, and we do what we can; this approach achieves the needed results with minimal frontend effort. If you have a passion for improving it, or want to contribute in any other way, you are more than welcome to do so!

    Writing a spider

    The interface above expects well-formed YML, so you need to know what is expected. Let’s start with a basic example, add some explanations, and improve it later.

    I suggest starting by inputting the following YML there:

    name: ErlangSolutionsBlog
    base_url: "https://www.erlang-solutions.com"
    start_urls:
     - "https://www.erlang-solutions.com/blog/web-scraping-with-elixir/"
    fields:
     - name: title
       selector: "title"
    links_to_follow:
     - selector: "a"
       attribute: "href"

    Now if you click the Preview button, you will see what the spider is going to extract from your start URLs:

    So, what you can see here is:

    The spider will extract only one field, called title, which equals “Erlang Solutions.” Besides that, the spider is going to follow these links after the start page:

    "https://www.erlang-solutions.com/", 
    "https://www.erlang-solutions.com#", 
    "https://www.erlang-solutions.com/consultancy/consulting/", 
    "https://www.erlang-solutions.com/consultancy/development/", 
    ....

    The YML format

    • name: A string representing the name of the scraper.
    • base_url: A string representing the base URL of the website being scraped. The value must be a valid URI.
    • start_urls: An array of strings representing the URLs to start scraping from. Each URL must be a valid URI.
    • links_to_follow: An array of objects representing the links to follow when scraping a page. Each object must have the following properties:
      • selector: A string representing the CSS selector for the links to follow.
      • attribute: A string representing the attribute of the link element that contains the URL to follow.
    • fields: An array of objects representing the fields to scrape from each page. Each object must have the following properties:
      • name: A string representing the name of the field.
      • selector: A string representing the CSS selector for the field to scrape.
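    As a rough illustration of the schema rules above, here is a small Python check over an already-parsed spider definition (represented as a plain dict; the function name and error messages are ours, not part of Crawly):

```python
def validate_spider(doc):
    """Validate a parsed spider definition against the schema described above.
    Returns a list of human-readable problems; an empty list means valid."""
    errors = []
    for key in ("name", "base_url"):
        if not isinstance(doc.get(key), str):
            errors.append("'%s' must be a string" % key)
    if not isinstance(doc.get("start_urls"), list) or not doc.get("start_urls"):
        errors.append("'start_urls' must be a non-empty array of URLs")
    # Both object arrays share the same shape check: required string properties.
    for key, props in (("links_to_follow", ("selector", "attribute")),
                       ("fields", ("name", "selector"))):
        for i, obj in enumerate(doc.get(key, [])):
            for prop in props:
                if not isinstance(obj.get(prop), str):
                    errors.append("'%s[%d]' needs string property '%s'" % (key, i, prop))
    return errors
```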

    Finishing

    As in the original article, we plan to extract the following fields:

    title
    author
    publishing_date
    url
    article_body

    Expected selectors are copied from the previous article and can be found using Google Chrome’s inspect & copy approach!

    fields:
     - name: title
       selector: ".page-title-sm"
     - name: article_body
       selector: ".default-content"
     - name: author
       selector: ".post-info__author"
     - name: publishing_date
       selector: ".header-inner .post-info .post-info__item span"
    

    You may have noticed that the url field is not listed here. That is because Crawly automatically adds the URL to every item.

    Now if you hit preview again, you should see the full scraped item:

    Now, if you click Save, you can run the full spider and see the actual results.

    Conclusion

    We hope you like our work! It should reduce the need to write spider code, and perhaps help engage non-Elixir users who can play with it without coding.

    Please don’t be too critical of the product, as everything in this world has bugs, and we plan to improve it over time (as we have already done during the last 4+ years). If you have ideas and improvement suggestions, please drop us a message so we can help!

    The post Effortlessly Extract Data from Websites with Crawly YML appeared first on Erlang Solutions.

      www.erlang-solutions.com/blog/effortlessly-extract-data-from-websites-with-crawly-yml/

      Erlang Solutions: Optimising for Concurrency: Comparing and Contrasting the BEAM and JVM Virtual Machines

      news.movim.eu / PlanetJabber · Friday, 7 July, 2023 - 09:01 · 17 minutes

    In this article we will explore the internals of the BEAM virtual machine, or VM for short, and compare it with the Java Virtual Machine, the JVM.

    The success of any programming language in the Erlang ecosystem can be attributed to three tightly coupled components:

    1. the semantics of the Erlang programming language, which is the foundation on which other languages are implemented,
    2. the OTP libraries and middleware used to build scalable, concurrent and resilient architectures, and
    3. the BEAM virtual machine, tightly coupled to the language semantics and OTP.

    Take any one of these components on its own and you have a potential winner. But take all three together and you have an undisputed winner for the development of scalable, resilient, soft-real-time systems. Quoting Joe Armstrong:

    “You can copy Erlang’s libraries, but if they don’t run on the BEAM, you can’t emulate the semantics.”

    This idea is reinforced by Robert Virding’s First Rule of Programming, which states that “any sufficiently complicated concurrent program written in another language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.”

    In this article we will explore the internals of the BEAM virtual machine. We will compare some of them with the JVM, pointing out why you should pay special attention to them. For a long time these components have been treated as a black box, blindly trusted without an understanding of what goes on behind the scenes. It is time to change that!

    Highlights of the BEAM

    Erlang and the BEAM virtual machine were invented to be a tool for solving a specific problem. They were developed by Ericsson to help implement telecom infrastructure handling both fixed and mobile networks. This infrastructure is by nature highly concurrent and scalable. It has to work in soft real time and may never fail. We don’t want our Hangouts call with our grandmother to suddenly drop because of an error, or our online game of Fortnite to be interrupted because it has to be upgraded. The BEAM virtual machine is optimised to solve many of these challenges, thanks to features that work with a predictable concurrent programming model.

    The secret sauce is Erlang processes, which are lightweight, share no memory, and are managed by schedulers capable of handling millions of them across multiple processors. It uses a garbage collector that runs on a per-process basis and is highly optimised to reduce the impact on other processes. As a result, the garbage collector does not affect the global soft-real-time properties of the system. The BEAM is also the only widely used VM at scale with a purpose-built distribution model, which allows a program to run transparently across multiple machines.

    Highlights of the JVM

    The Java Virtual Machine (JVM) was developed by Sun Microsystems in an attempt to provide a platform where you “write once” and run anywhere. They created an object-oriented language similar to C++, but memory-safe, because its runtime error detection checks array bounds and pointer dereferences. The JVM ecosystem became extremely popular in the Internet era, making it the de facto standard for enterprise server application development. The wide range of applicability was made possible by a virtual machine that caters to many use cases and an impressive set of libraries suited to enterprise development.

    The JVM was designed with efficiency in mind. Most of its concepts are an abstraction of features found in popular operating systems, such as its threading model, which maps closely to operating-system threads. The JVM is highly customisable, including the garbage collector and class loaders. Some state-of-the-art garbage collector implementations provide tunable features catering to a programming model based on shared memory.

    The JVM allows you to modify code while the program is running. And a JIT compiler allows byte code to be compiled to native machine code with the intent of speeding up parts of the application.

    Concurrency in the Java world is mostly about running applications in parallel threads, ensuring that they are fast. Programming with concurrency primitives is a difficult task because of the challenges created by its shared memory model. To overcome these difficulties, there have been attempts to simplify and unify the concurrent programming models, the most successful of which is the Akka framework.

    Concurrency and Parallelism

    When we talk about parallel code execution, we mean that parts of the code run at the same time on multiple processors or computers, while concurrent programming refers to handling events arriving independently. Concurrent execution can be simulated on single-threaded hardware, while parallel execution cannot. Although this distinction may seem pedantic, the difference results in problems that need very different approaches. Think of many cooks making a plate of pasta carbonara. In the parallel approach, the tasks are split across the number of cooks available, and a single portion is completed as quickly as it takes those cooks to complete their specific tasks. In the concurrent world, you get one portion per cook, where each cook does all of the tasks.

    Use parallelism for speed and concurrency for scalability.

    Parallel execution tries to solve an optimal decomposition of the problem into parts that are independent of each other. Boil the water, get the pasta, mix the eggs, fry the ham, grate the cheese. Shared data (or in our example, the dish to be served) is handled with locks, mutexes and other techniques that guarantee correct execution. Another way to look at this is that the data (or ingredients) are present and we want to utilise as many parallel CPU resources as possible to finish the job as quickly as we can.

    Concurrent programming, on the other hand, deals with many events arriving at the system at different times and tries to process all of them within a reasonable time frame. On multi-processor or distributed architectures, part of the execution runs in parallel, but this is not a requirement. Another way to look at it is that the same cook boils the water, gets the pasta, mixes the eggs, and so on, following a sequential algorithm that is always the same. What changes across processes (or cooking runs) is the data (or ingredients) to work on, which exist in multiple instances.

    The JVM is designed for parallelism, the BEAM for concurrency. They are two intrinsically different problems, requiring different solutions.

    The BEAM and Concurrency

    The BEAM provides lightweight processes to give context to the running code. These processes, also called actors, share no memory and communicate through message passing, copying data from one process to another. Message passing is a feature that the virtual machine implements through mailboxes owned by individual processes. It is a non-blocking operation, which means that sending a message from one process to another is almost instantaneous and the sender’s execution is not blocked. Messages are sent in the form of immutable data, copied from the stack of the sending process to the mailbox of the receiving one. This is achieved without needing locks or mutexes between the processes; the only lock is on the mailbox, in case multiple processes send a message to the same recipient in parallel.

    Immutable data and message passing together allow the programmer to write processes that work independently of each other and to focus on the functionality instead of the low-level handling of memory and task scheduling. This simple design works not only within a single process, but also across multiple threads on a local machine running the same VM and, using the built-in distribution, across the network with a cluster of VMs and machines. If the messages are immutable, they can be sent to another thread (or machine) without locks, scaling almost linearly on distributed multi-processor architectures. Processes are addressed in the same way on a local VM as in a cluster of VMs; message sending works transparently regardless of the location of the receiving process.
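    The mailbox model can be loosely mimicked in any language that has thread-safe queues. The Python toy below shows only the shape of the idea: unlike BEAM processes, Python threads are heavyweight, message data is not copied, and there is no pre-emption.

```python
import queue
import threading

class Actor:
    """A toy actor: one private mailbox, one handler, no shared state."""
    def __init__(self, handler):
        self.mailbox = queue.Queue()      # the only synchronised structure
        self.handler = handler
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def send(self, msg):
        """Non-blocking send: enqueue the message and return immediately."""
        self.mailbox.put(msg)

    def _loop(self):
        while True:
            msg = self.mailbox.get()      # wait for the next message
            if msg is None:               # poison pill: shut down
                return
            self.handler(msg)

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()
```

    The only point of contention is the mailbox itself, mirroring the BEAM design where the sole lock is taken when several senders target the same recipient.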

    Because processes share no memory, data can be replicated for resilience and distributed for scale. This means that you can have two instances of the same process on two separate machines, sharing a state update between them. If one machine fails, the other has a copy of the data and can continue handling the request, making the system fault tolerant. If both machines are operational, both processes can handle requests, providing scalability. The BEAM provides highly optimised primitives for all of this to work smoothly, while OTP (the standard library) provides the higher-level constructs that make the life of the programmer easier.

    Akka does a great job of replicating the higher-level constructs, but it is somewhat limited by the lack of primitives provided by the JVM that would allow it to be highly optimised for concurrency. While the JVM primitives enable a wider range of use cases, they make developing distributed systems more complicated, as they have no default features for communication and often rely on a shared memory model. For example, where in a distributed system do you place the shared memory? And what is the cost of accessing it?

    The Scheduler

    We mentioned that one of the strongest features of the BEAM is the ability to break down a program into small, lightweight processes. Managing these processes is the task of the scheduler. Unlike the JVM, which maps its threads to operating-system threads and lets the operating system manage them, the BEAM comes with its own scheduler.

    By default, the scheduler starts one OS thread per core on the machine and optimises the workload between them. Each process consists of code to be executed and a state that changes over time. The scheduler picks the first process in the run queue that is ready to run and allocates it a certain amount of reductions to execute, where one reduction is roughly equivalent to one command. Once the process has either run out of reductions, been blocked by I/O, started waiting for a message, or completed its execution, the scheduler picks the next process in the run queue and dispatches it. This scheduling technique is called pre-emptive.
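    The reduction-budget idea can be sketched with Python generators, where each `yield` stands for one reduction and the budget plays the role of the BEAM's reduction count (the real BEAM allocates a few thousand reductions per slice, not three):

```python
from collections import deque

def schedule(processes, budget=3):
    """Round-robin pre-emptive scheduling of generator-based 'processes'.
    Each generator step counts as one reduction; a process that exhausts
    its budget is pre-empted and sent to the back of the run queue."""
    trace = []                               # which process ran each step
    run_queue = deque(processes.items())     # (name, generator) pairs
    while run_queue:
        name, proc = run_queue.popleft()
        for _ in range(budget):
            try:
                next(proc)
                trace.append(name)
            except StopIteration:            # process finished: drop it
                break
        else:                                # out of reductions: pre-empt
            run_queue.append((name, proc))
    return trace

def work(steps):
    """A 'process' that needs `steps` reductions to finish."""
    for _ in range(steps):
        yield
```

    Running `schedule({"a": work(4), "b": work(2)})` interleaves the two processes: "a" is pre-empted after three reductions, "b" runs to completion, then "a" finishes.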

    We have mentioned the Akka framework several times; its biggest drawback is the need to annotate the code with scheduling points, as scheduling is not done at the JVM level. By taking the control out of the programmer’s hands, soft-real-time properties are preserved and guaranteed, as there is no risk of accidentally causing process starvation.

    Processes can be spread across all available scheduler threads, maximising CPU utilisation. There are many ways to tweak the scheduler, but this is rarely needed and only required in certain edge cases, as the default options cover most usage patterns.

    A sensitive topic that frequently pops up regarding schedulers is how to handle Natively Implemented Functions, or NIFs. A NIF is a snippet of code written in C, compiled as a library and run in the same memory space as the BEAM for speed. The problem with NIFs is that they are not pre-emptive and can disrupt the schedulers. In recent BEAM versions, a new feature called dirty schedulers was added to give better control over NIFs. Dirty schedulers are separate schedulers running on different threads, minimising the disruption a NIF can cause in a system. The word dirty refers to the nature of the code run by these schedulers.

    Garbage Collector

    Modern programming languages use a garbage collector for memory management, and the languages on the BEAM are no exception. Trusting the virtual machine to handle the resources and manage the memory is very handy when you want to write high-level concurrent code, as it simplifies the task. The underlying garbage collector implementation is fairly straightforward and efficient, thanks to the memory model based on immutable state. Data is copied, not mutated, and the fact that processes share no memory removes any process interdependencies which, as a result, do not need to be managed.

    Another feature of the BEAM garbage collector is that it runs only when needed, on a per-process basis, without affecting other processes waiting in the run queue. As a result, garbage collection in Erlang does not stop the world. It prevents latency spikes in the processing, because the VM is never stopped as a whole: only specific processes are stopped, and never all of them at the same time. In practice, it is just part of what a process does and is treated as just another reduction. The garbage collector suspends the process for a very short interval, in the order of microseconds. As a result, there will be many small bursts, triggered only when the process needs more memory. A single process usually does not allocate large amounts of memory and is often short-lived, further reducing the impact by immediately freeing all of its memory on termination. A feature of the JVM is the ability to swap garbage collectors, so by using a commercial collector it is also possible to achieve a continuous, non-stopping collector on the JVM.

    The features of the garbage collector are discussed in an excellent post by Lukas Larsson. There are many intricate details, but it is optimised to handle immutable data in an efficient way, dividing the data between the stack and the heap for each process. The best approach is to do most of the work in short-lived processes.

    A question that often comes up on this topic is how much memory the BEAM uses. If we dig a little under the hood, the virtual machine allocates big chunks of memory and uses custom allocators to store the data efficiently and minimise the overhead of system calls. This has two visible effects:

    1) The memory used decreases gradually after space is no longer needed.

    2) Reallocating huge amounts of data could mean duplicating the current working memory.

    The first effect can, if really necessary, be mitigated by tweaking the allocator strategies. The second is easy to monitor and plan for if you have visibility of the different types of memory usage. (One such monitoring tool that provides system metrics out of the box is WombatOAM.)

    Hot Code Loading

    Hot code loading is probably the most cited unique feature of the BEAM. It means that the application logic can be updated by changing the executable code in the system while retaining the internal process state. This is achieved by replacing the loaded BEAM files and instructing the virtual machine to replace the references to the code in the running processes.

    It was a fundamental feature for guaranteeing zero downtime in telecom infrastructure, where redundant hardware was used to handle spikes. Nowadays, in the era of containerisation, other techniques are also used for production updates. Those who have never used or needed it dismiss it as a less important feature, but it should not be underestimated in the development workflow. Developers can iterate faster by replacing part of their code without having to restart the system to test it. Even if the application is not designed to be upgradable in production, this can shorten the time needed to recompile and redeploy the system.

    When Not to Use the BEAM

    It is very much a case of choosing the right tool for the job.

    Do you need a system that is extremely fast, but are not worried about concurrency? Handling a few events in parallel, fast? Need to crunch numbers for graphics, AI or analytics? Go down the C++, Python or Java route. Telecom infrastructure does not need fast operations on floats, so speed was never a priority. With dynamic typing, which has to do all of the type checks at runtime, compiler optimisations are not as trivial. So number crunching is best left to the JVM, Go or other languages that compile natively. It is no surprise that floating-point operations in Erjang, the version of Erlang running on the JVM, were 5000% faster than on the BEAM. On the other hand, where we have seen the BEAM really shine is in using its concurrency to orchestrate number crunching, outsourcing the analytics to C, Julia, Python or Rust. You do the map outside the BEAM and the reduce within it.

    The mantra has always been fast enough. It takes a few hundred milliseconds for humans to perceive a stimulus (an event) and process it in their brains, which means that micro- or nano-second response times are unnecessary for many applications. Nor would you use the BEAM for microcontrollers, as it is too resource-hungry. But for embedded systems with a little more processing power, where multi-core is becoming the norm, you need concurrency, and the BEAM shines there. Back in the 90s, we were implementing telephony switches that handled tens of thousands of subscribers, running on embedded boards with 16 MB of memory. How much memory does a Raspberry Pi have these days?

    And finally, hard real time. You probably would not want the BEAM to manage your airbag control system. You need hard guarantees, a real-time operating system, and a language with no garbage collection or exceptions. An implementation of an Erlang virtual machine running on bare metal, such as GRiSP, will give you similar guarantees.

    Conclusión

    Utilice la herramienta adecuada para el trabajo.

    Si está escribiendo un sistema soft-real time que tiene que escalar fuera de la caja y nunca fallar, y hacerlo sin la molestia de tener que reinventar la rueda, definitivamente la BEAM es la tecnología que está buscando. Para muchos, funciona como una caja negra. No saber cómo funciona sería como conducir un Ferrari y no ser capaz de lograr un rendimiento óptimo o no entender de qué parte del motor proviene ese extraño sonido. Es por eso que es importante aprender más sobre la BEAM, comprender su funcionamiento interno y estar listo para ajustarlo. Para aquellos que han usado Erlang y Elixir con ira, hemos lanzado un curso dirigido por un instructor de un día que desmitificará y explicará mucho de lo que vio mientras lo prepara para manejar la concurrencia masiva a escala. El curso está disponible a través de nuestra nueva capacitación remota dirigida por un instructor; obtenga más información aquí . También recomendamos el libro The BEAM de Erik Stenman y BEAM Wisdoms , una colección de artículos de Dmytro Lytovchenko.

    The post Optimising for Concurrency: Comparing and Contrasting the BEAM and JVM Virtual Machines appeared first on Erlang Solutions .

    • wifi_tethering open_in_new

      This post is public

      www.erlang-solutions.com /blog/optimizacion-para-lograr-concurrencia-comparacion-y-contraste-de-las-maquinas-virtuales-beam-y-jvm-2/

    • chevron_right

      Paul Schaub: Creating an OpenPGP Web-of-Trust Implementation – Knitting a Net

      news.movim.eu / PlanetJabber · Thursday, 6 July, 2023 - 21:55 · 4 minutes

    There are two obvious operations your OpenPGP implementation needs to be capable of performing if you want to build a Web-of-Trust. First you need to be able to sign other users public keys (certificates), and second, you need to be able to verify those certifications.

    The first is certainly the easier of the two tasks. In order to sign another user's certificate, you simply take your own secret key, decide which binding (a tuple of a certificate and a user-id) you want to create a certification for, and then generate a signature over the user-id and the primary key of the certificate according to the standard.

    Now your signature can be distributed, e.g. by mailing it to the user, or by publishing it to a key server. This task is simple, because all the ingredients are well known. You know which key to use to create the signature, you know which user-id you want to certify and you know which key to bind the user-id to. So signing a certificate is a more or less straightforward application of the cryptography defined in the specification.

    But the task of verifying whether there is a valid signature by one certificate over another is a far more complex one. Here, the specification is deliberately vague. Some time ago I wrote an article describing why signature verification in OpenPGP is hard , and I still stand by my points.

    Authentication of a certificate is the task of figuring out how confident you can be that a key that claims to belong to “Alice <alice@example.org>” really was issued by Alice and not by an imposter; in other words, you need to prove the authenticity of the binding. To accomplish this task, the Web-of-Trust is scanned for paths that lead from a root of trust (think of e.g. a CA or your own certificate) to the binding in question.

    Building a Web-of-Trust implementation can be divided into a number of steps which, luckily for us, are independent of one another:

    1. Ingest the set of certificates
    2. Verify certifications made on those certificates
    3. Build a flow network from the certificates and certifications
    4. Perform queries on the network to find paths from or to a certain binding
    5. Interpret the resulting path(s) to infer authenticity of said binding

    In the first step we simply want to create an index of all available certificates, such that in later steps we are able to have random access on any certificate via its fingerprint(s) or key-ID(s).

    The second step is to go through each certificate one by one and attempt to verify the third-party certifications made over its primary key or user-ids. Here, the index built in the previous step comes in handy to acquire the issuer certificate needed to perform the signature verification. In this step, we index the verified certifications and keep them available for the next step. Once we have successfully performed all signature verifications, the OpenPGP portion of the task is done.

    In step three, we form a flow network from the results of the previous steps. Certificates themselves form the nodes of the network, while each signature represents an edge between the issuer- and target certificate. There can be more than one edge between two certificates.
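    The first three steps can be sketched with plain data structures. In the sketch below, the fingerprints, the certification tuples and the 0–120 trust amounts are invented for illustration; a real implementation would parse the certificates and cryptographically verify each certification with an OpenPGP library.

```python
from collections import defaultdict

# Step 1: an index of certificates, keyed by fingerprint.
# (Fingerprints and user-ids here are made up for illustration.)
certificates = {
    "FPR-ALICE": {"user_id": "Alice <alice@example.org>"},
    "FPR-BOB":   {"user_id": "Bob <bob@example.org>"},
}

# Step 2's output: verified certifications as
# (issuer fingerprint, target fingerprint, trust amount) tuples.
certifications = [
    ("FPR-ALICE", "FPR-BOB", 120),
]

def build_network(certs, sigs):
    """Step 3: nodes are certificates, edges are verified certifications."""
    edges = defaultdict(list)
    for issuer, target, amount in sigs:
        # The cryptographic verification belongs to step 2; here we only
        # check that both endpoints are known certificates.
        if issuer in certs and target in certs:
            edges[issuer].append((target, amount))
    return edges

network = build_network(certificates, certifications)
print(network["FPR-ALICE"])  # [('FPR-BOB', 120)]
```

    Keeping verification (step 2) separate from graph construction (step 3) means the expensive cryptographic work is done exactly once per certification, no matter how many queries are later run against the network.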

    Step 4 and 5 are the technically most complicated steps, so I will not go into too much detail in this post. For now, I will instead first try to explain the abstract picture of the Web-of-Trust I have in my head:

    I imagine the Web-of-Trust as an old, half-rotten fishing net (bear with me): there are knobbly knots, which may or may not be connected to neighboring knots through yarn of different thickness. Some knots are well-connected with others, as ye olde fisherman did some repair work on the net, while other knots or even whole sections of the net have no intact connections left to the rest. Many connections rotted away as the yarn passed its expiration date.

    When we now attempt to pick up the net by lifting one of the knots into the air, all those knots that are connected either directly or indirectly will also lift up, while disconnected knots or sections will fall to the ground. If we put some weight on the net, some of the brittle connections may even break, depending on how much weight we apply. Others might hold because a knot has multiple connections that share the load.

    In this analogy, each knot is a node (OpenPGP certificate), with yarn connections being the certifications. Different thickness of the yarn means different trust-amounts. The knot(s) we chose to pick the net up from are the trust roots. Each knot that lifts up from the ground we can authenticate. The weight we apply to the net can be seen as the amount of trust we require. If we aren’t able to accumulate enough trust-amount for a path, the knot rips off the fishing net, meaning it cannot be authenticated to a sufficient degree.

    This analogy is of course not perfect at all. First off, edges in the Web-of-Trust are directed, meaning you can follow the edge in one direction, but not necessarily in the other. Furthermore, the Web-of-Trust has some more advanced attributes that can be put into a certification to give it even more meaning. For example, a trust signature not only has a numeric trust-amount, but also a depth, which limits the number of “hops” you can make after passing over the edge. Certifications can also include regular expressions, limiting to which certificates you can hop next.
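    To make the depth attribute concrete, here is a toy depth-limited path search. The edge format (target, trust-amount, depth) and the starting hop budget are simplifying assumptions made for this sketch; a real implementation must also honour regular-expression constraints and aggregate trust amounts across multiple paths.

```python
def find_paths(edges, source, goal, hops_left, path=None):
    """Return all cycle-free paths from source to goal that respect depth."""
    path = path or [source]
    if source == goal:
        return [path]
    if hops_left <= 0:
        return []
    results = []
    for target, amount, depth in edges.get(source, []):
        if target in path:  # avoid cycles
            continue
        # Remaining hops after crossing this edge are capped both by the
        # budget so far and by the edge's own depth value.
        results += find_paths(edges, target, goal,
                              min(hops_left - 1, depth), path + [target])
    return results

edges = {
    "ROOT":  [("INTRO", 120, 1)],  # trust signature with depth 1
    "INTRO": [("ALICE", 120, 0)],  # plain certification (depth 0)
    "ALICE": [("BOB",   120, 0)],
}
print(find_paths(edges, "ROOT", "ALICE", 255))  # [['ROOT', 'INTRO', 'ALICE']]
print(find_paths(edges, "ROOT", "BOB", 255))    # [] -- ALICE cannot delegate
```

    Note how the goal check happens before the hop check: a plain certification (depth 0) still authenticates its target, it just cannot delegate any further.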

    Still, to get an initial, rough understanding of the WoT, I find the fishing net analogy quite suitable. In a later post I might go into more detail on steps 4 and 5.

    • wifi_tethering open_in_new

      This post is public

      blog.jabberhead.tk /2023/07/06/creating-an-openpgp-web-of-trust-implementation-knitting-a-net/

    • chevron_right

      Erlang Solutions: How to Manage Your RabbitMQ Logs: Tips and Best Practices

      news.movim.eu / PlanetJabber · Tuesday, 4 July, 2023 - 10:37 · 12 minutes

    RabbitMQ is an open-source message broker software that allows you to build distributed systems and implement message-based architectures. It’s a reliable and scalable messaging system that enables efficient communication between different parts of your application. However, managing RabbitMQ logs can be a challenging task, especially when it’s deployed on a large cluster. In this article, we’ll take you through some basic know-how when it comes to logging on RabbitMQ, including how to configure RabbitMQ logs to store them at a particular location, how to change the format of the logs, how to rotate RabbitMQ log files, and how to monitor and troubleshoot RabbitMQ logs.


    Understanding RabbitMQ Logs

    RabbitMQ logs are a form of recorded information generated by the RabbitMQ message broker system. These log messages provide a lot of information on events taking place in the RabbitMQ cluster. For example, a log message is generated whenever a new message is received in a queue, when that message is replicated in another queue, if a node goes down or comes up, or if any error is encountered.

    The log entries typically include timestamps, severity levels, and descriptive messages about the events that occurred. These logs can help administrators and developers diagnose problems, identify bottlenecks, analyse system performance, track message flows, and detect any anomalies or misconfigurations within the RabbitMQ infrastructure.

    Before managing your RabbitMQ logs, you should first understand the different types of logs that RabbitMQ produces. There are several types of RabbitMQ logs:

    • Crash Logs: produced when RabbitMQ crashes or encounters a fatal error.
    • Error Logs: contain information about errors that occur during RabbitMQ’s runtime.
    • Warning Logs: contain warnings about issues that may affect RabbitMQ’s performance.
    • Info Logs: contain general information about RabbitMQ’s runtime.
    • Debug Logs: contain detailed information about the RabbitMQ server’s internal processes and operations, such as message routing decisions, channel operations, and internal state changes.

    Not having these log messages can make debugging issues very difficult, as they help us understand what RabbitMQ is doing at any given point in time. So enabling RabbitMQ logging and storing these log messages for a certain period is crucial for troubleshooting and fixing problems fast.

    RabbitMQ logging location

    Use the RabbitMQ management UI or rabbitmq-diagnostics -q log_location to find where a node stores its log file(s).

    By default, the logging directory is: /var/log/rabbitmq

    The default value for the log folder location is saved in an environment variable named RABBITMQ_LOGS . Changing this environment variable will change the RabbitMQ log folder.

    You can find more information on environment variables here .
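    For instance, on Debian/RPM installs the variable can be set in rabbitmq-env.conf, which is read when the node starts (the directory shown below is a hypothetical example):

```ini
# /etc/rabbitmq/rabbitmq-env.conf -- hypothetical log directory
RABBITMQ_LOGS=/data/rabbitmq/logs
```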

    Or we can use the configuration file to change the logging folder. By default, this parameter is commented because the environment variable is set automatically while installing or deploying RabbitMQ. The default value of the log file location configuration in the rabbitmq.conf file is as follows:

    log.dir = /var/log/rabbitmq

    Configuring RabbitMQ Logging

    RabbitMQ offers various log levels, each with a different severity number. This number is important, as it determines, along with other factors, whether a message is logged.


    By default, RabbitMQ logs at the info level. However, you can adjust the log levels based on your needs. The none level means no logging.

    To make the default category log only errors or higher severity messages, use

    log.default.level = error

    Each output can use its own log level. If a message has lower severity than the output level, the message will not be logged.

    For example, if no outputs are configured to log debug messages, even if the category level is set to debug , the debug messages will not be logged.

    On the other hand, if an output is configured to log debug messages, it will get them from all categories, unless a category is configured with a less verbose level.
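    For example, the interaction between category and output levels can be expressed in rabbitmq.conf like this (the levels chosen are purely illustrative):

```ini
# The queue category may emit debug messages...
log.queue.level = debug
# ...but the file output only accepts info and above, so those
# debug messages never reach the log file.
log.file.level = info
```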

    Changing RabbitMQ log level

    RabbitMQ provides two methods of changing the log level: via the configuration files or running a command using CLI tools.

    Changing the log level in the configuration file requires a node restart for the change to take effect. But this change is permanent since it’s made directly in the configuration file. The configuration parameter for changing the log level is as follows:

    log.default.level = info

    You can change this value to any supported log level and restart the node, and the change will be reflected.

    The second method, using CLI tools, is transient. This change will be effective only until a node restarts. After a restart, the log level will be read from the configuration file again. This method comes in handy when you want to temporarily enable the RabbitMQ debug log level to perform some quick debugging:

    rabbitmqctl -n rabbit@target-host set_log_level debug

    The last option passed to this command is the desired log level. You can pass any of the supported log levels. Changing the log level this way does not require a node restart, and the changes will take effect immediately.

    Log Messages Category

    Along with log levels, RabbitMQ offers log categories to better fine-tune logging and debugging. Log categories are various categories of messages, which can be visualised as logs coming from various modules in RabbitMQ.

    This is the list of log categories available in RabbitMQ:

    • Connection: Connection lifecycle event logs for certain messaging protocols
    • Channel: Errors and warning logs for certain messaging protocol channels
    • Queue: Mostly debug logs from message queues
    • Mirroring: Logs related to queue mirroring
    • Federation: Federation plugin logs
    • Upgrade: Verbose upgrade logs
    • Default: Generic log category; no ability to override file location for this category

    You can use these to override the default log levels configured at the global level. For example, suppose you’re trying to debug an issue and have configured the global log level as debug, but you want to suppress debug messages, or even all log messages, for the upgrade category.

    For this, configure the upgrade log category to not send any log messages:

    log.<category>.level = none

    Similarly, you can configure all messages from a certain category to log to a separate file, using the following configuration:

    log.<category>.file

    This way, RabbitMQ makes it easy to segregate log messages based on both the log category and the log level.
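    Putting the two directives together, a hypothetical rabbitmq.conf fragment might silence upgrade logs entirely and route connection events to their own file (the file name is illustrative):

```ini
log.upgrade.level = none
log.connection.level = info
log.connection.file = rabbit_connections.log
```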

    Rotate Logs

    RabbitMQ logs can quickly fill up your disk space, so it’s essential to rotate the logs regularly. By rotating the logs, you can ensure that you have enough disk space for your RabbitMQ logs.

    Based on Date

    You can use a configuration parameter to automatically rotate log files based on the date or a given hour of the day:

    log.file.rotation.date = $D0

    The $D0 option will rotate a log file every 24 hours at midnight. But if you want to instead rotate the log file every day at 11 p.m., use the following configuration:

    log.file.rotation.date = $D23

    Based on File Size

    RabbitMQ also allows you to rotate log files based on their size. So, whenever the log file hits the configured file size, a new file is created:

    log.file.rotation.size = 10485760

    The configuration above rotates log files every time the file size hits 10 MB.

    Number of Archived Log Files to Retain

    When log file rotation is enabled, it’s also important to make sure you configure how many log files to retain. Otherwise, all older log files will be retained on disk, consuming unnecessary storage. RabbitMQ has a built-in configuration for this as well, and you can control the number of files to retain using the following configuration:

    log.file.rotation.count = 5

    In this example, five log files will be retained on the storage disk.
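    The three rotation settings can be combined; an illustrative fragment:

```ini
# Rotate at midnight or once the file reaches 10 MB, whichever
# happens first, keeping five archived files.
log.file.rotation.date = $D0
log.file.rotation.size = 10485760
log.file.rotation.count = 5
```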

    Rotation Using Logrotate

    On Linux, BSD and other UNIX-like systems, logrotate is an alternative way of rotating and compressing log files. RabbitMQ Debian and RPM packages will set up logrotate to run weekly on files located in the default /var/log/rabbitmq directory. The rotation configuration can be found in /etc/logrotate.d/rabbitmq-server .

    How to Format RabbitMQ Logs

    Properly formatting logs is crucial for effective log management in RabbitMQ. It helps improve readability, enables efficient searching, and allows for easier analysis. In most production environments, you would want to use a third-party tool to analyse all of your logs. This becomes easy when the logs are formatted in a way that’s easy to understand by an outside tool. In most cases, this is the JSON format.

    You can use a parameter in the configuration file to change the format of the RabbitMQ log messages to JSON. This takes effect irrespective of the log level you have configured or the category of logs you’re logging.

    To change the log format to JSON, you just have to add or uncomment the following configuration in the configuration file:

    log.file.formatter = json

    This will convert all log messages written to a file to JSON format. Or, change the format of the logs written to the console or syslog to JSON using the following configurations, respectively:

    log.console.formatter = json

    log.syslog.formatter = json

    If you want to revert these changes, that is, if you don’t want the log messages to be in JSON format, then simply comment out these configuration parameters.
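    Once logs are emitted as JSON, filtering them programmatically becomes trivial. Here is a minimal Python sketch; the field names ("level", "msg") are assumptions for illustration, so inspect the keys your RabbitMQ version actually emits:

```python
import json

# Example lines in the general shape of JSON-formatted logs.
# Field names here are assumptions, not a documented schema.
raw_lines = [
    '{"time": "2023-07-04 10:37:00", "level": "info", "msg": "accepting AMQP connection"}',
    '{"time": "2023-07-04 10:37:05", "level": "error", "msg": "channel error"}',
]

def errors_only(lines):
    """Yield parsed records whose severity is error or worse."""
    for line in lines:
        record = json.loads(line)
        if record.get("level") in ("error", "critical"):
            yield record

print([r["msg"] for r in errors_only(raw_lines)])  # ['channel error']
```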

    RabbitMQ Logging Configuration

    RabbitMQ offers three target options for logging messages: a file, the console, or syslog. By default, RabbitMQ directs all log messages to files. You can use the RabbitMQ configuration file to control this.

    Logging to File

    The following configuration will enable or disable RabbitMQ logging to a file:

    log.file = true

    There are numerous other configuration parameters for tuning RabbitMQ logs to files. For example, you can control the log level to files, create separate files for each log category, use the built-in log rotation feature, or even integrate with a third-party log rotation package, as discussed above. When you want to stop logging messages this way, all you have to do is change the configuration value to false.

    Logging to Console

    Logging to the console is similar to logging to a file. You have a configuration in the RabbitMQ configuration file to either enable or disable logging to the console. The configuration is as follows:

    log.console = true

    As with logging to a file, you can change the value to false to stop the logging. Also, there are numerous options for formatting log messages sent to the console.

    Logging to Syslog

    Logging to syslog takes a bit more configuration than logging to a file or the console. Once you enable logging to syslog, you also have to provide other related configurations, like the transport mechanism (TCP or UDP), transport protocol (RFC 3164 or RFC 5424), syslog service IP address or hostname, port, TLS configuration if any, and more.

    To enable logging to syslog, use the following configuration:

    log.syslog = true

    By default, RabbitMQ uses the UDP transport mechanism on port 514 and the RFC 3164 protocol. To change the transport and the protocol, use the following two configurations:

    log.syslog.transport = tcp

    log.syslog.protocol = rfc5424

    If you want to use TLS, do so using the following configuration:

    log.syslog.transport = tls

    log.syslog.protocol = rfc5424

    log.syslog.ssl_options.cacertfile = /path/to/ca_certificate.pem

    log.syslog.ssl_options.certfile = /path/to/client_certificate.pem

    log.syslog.ssl_options.keyfile = /path/to/client_key.pem

    Next, you need to configure the syslog IP address or hostname and the port using the following configuration:

    log.syslog.ip = 10.10.10.10

    or

    log.syslog.host = my.syslog-server.local

    and

    log.syslog.port = 1514

    Along with these, you can also change the format of the log messages to JSON as we discussed earlier.

    Sample Logging Configuration

    ## See https://rabbitmq.com/logging.html for details.
    ## Log directory, taken from the RABBITMQ_LOG_BASE env variable by default.
    # log.dir = /var/log/rabbitmq
    
    
    ## Logging to file. Can be false or a filename.
    ## Default:
    # log.file = rabbit.log
    
    ## To disable logging to a file
    # log.file = false
    
    ## Log level for file logging
    ## See https://www.rabbitmq.com/logging.html#log-levels for details
    ##
    # log.file.level = info
    # log.file.formatter = json (Configure JSON format)
    
    ## File rotation config. No rotation by default.
    ## See https://www.rabbitmq.com/logging.html#log-rotation for details
    ## DO NOT SET rotation date to ''. Leave the value unset if "" is the desired value
    # log.file.rotation.date = $D0
    # log.file.rotation.size = 0
    
    ## Logging to console (can be true or false)
    # log.console = false
    
    ## Log level for console logging
    # log.console.level = info
    
    ## Logging to the amq.rabbitmq.log exchange (can be true or false)
    ## See https://www.rabbitmq.com/logging.html#log-exchange for details
    # log.exchange = false
    
    ## Log level to use when logging to the amq.rabbitmq.log exchange
    # log.exchange.level = info
    

    Monitoring RabbitMQ Logs

    Monitoring your RabbitMQ logs in real time can help you identify issues before they become critical. There are several monitoring tools available, such as AppDynamics , DataDog , Prometheus , and Sematext .


    Monitoring RabbitMQ with Sematext

    Troubleshooting RabbitMQ Logs

    Managing RabbitMQ logs is an essential part of maintaining a reliable and efficient RabbitMQ setup. However, even with the best logging practices in place, issues can still arise.

    When working with RabbitMQ logs, I usually follow these steps:

    • Check the Log Files Regularly: Checking the logs on a regular basis can help you identify any issues as they arise, allowing you to address them promptly. Regular checks also help you track trends and identify patterns of behavior that may indicate potential issues.
    • Look for Error Messages: When reviewing your RabbitMQ logs, pay close attention to any error messages that appear. Error messages can provide valuable clues about potential issues with your RabbitMQ setup.
    • Check RabbitMQ Status: Using the RabbitMQ management console or command-line tools, check the status of your RabbitMQ instance by running rabbitmqctl status .
      Look for any warnings or errors that may indicate potential issues. Address any warnings or errors promptly to prevent them from becoming larger issues down the road.
    • Check Network Connections: Check network connections and firewall settings to ensure that they are properly configured. Network issues can cause a wide range of issues with your RabbitMQ setup, so it’s essential to make sure that your network is configured correctly.
    • Monitor Resource Usage: Monitor resource usage on your RabbitMQ server, including CPU, memory, and disk space. If any of these resources are running low, it can cause issues with your RabbitMQ instance.
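    Checks like these are easy to automate. Below is a small sketch that tallies severities in a plain-text RabbitMQ log; the sample lines and the bracketed severity markers are assumptions modelled on the default log format, so adjust the pattern to what your nodes actually write.

```python
from collections import Counter
import os
import re
import tempfile

# Sample content in the shape of RabbitMQ's default plain-text logs;
# in practice, point summarise() at /var/log/rabbitmq/rabbit@<host>.log.
sample = """\
2023-07-04 10:37:00.000 [info] <0.100.0> node started
2023-07-04 10:37:05.000 [warning] <0.101.0> memory high watermark
2023-07-04 10:37:09.000 [error] <0.102.0> connection refused
"""

severity = re.compile(r"\[(debug|info|warning|error|critical)\]")

def summarise(log_path):
    """Count log lines per severity level."""
    counts = Counter()
    with open(log_path) as fh:
        for line in fh:
            match = severity.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

# Write the sample to a temporary file and summarise it.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write(sample)

counts = summarise(f.name)
print(dict(counts))  # {'info': 1, 'warning': 1, 'error': 1}
os.remove(f.name)
```

    A script like this, run from cron or a monitoring agent, turns the "check the log files regularly" step into an alert whenever the error count rises.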

    Conclusion

    Remember to set appropriate log levels, use a monitoring tool to keep track of your logs, and be proactive in identifying and resolving issues. With these practices in place, you can make sure that your RabbitMQ system is reliable and performing at its best.

    If you have further questions, our team of expert consultants are on hand and ready to chat. Just head to our contact page.

    The post How to Manage Your RabbitMQ Logs: Tips and Best Practices appeared first on Erlang Solutions .

    • wifi_tethering open_in_new

      This post is public

      www.erlang-solutions.com /blog/how-to-manage-your-rabbitmq-logs-tips-and-best-practices/

    • chevron_right

      Erlang Solutions: The business value behind green coding

      news.movim.eu / PlanetJabber · Thursday, 29 June, 2023 - 14:21 · 5 minutes

    Most large businesses – and many smaller ones – now have a sustainability strategy. Measuring Environmental, Social and Governance (ESG) impact has been transformed from a fringe activity into a fundamental differentiator.

    That’s partly because organisations want to do the right thing – climate change affects everyone, after all – and also because sustainability makes sound business sense.

    B2C businesses are dealing with increasingly environmentally aware consumers. B2B businesses are selling to companies with their own ESG and sustainability targets. Large organisations of all kinds need to green their supply chains as well as themselves.

    On top of it all, regulation around ESG is only going one way. Progressive companies are fighting to get ahead of the curve. As far as non-compliance is concerned, prevention is better than cure.

    In which case, how might organisations achieve greater sustainability? There are many ways, but in a digital world green coding is an increasingly essential ingredient in any corporate sustainability strategy. It can also provide a range of other business benefits.

    IT energy consumption: the elephant in the room

    According to the Association for Computing Machinery, energy consumption at data centres has doubled in the last decade . The ICT sector is responsible for between 1.8% and 3.9% of greenhouse gas emissions globally.

    That should come as no surprise. The expansion of our digital horizons in the last ten years or so has been remarkable. Every business is, to some degree or other, a digital business.

    Digital-first organisations create vast amounts of data, and then set powerful new technologies to work, analysing it for insight. AI, analytics, the IoT, VR…they’re all changing the world, and adding significantly to its energy consumption.

    The need for green coding in a data-hungry world

    The direction of travel here is only one way. The explosion in data is ramping up energy use in data centres, raising costs and making sustainability targets more difficult to achieve.

    Source: Statista

    At the same time, the vast expansion of computing power over the last couple of decades has created a sense of freedom among software engineers. As technology constraints have diminished, code has become longer and more complex. Coders have worried less about the process and more about the outcome.

    To take one example, open-source code libraries have allowed programmers to reuse pre-produced pieces of code in their projects. This can certainly be useful, accelerating time to market for under-pressure projects. And as we all know, recycling can also be sustainable, when it’s the right code for the task.

    But often it isn’t, or at least not quite. An ‘off-the-shelf’ approach to software engineering – when code is reused simply to cut development time – can result in more lines of code than is necessary. That in turn requires more processing power and demands more energy use.

    In addition, the almost limitless computing capabilities that modern developers have at their fingertips have led to major leaps in file size. Why reduce image or video quality, even fractionally, if you really don’t have to?

    This is all in stark contrast to the coding practices of the early years of the 21st century when technological limitations demanded greater coding discipline.

    Green coding is many things but one of them is a return to coding with stricter limits and a focus on taking the shortest possible path to a desired outcome.

    Source: OurWorldinData.org

    Lean coding is green coding

    In other words, to reduce energy use in software, applications and websites, programmers are adopting the principles of lean coding.

    In short, this means writing code that requires less processing power without compromising quality. This is a win-win, because when it’s done well, lean coding can have business benefits beyond contributing to sustainability targets.

    Efficient, purpose-built code can accelerate applications and improve user experiences. And because it uses less energy, it can reduce energy bills.

    There are some easy wins in this. Many – perhaps most – file sizes can be reduced. Coders need to recognise where high-quality media is necessary, and where lower-quality alternatives will do just as well. There is a balance to be struck.

    Similarly, writing tight, tailored code will increase efficiency and lower energy consumption. And because efficient code takes less time to process, it’s also likely to produce better results.

    Writing good code leads to better user experiences because applications are faster and more efficient. And focusing on user-friendliness is also a principle of green coding.

    Applications that are more intuitive mean users spend less time doing what they need to do. The more user-friendly software is, the more energy efficient it tends to be, too.

    The importance of language in green coding

    In all this, your choice of coding tools takes on new significance. Some languages – Elixir is one – are designed with efficiency in mind. That tends to mean the software they produce uses less energy as well.

    Elixir, which is based on Erlang, is a lightweight language, consuming relatively little in terms of memory or processor power. It’s often the most sustainable option in places where many processes happen at the same time, such as a web server.

    Because it utilises the Erlang Virtual Machine (VM), Elixir is ideal for running low-resource, high-availability processes. It’s also simple and easy to understand, requiring less code overall than many counterparts. That makes the end application more sustainable, and it also reduces the number of computing hours needed in its development.

    Elixir is written in clear, easy-to-understand syntax, so it’s easy to maintain and update as circumstances demand. In addition, comprehensive in-built testing tools make it easier to write code that works first time – another sustainability win.

    Sustainability and business value

    In other words, in the right circumstances, Elixir is an excellent tool for writing lean, well-tested and readable code, which is as energy efficient as code gets.

    And as we’ve seen, the business value of green code goes beyond ESG compliance. It can lower bills. It also shows customers, prospects and potential employees how seriously your organisation takes environmental responsibility.

    Lean programming tells the world that your green commitment is far from just skin deep. It reaches the core of your infrastructure.

    Sustainability targets will only become more stringent as net zero deadlines inch closer. The code that runs our digital world will play a significant role in creating a leaner, greener future. Businesses that adopt green coding now can gain a competitive advantage.
    For more information on Elixir and how it can help green your development processes, please see our website or drop us a line .

    The post The business value behind green coding appeared first on Erlang Solutions .

    • chevron_right

      Ignite Realtime Blog: Openfire inVerse plugin v10.1.4-1 release!

      news.movim.eu / PlanetJabber · Tuesday, 27 June, 2023 - 16:59

    The Ignite Realtime community is happy to announce the immediate release of version “10.1.4 release 1” of the inVerse plugin for Openfire!

    The inVerse plugin adds a Converse-based web client to Openfire ( Converse is a third-party implementation). With this plugin, you’ll be able to set up a fully functional Converse-based chat client with just a few mouse-clicks!

    This update includes an update of the converse.js library (to version 10.1.4), which brings various improvements .

    Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the inVerse plugin archive page .

    If you have any questions, please stop by our community forum or our live groupchat .

    For other release announcements and news follow us on Twitter and Mastodon .

    1 post - 1 participant

    Read full topic

    • chevron_right

      Isode: Harrier 3.3 – New Capabilities

      news.movim.eu / PlanetJabber · Tuesday, 27 June, 2023 - 11:45 · 1 minute

    Harrier is our Military Messaging client. It provides a modern, secure web UI that supports SMTP, STANAG 4406 and ACP 127. Harrier allows authorised users to access role-based mailboxes and respond as a role within an organisation rather than as an individual.

    Harrier Inbox view (behind) showing Military Messaging security label and priority parameters; and Message view (in front).

    The following changes have been made with the 3.3 release:

    Integration with IRIS WebForms

    Harrier’s generic support for MTF (Message Text Format) has been extended through close integration with Systematic IRIS WebForms. This provides convenient creation and display of MTFs using the IRIS WebForms UI within Harrier.

    IRIS Forms message attachment in Harrier Military Messaging Client.

    Further examples and an in-depth description can be found in the Isode white paper C2 Systems using MTF and Messaging.

    Browser Support Enhancements

    New session handling allows a user to open multiple sessions per browser and multiple views. This enables a user to easily access multiple mailboxes at the same time.

    PKCS#11 HSM Support

    PKCS#11 HSM (Hardware Security Module) support has been added. This has been tested with HSMs from Nitrokey, Yubico and Gemalto, and with the SoftHSM software. It provides two capabilities, which can be managed using Cobalt 1.4:

    1. The private key for the server, protecting HTTPS access.
    2. Private keys for Users, Roles and Organizations, supporting message signing and encryption.

    Other Enhancements

    • Audit logging when a user prints a message.
    • Option to enforce security label access control checks. By default, these are advisory, with enforcement generally provided by M-Switch.
    • The default security label on forward and reply is set to the label of the message being replied to or forwarded.
    • Option to configure backup servers for IMAP, SMTP and LDAP, providing resilience in the event of the primary server failing.
    • Option to use the local timezone instead of Zulu for DTG, Filing Time and Scan Listing.
    • When using the Zulu timezone, local time is shown in a tooltip.
    www.isode.com/company/wordpress/harrier-3-3-new-capabilities/


      Erlang Solutions: IoT Complexity Made Simple with the Versatility of Erlang and Elixir

      news.movim.eu / PlanetJabber · Tuesday, 27 June, 2023 - 11:35 · 16 minutes

    Part A: Current Context and Challenges

    The world is on the brink of a transformative industrial revolution known as Industry 4.0. This fourth industrial revolution is revolutionising our lives, work, and interactions on an unprecedented scale. The convergence of technology, Artificial Intelligence (AI), and the Internet of Things (IoT) has enabled highly sophisticated and interconnected systems. These systems and devices can communicate with each other, automate complex tasks, and even reshape our future.

    Automation and systematisation are particularly influential within this revolution, profoundly impacting our society’s future. From factories to smart cities, and autonomous vehicles to health monitoring devices, automation has taken a prominent role in business, industry, and education.

    For instance, AI automates tasks in factories, such as quality control and predictive maintenance. IoT powers smart cities that leverage data and technology to enhance residents’ lives. Big data improves healthcare by predicting diseases and developing new treatments.

    Despite the potential for new job creation and improved lives, Industry 4.0 also presents challenges. Automation will replace certain jobs, necessitating worker reskilling and retraining. Additionally, this paradigm shift demands new infrastructure like high-speed internet and data centres.

    Overall, Industry 4.0 is a transformative force with profound societal implications. It is crucial to understand its benefits and challenges in order to adequately prepare for the future.

    Most significant challenges to large-scale IoT adoption

    Interoperability and integration

    IoT devices and systems from various manufacturers often utilise different protocols, leading to connectivity and communication issues. This lack of interoperability presents challenges and fragmented solutions within the IoT landscape, restricting seamless device compatibility. Consequently, the absence of interoperability can impede the development and deployment of IoT applications, diminish the value of IoT data, and heighten the risk of security breaches.

    Erlang/Elixir provides all the tools to easily tackle new and existing protocols, thanks to the BEAM’s ability to handle binary data without the issues found in languages traditionally used for IoT, such as C, .NET and Rust. This significantly speeds up development cycles and integration efforts.

    For example, the BEAM can be used to easily decode and encode binary data from different protocols, such as Zigbee, Z-Wave, and Bluetooth Low Energy. The BEAM can also be used to easily create and manage distributed systems, which is often required for IoT applications.

    The BEAM’s ability to handle binary data and create and manage distributed systems makes it a powerful tool for IoT development. By using Erlang/Elixir, developers can significantly speed up development cycles and integration efforts, and they can create more reliable and scalable IoT applications.
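    As a concrete illustration, the BEAM’s binary pattern matching makes protocol decoding declarative. The sketch below decodes and encodes a hypothetical 6-byte sensor frame; the frame layout, field meanings and module name are invented for this example and are not a real Zigbee or Z-Wave format.

```elixir
defmodule SensorFrame do
  # Hypothetical layout: 1-byte version, 1-byte device id,
  # 16-bit signed big-endian temperature (centi-degrees Celsius),
  # 16-bit unsigned big-endian battery voltage (millivolts).
  def decode(<<version::8, device_id::8, temp::16-signed-big, battery::16-unsigned-big>>) do
    {:ok, %{version: version, device_id: device_id, temp_c: temp / 100, battery_mv: battery}}
  end

  # Any frame that does not match the layout is rejected explicitly.
  def decode(_other), do: {:error, :malformed_frame}

  # Encoding is the mirror image of the same bit syntax.
  def encode(%{version: v, device_id: id, temp_c: t, battery_mv: mv}) do
    <<v::8, id::8, round(t * 100)::16-signed-big, mv::16-unsigned-big>>
  end
end
```

    Because the frame layout lives in the function head, malformed input falls through to the catch-all clause instead of corrupting state, which is exactly the property that makes this style attractive for protocol work.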

    In addition to this, it is worth highlighting that Erlang also plays a crucial role in the message broker ecosystem, due to its unique characteristics. Erlang’s inherent traits, such as low latency, fault tolerance, and massive concurrency, make it well-suited for deploying IoT solutions at an industrial scale. Low latency ensures that messages are delivered swiftly, enabling real-time responsiveness in IoT applications. Fault tolerance ensures that the system can continue operating even in the presence of failures or errors, crucial for maintaining uninterrupted connectivity in IoT environments. Furthermore, Erlang’s ability to handle massive concurrency allows it to efficiently manage the high volume of messages exchanged in IoT systems, ensuring seamless communication across a vast network of connected devices. These desired traits collectively emphasise the vital role a robust message broker backbone, built on Erlang’s foundations, holds in powering the IoT landscape.

    Security and privacy

    As the number of interconnected devices increases, so does the risk of cyberattacks, even when the purpose of those devices is to provide security. Such devices are often vulnerable to numerous types of security threat due to poor security practices, unpatched software, or immature components being deployed in unsecured environments.


    For example, IoT devices can be used to launch denial-of-service attacks, steal sensitive data, or even control physical devices. The security risks associated with IoT devices are often exacerbated by the fact that IoT devices are often deployed in unsecured environments, such as home networks.

    Some of the most common security threats associated with IoT devices include:

    • Denial-of-service attacks: IoT devices can be used to launch denial-of-service attacks by flooding a network with traffic. This can make it difficult or impossible for legitimate users to access the network.
    • Data theft: IoT devices can be used to steal sensitive data, such as passwords, credit card numbers, or personal information. This data can then be used for identity theft, fraud, or other crimes.
    • Physical control: IoT devices can be used to control physical devices, such as thermostats, lights, or even cars. This could be used to cause damage, steal property, or even harm people.

    Erlang/Elixir provides programming facilities that render whole classes of errors and security threats obsolete, together with the possibility of running an Erlang virtual machine under different security models: a high-assurance operating system running on seL4, custom Linux-based firmware via Nerves, or constrained re-implementations of the BEAM, such as AtomVM.

    Robust security and privacy policies are essential for safeguarding sensitive data collected and transmitted by IoT devices, which may include personal and confidential information and/or metadata.

    Some of the security features of the BEAM that can benefit IoT development include:

    • Fault tolerance: The BEAM is a fault-tolerant runtime, which means that an application can continue to run even if some of its processes fail. This can help to mitigate denial-of-service attacks.
    • Hot code reloading: The BEAM can reload code without restarting a process. This allows fixes for discovered security vulnerabilities to be deployed without downtime.
    • Distributed systems: The BEAM supports distribution, which means that it can be used to create applications that run on multiple machines. This can help to protect sensitive data from unauthorised access.
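    To make the fault-tolerance point concrete, here is a minimal supervision sketch. `DeviceSession` is a hypothetical GenServer holding per-device state; if it crashes, the supervisor restarts it automatically, so the rest of the system keeps running.

```elixir
defmodule DeviceSession do
  use GenServer

  # Registered under the module name so a restarted instance is
  # reachable at the same address as the crashed one.
  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts), do: {:ok, Map.new(opts)}

  @impl true
  def handle_call(:state, _from, state), do: {:reply, state, state}
end

defmodule DeviceSupervisor do
  use Supervisor

  def start_link(_), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok) do
    children = [{DeviceSession, device_id: 42}]
    # :one_for_one restarts only the crashed child, leaving siblings alone.
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```

    This "let it crash" pattern is the standard OTP idiom: recovery logic lives in the supervision tree rather than being scattered through defensive code.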

    Erlang/Elixir innovations allow developers to build systems and applications using mathematically verified kernels, rather than relying on traditional Linux solutions. This makes Erlang/Elixir ideal for high-security and mission-critical environments, including manufacturing, military, aerospace, and medical devices. In such industries, a failure or security breach could have devastating consequences.

    Developers can leverage Erlang/Elixir to build highly secure and reliable systems, enhancing mission safety and reducing susceptibility to attacks and failures.

    Scalability

    IoT networks need to handle numerous interconnected devices and facilitate the exchange of vast data volumes. The number of interconnected devices is steadily increasing, enabling specialised tasks and coordinated outcomes among devices. With the growing device connections, the network infrastructure must accommodate higher traffic, ensuring sufficient bandwidth and processing power. These situations demand near-real-time responses and low latency to gain a competitive edge.

    Erlang/Elixir have been designed to excel in such scenarios, where OTP provides all the necessary functionality to handle such loads in a predictable and scalable manner.

    Some of the benefits of using Erlang/Elixir for IoT development include:

    • Fault tolerance: supervision and automatic restarts help to prevent network outages.
    • Scalability: Erlang/Elixir can handle a very large number of connected devices, helping to absorb the increased traffic and processing requirements of IoT networks.
    • (Soft) real-time performance: the BEAM offers soft real-time scheduling, so applications can respond to events in a timely manner and meet the low-latency requirements of IoT networks.
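    As a sketch of this concurrency model, the snippet below fans a polling job out over many simulated devices, one lightweight BEAM process per reading. `Poller` and `read_device/1` are invented names; the read is a stand-in for a real network call.

```elixir
defmodule Poller do
  # Hypothetical device read; a real system would talk to hardware or
  # a network endpoint here.
  def read_device(id), do: {id, :rand.uniform(100)}

  # Task.async_stream runs each read in its own process, with
  # max_concurrency bounding how many are in flight at once.
  def poll(device_ids, max_concurrency \\ 1_000) do
    device_ids
    |> Task.async_stream(&read_device/1, max_concurrency: max_concurrency, timeout: 5_000)
    |> Enum.map(fn {:ok, reading} -> reading end)
  end
end
```

    Because BEAM processes cost only a few kilobytes each, polling tens of thousands of devices this way is routine rather than exceptional, which is the scalability property the list above refers to.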

    Data management

    With the huge volumes of data generated by IoT devices, it is essential to have effective data management systems to store, process and analyse the data. Identifying the most valuable data sets and using them effectively to create insights that support decision-making and performable actions is crucial: the network of sensor-equipped devices, vehicles and appliances that makes up the IoT is generating an unprecedented amount of data, which is being used to create new insights and applications.

    Erlang/Elixir provides not only the facilities to handle such volumes of data with ease but also the opportunity to identify meaningful data using machine-learning solutions such as Nx, Axon and Bumblebee in production-ready environments.

    Some of the benefits of using Erlang/Elixir for data management include:

    • Scalability: Erlang/Elixir is a scalable language, which means that it can be used to create applications that can handle large volumes of data.
    • Fault tolerance: Erlang/Elixir is a fault-tolerant language, which means that it can continue to run even if some of its processes fail. This can help to prevent data loss.
    • Machine learning: Erlang/Elixir has a number of machine learning libraries, such as Nx, Axon, and Bumblebee. These libraries can be used to identify meaningful data sets and create insights that support decision-making.

    By processing data, IoT devices and backbones can be made more intelligent and capable of autonomous decision-making. This can lead to improved efficiency, productivity, and safety.
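    One way to keep such processing predictable is to build lazy pipelines, so memory use stays flat however large the input stream grows. The sketch below counts above-threshold readings per device; the `{device_id, value}` reading shape and the `Telemetry` module name are invented for the example.

```elixir
defmodule Telemetry do
  # Lazily filter a (possibly huge) stream of readings, then fold the
  # survivors into a per-device count. Only the reduce forces evaluation,
  # so intermediate collections are never materialised.
  def anomalies(readings, threshold) do
    readings
    |> Stream.filter(fn {_id, value} -> value > threshold end)
    |> Enum.reduce(%{}, fn {id, _value}, acc ->
      Map.update(acc, id, 1, &(&1 + 1))
    end)
  end
end
```

    The same pipeline shape scales from a test list to a stream fed by a message broker, since `Stream` only requires an enumerable source.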

    Energy consumption

    IoT requires energy to operate devices. With the increasing number of devices, consumption can become a significant challenge, and finding ways to optimise energy usage without sacrificing performance is crucial for large-scale IoT architectures. Work in this area has led the industry to create smaller and smaller chipsets and boards. Many IoT devices are battery-powered or otherwise constrained in terms of their energy supply, limiting their operating time and making them less practical in some scenarios. This is one of the reasons why hardware manufacturers are developing more efficient components that consume less energy.

    On the software side, specialised development frameworks and methodologies are being developed to enable more efficient and optimised code for IoT devices. Erlang/Elixir addresses such scenarios through alternative BEAM implementations such as AtomVM. AtomVM was specifically created for such constrained devices and maintains the high-level development facilities that Erlang/Elixir offers, including fault tolerance, without increasing energy consumption.


    Some of the benefits of using Erlang/Elixir and AtomVM for energy-efficient IoT development include:

    • Fault tolerance: This can help to prevent devices from crashing, which can lead to energy savings.
    • Energy efficiency: AtomVM is an energy-efficient BEAM implementation, which means that it uses less power than other BEAM implementations. This can help to extend the battery life of IoT devices.
    • High-level development: Erlang/Elixir is a high-level language, meaning that it is easy to learn and use. This can help to reduce the development time and cost of IoT applications.

    Deployment

    Deploying IoT devices at scale requires careful planning and coordination. This is a complex process, especially when deploying a large number of devices across multiple locations. It requires expertise in networking, security and device management. In addition, once devices are deployed they need to be maintained and updated on a regular basis. To address these challenges, organisations need to invest in the right tools and processes for managing their IoT deployments. This can include device management platforms that provide centralised management and monitoring of devices, security frameworks that ensure the devices are secure and protected, and data analysis tools that help to make sense of the vast amount of data generated by the devices.

    Erlang/Elixir are platforms that have been designed from the ground up to handle massive numbers of concurrent processes and to excel at handling vast amounts of data in a predictable fashion. Having your centralised solutions written in Erlang/Elixir will help you tackle the increasing complexity of managing millions of deployed devices at scale.

    By using these, developers can create more reliable, scalable, and real-time IoT applications that can meet the challenges of the IoT ecosystem. In addition to the benefits mentioned above, Erlang/Elixir also offers a number of other advantages for IoT development, including:

    • Strong community: an active community of developers working on new features and libraries, which can be a valuable resource for those new to the language.
    • Open-source: Erlang and Elixir are free to use and modify, saving developers money on licensing costs.

    Together, these strengths make Erlang/Elixir well suited to creating innovative, efficient, and resilient IoT applications.

    Use cases

    Smart Sensors

    Smart sensors are crucial components within large-scale IoT systems, serving a wide range of industries such as manufacturing, food processing, smart farming, retail, storage, healthcare, and Smart City applications. These sensors play a vital role in collecting and transmitting valuable data. To meet the diverse needs of these industries, smart sensors must be developed to support specific requirements and ensure reliable real-time operation. The design and implementation of these sensors are critical in enabling seamless data collection and facilitating efficient decision-making processes within IoT ecosystems.

    Security

    Implementing security upgrades and updates is crucial for ensuring the security of IoT devices and systems. Typically, these enhancements involve incorporating additional security patches into the operating system. It is imperative that IoT devices and systems are designed to support future upgrades and updates. Insufficient security measures can lead to severe security failures, especially in the face of intense competition within the industry. Due to the lack of standardisation in the IoT technology landscape, numerous SMEs and startups enter the IoT market without proper security measures, thereby heightening the risk of potential security vulnerabilities.

    Privacy

    Preserving personal privacy is a critical consideration in any system that involves the collection of user information. In the realm of IoT, devices gather personal data, which is then transmitted through wired or wireless networks. This information is often stored in a centralised database for future use.

    To safeguard the personal information of users, it is essential to prevent unauthorised access and hacking attempts. Sensitive details concerning a user’s habits, lifestyle, health, and more could be exploited if unauthorised access to data is possible. Therefore, it is imperative for IoT systems to adhere to privacy regulations and prioritise the protection of user privacy.

    By implementing robust security measures, such as encryption, access controls, and data anonymisation, IoT systems can ensure compliance with regulatory privacy requirements. These measures help mitigate the risks associated with unauthorised access and provide users with the confidence that their personal information is being handled securely.

    Connectivity

    The majority of IoT devices are connected to wireless networks, which offer convenience but impose specific requirements for seamless connectivity. IoT systems employ various wireless transmission technologies, such as Wi-Fi, Bluetooth, LoRaWAN, Sigfox, Zigbee, and more. Each of these possesses its own advantages and specifications, catering to different use cases.

    However, the implementation of multiple technology platforms poses challenges for developers, particularly in the context of information sharing between devices and applications. Interoperability and data exchange become crucial considerations when integrating diverse wireless technologies within an IoT ecosystem.

    Developers must navigate the complexities of integrating and harmonising data flows between devices, utilising different wireless transmission technologies. This involves designing robust communication protocols and ensuring compatibility across the network infrastructure. By addressing these challenges, developers can optimise data exchange and foster seamless interoperability, enabling efficient communication and collaboration among IoT devices and applications.

    Furthermore, careful consideration should be given to factors such as network coverage, data transmission range, power consumption, and security requirements when selecting the appropriate wireless technology for specific IoT use cases. By leveraging the advantages and specifications of each wireless transmission technology, developers can tailor their IoT systems to meet the unique demands of their applications.

    Efficiently managing the diversity of wireless transmission technologies in IoT systems is a complex but necessary task. Through effective integration and collaboration, developers can harness the benefits of multiple wireless technologies and create interconnected IoT ecosystems that drive innovation and deliver valuable experiences to users.

    Compatibility

    As with any emerging technology, IoT is rapidly expanding across various domains to support a wide array of applications. With the rapid growth of the IoT landscape, competition among companies striving to establish their technologies as industry standards also increases. Introducing additional hardware requirements for compatibility can result in increased investments for both service providers and consumers.

    Ensuring the compatibility of IoT devices and networks with conventional transmission technologies becomes a crucial and challenging task. The sheer diversity of devices, applications, transmission technologies, and gateways presents a complex landscape to navigate. Achieving compatibility and seamless interoperability becomes essential for establishing an efficient IoT infrastructure.

    To create an effective and scalable IoT ecosystem, it becomes imperative to standardise the underlying technologies. This includes defining and monitoring network protocols, transmission bands, data rates, and processing requirements. Standardisation enables consistent and harmonious communication between IoT devices, facilitating seamless integration, and enhancing the overall interoperability of the system.

    Other challenges

    Complexity

    The complexity of IoT necessitates careful design, testing, and implementation to meet specific requirements. Erlang and Elixir offer powerful solutions that significantly reduce development efforts while effectively managing complexity.

    These languages offer developers high-level abstractions to streamline IoT development, making it more efficient to address challenges. They provide features such as concurrency models, fault-tolerant systems, and distributed computing, empowering developers to create robust and scalable solutions.

    Erlang and Elixir excel in handling low-level binary data through declarative approaches like pattern matching. This simplifies device and protocol interoperability, reducing complexity and facilitating smoother communication within the IoT ecosystem.

    By utilising Erlang and Elixir, developers gain access to a powerful toolset that eases complexity in IoT systems. High-level abstractions and declarative programming allow developers to focus on problem-solving rather than low-level implementation details, leading to more efficient development processes and increased productivity.

    The language ecosystem of Erlang and Elixir is specifically designed to tackle complexity in IoT applications. Developers can effortlessly navigate the complexities of IoT and deliver reliable, scalable solutions by utilising its advanced features and abstractions.

    Erlang and Elixir are invaluable assets in the IoT landscape, providing the necessary tools and frameworks to simplify development, enhance interoperability, and unlock the potential of complex IoT systems.

    Power

    IoT, like other smart systems, relies on power for its sensors, wireless devices, and gadgets. Efficiency in power consumption is essential. The community is developing embedded Erlang systems to address these concerns and expand the platform into new domains.

    Cloud access

    Most IoT deployments will use a centralised, cloud-based process control system. Reducing latency and maintaining real-time connectivity in a cloud-based system are challenging.


    Part B: Conclusion

    Erlang and Elixir have emerged as robust and scalable alternatives for IoT, automation, and Artificial Intelligence solutions, showcasing their adaptability to evolving paradigms. These programming languages excel in constructing highly scalable and fault-tolerant systems. Erlang, developed for the telecommunications industry, supports 24/7 concurrent operations and error recovery.

    Its powerful, robust message brokers serve as the backbone for IoT integration. With low latency, fault tolerance, and massive concurrency, Erlang is an ideal choice for building message brokers that meet IoT demands. This emphasises Erlang’s and the BEAM’s relevance for use with IoT systems.

    These systems can be hosted on different BEAM implementations to meet specific needs such as power consumption and formal verification, making them versatile across low-power, aerospace, military, and healthcare applications. Specialised BEAM implementations are available for resource-limited environments, and formal verification ensures system accuracy.

    Erlang and Elixir are well-suited for building fault-tolerant systems in IoT and automation. They enable continuous operation under high demand or hardware failures. Elixir’s developer-friendly nature facilitates rapid software development without compromising quality.

    We look forward to taking you on this exciting journey as we explore even more of the powerful capabilities of Erlang and Elixir in revolutionising IoT and automation systems.

    The exploration doesn’t stop here. In our upcoming series, we’ll delve deeper into use cases that demonstrate how Erlang and Elixir effectively tackle the challenges posed by IoT.

    So stay tuned for more thought-provoking posts as we unlock the immense potential of these languages. Let’s continue the conversation!

    The post IoT Complexity Made Simple with the Versatility of Erlang and Elixir appeared first on Erlang Solutions .

    www.erlang-solutions.com/blog/iot-complexity-made-simple-erlang-elixir/