      Isode: Red/Black 2.0 – New Capabilities

      news.movim.eu / PlanetJabber · Friday, 21 April, 2023 - 15:46 · 1 minute

    This major release adds significant new functionality and improvements to Red/Black, a management tool that allows you to monitor and control devices and servers across a network, with a particular focus on HF Radio Systems. A general summary is given in the white paper Red/Black Overview.

    Switch Device

    Support has been added for Switch-type devices, which can connect multiple devices and allow an operator (red or black side) to change switch connections. Physical switch connectivity is configured by an administrator. The switch column can be hidden, so that logical connectivity through the switch is shown.

    SNMP Support

    A device driver for SNMP devices is provided, including SNMPv3 authorization. Abstract device specifications are included in Red/Black for:

    • SNMP System MIB
    • SNMP Host MIB
    • SNMP UPS MIB
    • Leonardo HF 2000 radio
    • IES Antenna Switch
    • eLogic Radio Gateway

    Abstract device specifications can be configured for other devices with suitable SNMP MIBs.

    Further details are provided in the Isode white paper “Managing SNMP Devices in Red/Black”.

    Alert Handling

    The UI shows all devices that have alerts which have not been handled by an operator. It enables an operator to see all unhandled alerts for a device and to mark some or all of them as handled.

    Device Parameter Display and Management

    A number of improvements have been made to the way device parameters are handled:

    • Improved general parameter display
    • Display in multiple columns, with selectable number of columns and choice of style, to better support devices with large numbers of parameters
    • Parameter grouping
    • Labelled integer support, so that semantics can be added to values
    • Configurable Colours
    • Display of parameter Units
    • Configurable parameter icons
    • Optimized UI for Device refresh; enable/disable; power off; and reset
    • Integer parameters can specify “interval”
    • Parameters with limited integer values can be selected as drop down

    Top Screen Display

    The top screen display is improved.

    • Modes of “Device” (monitoring)  and “Connectivity” with UIs optimized for these functions
    • Reduced clutter when no device is being examined
    • Allow columns to be hidden/restored so that the display can be tuned to operator needs
    • Show selected device parameters on top screen so that operator can see critical device parameters without needing to inspect the device details
    • UI clearly shows which links user can modify, according to operator or administrator rights

    www.isode.com/company/wordpress/red-black-2-0-new-capabilities/


      ProcessOne: ejabberd 23.04

      news.movim.eu / PlanetJabber · Wednesday, 19 April, 2023 - 07:48 · 13 minutes

    This new ejabberd 23.04 release includes many improvements and bug fixes, as well as some new features.

    ejabberd 23.04

    A more detailed explanation of these topics and other features:

    Many improvements to SQL databases

    There are many improvements in the area of SQL databases (see #3980 and #3982 ):

    • Added support for migrating MySQL and MS SQL to the new schema, fixed a long-standing bug, and made many other improvements.
    • Regarding MS SQL, there are schema fixes, support for the new schema and the corresponding schema migration, along with other minor improvements and bugfixes.
    • The automated ejabberd tests now also run on updated-schema databases, and support for running tests on MS SQL has been added.
    • Fixed other minor SQL schema inconsistencies, removed unnecessary indexes, and changed PostgreSQL SERIAL columns to BIGSERIAL.

    Please upgrade your existing SQL database; check the notes later in this document!

    Added mod_mam support for XEP-0425: Message Moderation

    XEP-0425: Message Moderation allows a Multi-User Chat (XEP-0045) moderator to moderate certain group chat messages, for example by removing them from the group chat history, as part of an effort to address and resolve issues such as message spam, inappropriate venue language, or revealing private personal information of others. It also allows moderators to correct a message on another user’s behalf, or flag a message as inappropriate, without having to retract it.

    Clients that currently support this XEP are Gajim, Converse.js and Monocles; Poezio and XMPP Web have read-only support.

    New mod_muc_rtbl module

    This new module implements a Real-Time Block List for MUC rooms. It works by monitoring remote pubsub nodes according to the specification described at xmppbl.org.
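
    Enabling it is a small change to ejabberd.yml. The fragment below is only a sketch: the option names rtbl_server and rtbl_node and their values are what the module documentation suggests as defaults and should be checked against your ejabberd version.

    modules:
      mod_muc: {}
      mod_muc_rtbl:
        # pubsub service publishing the block list (assumed default)
        rtbl_server: xmppbl.org
        # pubsub node carrying the banned-JID hashes (assumed default)
        rtbl_node: muc_bans_sha256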

    captcha_url option now accepts auto value

    In recent ejabberd releases, captcha_cmd got support for macros (in ejabberd 22.10 ) and support for using modules (in ejabberd 23.01 ).

    Now captcha_url gets an improvement: if set to auto, it tries to detect the URL automatically, taking into account the ejabberd configuration. This is now the default. It should be good enough in most cases, but setting the URL manually may still be necessary when using port forwarding or other very specific setups.
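
    As a minimal sketch (the script path below is just an example; any generator accepted by captcha_cmd will do), the relevant part of ejabberd.yml can now be as simple as:

    # Generate CAPTCHA images with a script and let ejabberd work out
    # the URL from its own listener configuration.
    captcha_cmd: /opt/ejabberd/lib/captcha.sh
    captcha_url: auto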

    Erlang/OTP 19.3 is deprecated

    This is the last ejabberd release with support for Erlang/OTP 19.3. If you have not already done so, please upgrade to Erlang/OTP 20.0 or newer before the next ejabberd release. See the ejabberd 22.10 release announcement for more details.

    About the binary packages provided for ejabberd:

    • The binary installers and container images now use Erlang/OTP 25.3 and Elixir 1.14.3.
    • The mix , ecs and ejabberd container images now use Alpine 3.17.
    • The ejabberd container image now supports an alternative build method, useful to work around a problem in QEMU and Erlang 25 when building the image for the arm64 architecture.

    Erlang node name in ecs container image

    The ecs container image is built using the files from docker-ejabberd/ecs and published in docker.io/ejabberd/ecs . This image generally gets only minimal fixes, no major or breaking changes, but in this release it got one change that requires administrator intervention.

    The Erlang node name is now fixed to ejabberd@localhost by default, instead of being variable based on the container hostname. If you previously allowed ejabberd to choose its node name (which was random), it will now create a new mnesia database instead of using the previous one:

    $ docker exec -it ejabberd ls /home/ejabberd/database/
    ejabberd@1ca968a0301a
    ejabberd@localhost
    ...
    

    A simple solution is to create a container that provides ERLANG_NODE_ARG with the old erlang node name, for example:

    docker run ... -e ERLANG_NODE_ARG=ejabberd@1ca968a0301a
    

    or in docker-compose.yml

    version: '3.7'
    services:
      main:
        image: ejabberd/ecs
        environment:
          - ERLANG_NODE_ARG=ejabberd@1ca968a0301a
    

    Another solution is to change the mnesia node name in the mnesia spool files.

    Other improvements to the ecs container image

    In addition to the previously mentioned change to the default erlang node name, the ecs container image has received other improvements:

    • For each commit to the docker-ejabberd repository that affects ecs and mix container images, those images are uploaded as artifacts and are available for download in the corresponding runs .
    • When a new release is tagged in the docker-ejabberd repository, the image is automatically published to ghcr.io/processone/ecs , in addition to being manually published to the Docker Hub.
    • There are new sections in the ecs README file: Clustering and Clustering Example .

    Documentation Improvements

    In addition to the usual improvements and fixes, some sections of the ejabberd documentation have been improved.

    Acknowledgments

    We would like to thank everyone who contributed source code, documentation and translations to this release.

    And also to all the people who help solve doubts and problems in the ejabberd chatroom and issue tracker.

    Updating SQL Databases

    These notes allow you to apply the SQL database schema improvements in this ejabberd release to your existing SQL database. Please take into account which database you are using, and whether it is on the default or the new schema.

    PostgreSQL new schema:

    Fixes a long-standing bug in the new schema on PostgreSQL. The fix for all existing affected installations is the same:

    ALTER TABLE vcard_search DROP CONSTRAINT vcard_search_pkey;
    ALTER TABLE vcard_search ADD PRIMARY KEY (server_host, lusername);
    

    PostgreSQL default or new schema:

    Convert these columns to BIGINT so the tables can grow beyond 2 billion rows. This conversion requires full table rebuilds and will take a long time if the tables already have many rows. This step is optional: it is not necessary if the tables will never grow that large.

    ALTER TABLE archive ALTER COLUMN id TYPE BIGINT;
    ALTER TABLE privacy_list ALTER COLUMN id TYPE BIGINT;
    ALTER TABLE pubsub_node ALTER COLUMN nodeid TYPE BIGINT;
    ALTER TABLE pubsub_state ALTER COLUMN stateid TYPE BIGINT;
    ALTER TABLE spool ALTER COLUMN seq TYPE BIGINT;
    

    PostgreSQL or SQLite default schema:

    DROP INDEX i_rosteru_username;
    DROP INDEX i_sr_user_jid;
    DROP INDEX i_privacy_list_username;
    DROP INDEX i_private_storage_username;
    DROP INDEX i_muc_online_users_us;
    DROP INDEX i_route_domain;
    DROP INDEX i_mix_participant_chan_serv;
    DROP INDEX i_mix_subscription_chan_serv_ud;
    DROP INDEX i_mix_subscription_chan_serv;
    DROP INDEX i_mix_pam_us;
    

    PostgreSQL or SQLite new schema:

    DROP INDEX i_rosteru_sh_username;
    DROP INDEX i_sr_user_sh_jid;
    DROP INDEX i_privacy_list_sh_username;
    DROP INDEX i_private_storage_sh_username;
    DROP INDEX i_muc_online_users_us;
    DROP INDEX i_route_domain;
    DROP INDEX i_mix_participant_chan_serv;
    DROP INDEX i_mix_subscription_chan_serv_ud;
    DROP INDEX i_mix_subscription_chan_serv;
    DROP INDEX i_mix_pam_us;
    

    Now add an index that might be missing.

    In PostgreSQL:

    CREATE INDEX i_push_session_sh_username_timestamp ON push_session USING btree (server_host, username, timestamp);
    

    In SQLite:

    CREATE INDEX i_push_session_sh_username_timestamp ON push_session (server_host, username, timestamp);
    

    MySQL default schema:

    ALTER TABLE rosterusers DROP INDEX i_rosteru_username;
    ALTER TABLE sr_user DROP INDEX i_sr_user_jid;
    ALTER TABLE privacy_list DROP INDEX i_privacy_list_username;
    ALTER TABLE private_storage DROP INDEX i_private_storage_username;
    ALTER TABLE muc_online_users DROP INDEX i_muc_online_users_us;
    ALTER TABLE route DROP INDEX i_route_domain;
    ALTER TABLE mix_participant DROP INDEX i_mix_participant_chan_serv;
    ALTER TABLE mix_participant DROP INDEX i_mix_subscription_chan_serv_ud;
    ALTER TABLE mix_participant DROP INDEX i_mix_subscription_chan_serv;
    ALTER TABLE mix_pam DROP INDEX i_mix_pam_u;
    

    MySQL new schema:

    ALTER TABLE rosterusers DROP INDEX i_rosteru_sh_username;
    ALTER TABLE sr_user DROP INDEX i_sr_user_sh_jid;
    ALTER TABLE privacy_list DROP INDEX i_privacy_list_sh_username;
    ALTER TABLE private_storage DROP INDEX i_private_storage_sh_username;
    ALTER TABLE muc_online_users DROP INDEX i_muc_online_users_us;
    ALTER TABLE route DROP INDEX i_route_domain;
    ALTER TABLE mix_participant DROP INDEX i_mix_participant_chan_serv;
    ALTER TABLE mix_participant DROP INDEX i_mix_subscription_chan_serv_ud;
    ALTER TABLE mix_participant DROP INDEX i_mix_subscription_chan_serv;
    ALTER TABLE mix_pam DROP INDEX i_mix_pam_us;
    

    Add an index that might be missing:

    CREATE INDEX i_push_session_sh_username_timestamp ON push_session (server_host, username(191), timestamp);
    

    MS SQL

    DROP INDEX [rosterusers_username] ON [rosterusers];
    DROP INDEX [sr_user_jid] ON [sr_user];
    DROP INDEX [privacy_list_username] ON [privacy_list];
    DROP INDEX [private_storage_username] ON [private_storage];
    DROP INDEX [muc_online_users_us] ON [muc_online_users];
    DROP INDEX [route_domain] ON [route];
    go
    

    The MS SQL schema was missing some tables that were added in earlier ejabberd versions:

    CREATE TABLE [dbo].[mix_channel] (
        [channel] [varchar] (250) NOT NULL,
        [service] [varchar] (250) NOT NULL,
        [username] [varchar] (250) NOT NULL,
        [domain] [varchar] (250) NOT NULL,
        [jid] [varchar] (250) NOT NULL,
        [hidden] [smallint] NOT NULL,
        [hmac_key] [text] NOT NULL,
        [created_at] [datetime] NOT NULL DEFAULT GETDATE()
    ) TEXTIMAGE_ON [PRIMARY];
    
    CREATE UNIQUE CLUSTERED INDEX [mix_channel] ON [mix_channel] (channel, service)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    CREATE INDEX [mix_channel_serv] ON [mix_channel] (service)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    CREATE TABLE [dbo].[mix_participant] (
        [channel] [varchar] (250) NOT NULL,
        [service] [varchar] (250) NOT NULL,
        [username] [varchar] (250) NOT NULL,
        [domain] [varchar] (250) NOT NULL,
        [jid] [varchar] (250) NOT NULL,
        [id] [text] NOT NULL,
        [nick] [text] NOT NULL,
        [created_at] [datetime] NOT NULL DEFAULT GETDATE()
    ) TEXTIMAGE_ON [PRIMARY];
    
    CREATE UNIQUE INDEX [mix_participant] ON [mix_participant] (channel, service, username, domain)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    CREATE INDEX [mix_participant_chan_serv] ON [mix_participant] (channel, service)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    CREATE TABLE [dbo].[mix_subscription] (
        [channel] [varchar] (250) NOT NULL,
        [service] [varchar] (250) NOT NULL,
        [username] [varchar] (250) NOT NULL,
        [domain] [varchar] (250) NOT NULL,
        [node] [varchar] (250) NOT NULL,
        [jid] [varchar] (250) NOT NULL
    );
    
    CREATE UNIQUE INDEX [mix_subscription] ON [mix_subscription] (channel, service, username, domain, node)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    CREATE INDEX [mix_subscription_chan_serv_ud] ON [mix_subscription] (channel, service, username, domain)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    CREATE INDEX [mix_subscription_chan_serv_node] ON [mix_subscription] (channel, service, node)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    CREATE INDEX [mix_subscription_chan_serv] ON [mix_subscription] (channel, service)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    CREATE TABLE [dbo].[mix_pam] (
        [username] [varchar] (250) NOT NULL,
        [channel] [varchar] (250) NOT NULL,
        [service] [varchar] (250) NOT NULL,
        [id] [text] NOT NULL,
        [created_at] [datetime] NOT NULL DEFAULT GETDATE()
    ) TEXTIMAGE_ON [PRIMARY];
    
    CREATE UNIQUE CLUSTERED INDEX [mix_pam] ON [mix_pam] (username, channel, service)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    
    go
    

    MS SQL also had some incompatible column types:

    ALTER TABLE [dbo].[muc_online_room] ALTER COLUMN [node] VARCHAR (250);
    ALTER TABLE [dbo].[muc_online_room] ALTER COLUMN [pid] VARCHAR (100);
    ALTER TABLE [dbo].[muc_online_users] ALTER COLUMN [node] VARCHAR (250);
    ALTER TABLE [dbo].[pubsub_node_option] ALTER COLUMN [name] VARCHAR (250);
    ALTER TABLE [dbo].[pubsub_node_option] ALTER COLUMN [val] VARCHAR (250);
    ALTER TABLE [dbo].[pubsub_node] ALTER COLUMN [plugin] VARCHAR (32);
    go
    

    … and the mqtt_pub table was incorrectly defined in the old schema:

    ALTER TABLE [dbo].[mqtt_pub] DROP CONSTRAINT [i_mqtt_topic_server];
    ALTER TABLE [dbo].[mqtt_pub] DROP COLUMN [server_host];
    ALTER TABLE [dbo].[mqtt_pub] ALTER COLUMN [resource] VARCHAR (250);
    ALTER TABLE [dbo].[mqtt_pub] ALTER COLUMN [topic] VARCHAR (250);
    ALTER TABLE [dbo].[mqtt_pub] ALTER COLUMN [username] VARCHAR (250);
    CREATE UNIQUE CLUSTERED INDEX [dbo].[mqtt_topic] ON [mqtt_pub] (topic)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    go
    

    … and the sr_group index/primary key was inconsistent with other databases:

    ALTER TABLE [dbo].[sr_group] DROP CONSTRAINT [sr_group_PRIMARY];
    CREATE UNIQUE CLUSTERED INDEX [sr_group_name] ON [sr_group] ([name])
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
    go
    

    ChangeLog

    General

    • New s2s_out_bounce_packet hook
    • Re-allow anonymous connection for connection without client certificates ( #3985 )
    • Stop ejabberd_system_monitor before stopping node
    • captcha_url option now accepts auto value, and it’s the default
    • mod_mam : Add support for XEP-0425: Message Moderation
    • mod_mam_sql : Fix problem with results of mam queries using rsm with max and before
    • mod_muc_rtbl : New module for Real-Time Block List for MUC rooms ( #4017 )
    • mod_roster : Set roster name from XEP-0172, or the stored one ( #1611 )
    • mod_roster : Preliminary support to store extra elements in subscription request ( #840 )
    • mod_pubsub : Pubsub xdata fields max_item/item_expira/children_max use max not infinity
    • mod_vcard_xupdate : Invalidate vcard_xupdate cache on all nodes when vcard is updated

    Admin

    • ext_mod : Improve support for loading *.so files from ext_mod dependencies
    • Improve output in gen_html_doc_for_commands command
    • Fix ejabberdctl output formatting ( #3979 )
    • Log HTTP handler exceptions

    MUC

    • New command get_room_history
    • Persist none role for outcasts
    • Try to populate room history from mam when unhibernating
    • Make mod_muc_room:set_opts process persistent flag first
    • Allow passing affiliations and subscribers to create_room_with_opts command
    • Store state in db in mod_muc:create_room()
    • Make subscribers members by default

    SQL schemas

    • Fix a long standing bug in new schema migration
    • update_sql command: Many improvements in new schema migration
    • update_sql command: Add support to migrate MySQL too
    • Change PostgreSQL SERIAL to BIGSERIAL columns
    • Fix minor SQL schema inconsistencies
    • Remove unnecessary indexes
    • New SQL schema migrate fix

    MS SQL

    • MS SQL schema fixes
    • Add new schema for MS SQL
    • Add MS SQL support for new schema migration
    • Minor MS SQL improvements
    • Fix MS SQL error caused by ORDER BY in subquery

    SQL Tests

    • Add support for running tests on MS SQL
    • Add ability to run tests on upgraded DB
    • Un-deprecate ejabberd_config:set_option/2
    • Use python3 to run extauth.py for tests
    • Correct README for creating test docker MS SQL DB
    • Fix TSQLlint warnings in MSSQL test script

    Testing

    • Fix Shellcheck warnings in shell scripts
    • Fix Remark-lint warnings
    • Fix Prospector and Pylint warnings in test extauth.py
    • Stop testing ejabberd with Erlang/OTP 19.3, as Github Actions no longer supports ubuntu-18.04
    • Test only with oldest OTP supported (20.0), newest stable (25.3) and bleeding edge (26.0-rc2)
    • Upload Common Test logs as artifact in case of failure

    ecs container image

    • Update Alpine to 3.17 to get Erlang/OTP 25 and Elixir 1.14
    • Add tini as runtime init
    • Set ERLANG_NODE fixed to ejabberd@localhost
    • Upload images as artifacts to Github Actions
    • Publish tag images automatically to ghcr.io

    ejabberd container image

    • Update Alpine to 3.17 to get Erlang/OTP 25 and Elixir 1.14
    • Add METHOD to build container using packages ( #3983 )
    • Add tini as runtime init
    • Detect runtime dependencies automatically
    • Remove unused Mix stuff: ejabberd script and static COOKIE
    • Copy captcha scripts to /opt/ejabberd-*/lib like the installers
    • Expose only HOME volume, it contains all the required subdirs
    • ejabberdctl: Don’t use .../releases/COOKIE , it’s no longer included

    Installers

    • make-binaries: Bump versions, e.g. erlang/otp to 25.3
    • make-binaries: Fix building with erlang/otp 25.x
    • make-packages: Fix for installers workflow, which didn’t find lynx

    Full Changelog

    https://github.com/processone/ejabberd/compare/23.01…23.04

    ejabberd 23.04 download & feedback

    As usual, the release is tagged in the git source repository on GitHub .

    The source package and installers are available on the ejabberd Downloads page. To verify the *.asc signature files, see How to verify the integrity of ProcessOne downloads .

    For convenience, there are alternative download locations such as the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags .

    The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs . The alternative ejabberd container image is available in ghcr.io/processone/ejabberd .

    If you think you’ve found a bug, please search or file a bug report at GitHub Issues .

    The post ejabberd 23.04 first appeared on ProcessOne .

      Sam Whited: Concord and Spring Road Linear Parks

      news.movim.eu / PlanetJabber · Saturday, 15 April, 2023 - 23:00 · 4 minutes

    In my earlier review of Rose Garden and Jonquil public parks I mentioned the Mountain-to-River Trail ( M2R ), a mixed-use bicycle and walking trail that connects the two parks.

    The two parks I’m going to review today are also connected by the M2R trail in addition to the Concord Road Trail , but unlike the previous parks these are linear parks that are integrated directly into the trails!

    Since the linear parks aren’t very large and don’t have much in the way of amenities to talk about, we’ll veer outside of our Smyrna focus and discuss a few other highlights of the Concord Road Trail and the southern portion of the M2R trail, starting with the Chattahoochee River.

    Paces Mill

    • Amenities: 🏞️ 👟 🥾 🛶 💩 🚲
    • Transportation: 🚍 🚴 🚣

    The southern terminus of the M2R trail is at the Chattahoochee River National Recreation Area ’s Paces Mill Unit. In addition to the paved walking and biking trails, the park has several miles of unpaved hiking trail, fishing, and of course the river itself. Dogs are allowed and bags are available near the entrance. If you head north on the paved Rottenwood Creek Trail you’ll eventually connect to the Palisades West Trails, the Bob Callan Trail, and the Akers Mill East Trail, giving you access to one of the largest connected mixed-use trail systems in the Atlanta area!

    If, instead, you head out of the park to the south on the M2R trail you’ll quickly turn back north into the urban sprawl of the Atlanta suburbs. In approximately 2km you’ll reach the Cumberland Transfer Center where you can catch a bus to most anywhere in Cobb, or transfer to MARTA in Atlanta-proper. At this point the trail also forks for a more direct route to the Silver Comet Trail using the Silver Comet Cumberland Connector trail. We may take that trail another day, but for now we’ll continue north on the M2R trail. Just a bit further north there are also short connector trails to Cobb Galleria Center (an exhibit hall and convention center) and The Battery, a mixed-use development surrounding the Atlanta Braves baseball stadium.

    It’s at this point that the trail turns west along Spring Road where it coincides with the Spring Road Trail that connects to the previously-reviewed Jonquil Park (a total ride of ~3.7km). Shortly thereafter we reach our first actual un-reviewed Smyrna park: the Spring Road Linear Park.

    a map showing bike directions between the CRNRA at Paces Mill and Spring Road Linear Park

    Spring Road Linear Park

    • Amenities: 👟 💩
    • Transportation: 🚍 🚴

    The Spring Road Linear Park stretches 1.1km along the M2R Trail and is easily accessed by both bike (of course) and bus via CobbLinc Route 25 .

    The park does not have a sign or other markers, but it does have several nice pull-offs with benches that make a good stopover point on your way to or from the buses at the Cumberland Transfer Center. If you’re out walking the dog, public trash cans and dog-poo bags are available on the east end of the park, but do keep in mind that the main trail is mixed-use, so dogs should be kept to one side of the trail to avoid incidents with bikes.

    a trail stretches out ahead with a side trail providing access from a neighborhood a small parklet to the side of the trail contains benches and dog poo bags

    After a short climb the trail turns north again and intersects with the Concord Road Trail and the Atlanta Road Trail. We could veer just off the trail near this point to reach Durham Park , the subject of a future review, but instead we’ll continue west, transitioning to the Concord Road Trail to reach our next park: Concord Road Linear Park.

    map of the bike trail between Spring Road Linear Park and Concord Road Linear Park

    Concord Road Linear Park

    • Amenities: 👟 🔧 📚 💩
    • Transportation: 🚍 🚴

    The Concord Road Linear Park sits in the middle of the mid-century Smyrna Heights neighborhood and has something special that’s not often found in poorly designed suburban neighborhoods: (limited) mixed-use zoning! A restaurant and bar (currently seafood) sits at the edge of the park along with a bike repair stand and bike parking.

    It’s worth commending Smyrna for creating this park at all; it may be small, but in addition to the mixed-use zoning it did something that’s also not often seen in the burbs: it removed part of Evelyn Street, disconnecting it from the nearest arterial road! In the war-on-cars this is a small but important victory that creates a quality-of-life improvement for everyone in the neighborhood, whether they bike, walk the dog, or just take a stroll over to the restaurants in the town square without having to be molested by cars.

    a street ends and becomes a walking path into a park

    Formerly part of Evelyn Street, now a path

    Silver Comet Concord Road Trail Head

    • Amenities: 🚻 🍳 👟 📚
    • Transportation: 🚴

    In our next review we’ll turn back and continue up the M2R trail to reach a few other parks, but if we were to continue we’d find that the Concord Road Trail continues for another 4km until it terminates at the Silver Comet Trail’s Concord Road Trail Head . This trail head sits at mile marker 2.6 on the Silver Comet Trail, right by the Concord Covered Bridge Historic District .

    The Silver Comet will likely be covered in future posts, so for now I’ll leave it there. Thanks for bearing with me while we take a detour away from the City of Smyrna’s parks; next time the majority of the post will be about parks within the city, I promise.

    map of the bike trail between Concord Road Linear Park and the Silver Comet Concord Road Trail Head

    blog.samwhited.com/2023/04/concord-and-spring-road-linear-parks/


      Erlang Solutions: Optimización para lograr concurrencia: comparación y contraste de las máquinas virtuales BEAM y JVM

      news.movim.eu / PlanetJabber · Wednesday, 12 April, 2023 - 10:11 · 17 minutes

    In this post we will explore the internals of the BEAM virtual machine, or VM for short, and compare it with the Java Virtual Machine, the JVM.

    The success of any programming language in the Erlang ecosystem can be attributed to three tightly coupled components:

    1. the semantics of the Erlang programming language, which is the foundation other languages are implemented on,
    2. the OTP libraries and middleware used to build scalable, concurrent and resilient architectures, and
    3. the BEAM virtual machine, tightly coupled to the language semantics and to OTP.

    Take any of these components on its own and you have a potential winner. Consider the three together and you have an undisputed winner for developing scalable, resilient, soft real-time systems. Quoting Joe Armstrong:

    “You can copy the Erlang libraries, but if they don’t run on the BEAM, you can’t emulate the semantics.”

    This idea is reinforced by Robert Virding’s First Rule of Programming, which states that “any sufficiently complicated concurrent program written in another language contains an ad hoc, informally specified, bug-ridden, slow implementation of half of Erlang.”

    In this post we will explore the internals of the BEAM virtual machine. We will compare some of them with the JVM, pointing out why you should pay special attention to them. For a long time these components have been treated as a black box that we blindly trust without understanding what is behind it. It is time to change that!

    Highlights of the BEAM

    Erlang and the BEAM virtual machine were invented to be a tool that solves a specific problem. They were developed by Ericsson to help implement telecom infrastructure handling both fixed and mobile networks. This infrastructure is by nature highly concurrent and has to scale. It has to work in real time and should ideally never fail. We don’t want our Hangouts call with our grandmother to suddenly drop because of an error, or our online game of Fortnite to be interrupted because it has to be updated. The BEAM virtual machine is optimized to solve many of these challenges, thanks to features that work under a predictable concurrent programming model.

    The secret sauce is Erlang processes: lightweight, sharing no memory, and managed by schedulers capable of handling millions of them across multiple processors. The garbage collector runs on a per-process basis and is highly optimized to reduce the impact on other processes. As a result, the garbage collector does not affect the global real-time properties of the system. The BEAM is also the only widely used virtual machine at scale with a made-to-measure distribution model, which lets a program run transparently across multiple machines.

    Highlights of the JVM

    The Java Virtual Machine (JVM) was developed by Sun Microsystems with the aim of providing a platform where you “write code once” and run it anywhere. They created an object-oriented language similar to C++, but memory-safe, because its runtime error detection checks array bounds and pointer dereferences. The JVM ecosystem became extremely popular in the Internet era, turning into the de facto standard for enterprise server application development. The wide range of applicability was made possible by a virtual machine that caters to many use cases and by an impressive set of libraries supporting enterprise development.

    The JVM was designed with efficiency in mind. Most of its concepts are abstractions of features found in popular operating systems, such as the threading model, which mirrors operating system threads. The JVM is highly customizable, including the garbage collector and class loaders. Some state-of-the-art garbage collector implementations provide tunable features catering to a programming model based on shared memory.

    The JVM allows you to modify code while the program is running. And a JIT compiler allows bytecode to be compiled to native machine code with the intent of speeding up parts of the application.

    Concurrency in the Java world is mostly about running applications in parallel threads, ensuring they are fast. Programming with concurrency primitives is a difficult task because of the challenges created by its shared-memory model. To overcome these difficulties, there have been attempts to simplify and unify the concurrent programming models, the most successful of which is the Akka framework.

    Concurrency and Parallelism

    When we talk about parallel code execution, we mean that parts of the code run at the same time on multiple processors or computers, while concurrent programming refers to handling events that arrive independently. A concurrent execution can be simulated on single-threaded hardware, while a parallel one cannot. Although this distinction may seem pedantic, the difference results in problems that are solved with very different approaches. Think of many cooks preparing a dish of pasta carbonara. In the parallel approach, the tasks are split across the number of cooks available, and a single portion is completed as quickly as it takes those cooks to finish their specific tasks. In a concurrent world, you get one portion per cook, where each cook does all of the tasks.

    Use parallelism for speed and concurrency for scalability.

    Parallel execution tries to find an optimal decomposition of the problem into parts that are independent of each other. Boil the water, fetch the pasta, beat the eggs, fry the ham, grate the cheese. The shared data (or, in our example, the dish to be served) is handled with locks, mutexes and other techniques that guarantee correct execution. Another way to look at it is that the data (or ingredients) are all present, and we want to use as many parallel CPU resources as possible to finish the job as fast as we can.

    Concurrent programming, on the other hand, deals with many events arriving at the system at different times, and tries to process all of them within a reasonable time. On multi-processor or distributed architectures part of the execution happens in parallel, but this is not a requirement. Another way to look at it is that the same cook boils the water, fetches the pasta, beats the eggs, and so on, following a sequential algorithm that is always the same. What changes across processes (or cooking runs) is the data (or ingredients) to work on, which exist in multiple instances.

    The JVM is designed for parallelism, the BEAM for concurrency. They are two intrinsically different problems that require different solutions.

    The BEAM and Concurrency

    The BEAM provides lightweight processes to give context to the running code. These processes, also called actors, share no memory and communicate through message passing, copying data from one process to another. Message passing is a feature that the virtual machine implements through per-process mailboxes. It is a non-blocking operation, which means that sending a message from one process to another is almost instantaneous and the sender’s execution is not blocked. The messages sent are immutable data, copied from the stack of the sending process into the mailbox of the receiving one. This is achieved without locks or mutexes between processes; the only lock on the mailbox is for the case where several processes send a message to the same recipient in parallel.
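
    As a minimal sketch of what this looks like in code (the module name and message shapes below are made up for illustration), a process is spawned, owns its own mailbox, and is only ever reached through asynchronous sends:

    -module(ping_example).
    -export([run/0]).

    %% Spawn a lightweight process and exchange one message with it.
    %% The send (!) is non-blocking and the tuple is copied into the
    %% receiver's mailbox; no locks or shared memory are involved.
    run() ->
        Echo = spawn(fun loop/0),
        Echo ! {self(), {add, 2, 3}},
        receive
            {Echo, Result} -> Result          %% -> 5
        after 1000 ->
            timeout
        end.

    loop() ->
        receive
            {From, {add, A, B}} ->
                From ! {self(), A + B},
                loop()
        end.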

    Immutable data and message passing allow the programmer to write processes that work independently of each other and to focus on functionality instead of low-level memory management and task scheduling. This simple design works not only within a single thread, but also across multiple threads on a local machine running in the same VM and, using the built-in distribution, across the network with a cluster of VMs and machines. Because messages are immutable between processes, they can be sent to another thread (or machine) without locks, scaling almost linearly on distributed multi-processor architectures. Processes are addressed in the same way on a local VM as in a cluster of VMs; sending messages works transparently regardless of the location of the receiving process.

    Processes share no memory, which allows data to be replicated for resilience and distributed for scale. This means you can have two instances of the same process on two separate machines, sharing state updates between them. If one machine fails, the other has a copy of the data and can keep handling requests, making the system fault tolerant. If both machines are up, both processes can handle requests, providing scalability. The BEAM provides highly optimized primitives to make all of this work seamlessly, while OTP (the standard library) provides the higher-level constructs that make programmers’ lives easier.

    Akka does a great job of replicating those higher-level constructs, but it is somewhat limited by the lack of primitives in the JVM that would allow it to be highly optimized for concurrency. While the JVM primitives enable a wider range of use cases, they make the development of distributed systems more complicated: there are no default facilities for communication, and they often rely on a shared-memory model. For example, where in a distributed system do you place the shared memory? And what is the cost of accessing it?

    Scheduler

    We mentioned that one of the strongest features of the BEAM is the ability to break a program down into small, lightweight processes. Managing these processes is the job of the scheduler. Unlike the JVM, which maps its threads to OS threads and lets the operating system manage them, the BEAM comes with its own scheduler.

    By default, the scheduler starts one OS thread per processor on the machine and optimizes the workload between them. Each process consists of code to be executed and a state that changes over time. The scheduler picks the first process in the run queue that is ready to run and hands it a certain number of reductions to execute, where one reduction is roughly equivalent to one command. Once the process has run out of reductions, is blocked on I/O, is waiting for a message, or has completed its execution, the scheduler picks the next process from the run queue and dispatches it. This scheduling technique is called pre-emptive.
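
    You can observe some of this from an Erlang shell. The calls below are standard BIFs; the numbers shown are only illustrative:

    %% One scheduler thread per core by default:
    1> erlang:system_info(schedulers_online).
    8
    %% {total reductions executed, reductions since the last call}:
    2> erlang:statistics(reductions).
    {23422049,1859}
    %% Reductions consumed by a single process (here, the shell itself):
    3> process_info(self(), reductions).
    {reductions,4942}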

    We mention the Akka framework several times because its biggest drawback is the need to annotate the code with scheduling points, since scheduling is not handled at the level of the JVM. By taking this control out of the programmer’s hands, the BEAM preserves and guarantees its real-time properties, as there is no risk of accidentally causing process starvation.

    Processes can be spread across all of the available scheduler threads, maximizing CPU utilization. There are many ways to tweak the scheduler, but it is rarely needed and only required in edge cases, as the default options cover most usage patterns.

    A sensitive topic that comes up frequently around schedulers is how to handle Natively Implemented Functions, or NIFs. A NIF is a snippet of code written in C, compiled as a library and run in the same memory space as the BEAM for speed. The problem with NIFs is that they are not pre-emptive and can affect the schedulers. In recent BEAM versions a new feature, dirty schedulers, was added to give better control over NIFs. Dirty schedulers are separate schedulers running on different threads, to minimize the disruption a NIF can cause in a system. The word dirty refers to the nature of the code these schedulers run.

    Garbage Collector

    Modern programming languages use a garbage collector for memory management, and the languages on the BEAM are no exception. Trusting the virtual machine to handle resources and manage memory is very useful when you want to write high-level concurrent code, as it simplifies the task. The underlying implementation of the garbage collector is fairly straightforward and efficient, thanks to the memory model based on immutable state. Data is copied, not mutated, and the fact that processes share no memory removes any process interdependencies, which therefore do not need to be managed.

    Another feature of the BEAM garbage collector is that it only runs when needed, on a per-process basis, without affecting other processes waiting in the run queue. As a result, garbage collection in Erlang does not stop the world. It avoids latency spikes in processing, because the virtual machine is never stopped as a whole; only specific processes are, and never all of them at the same time. In practice it is just part of what a process does, and it is treated as another reduction. The garbage collector suspends a process for a very short interval, in the order of microseconds. As a result, there are many small bursts of collection, triggered only when a process needs more memory. A single process usually does not allocate large amounts of memory and is often short-lived, which further reduces the impact by immediately freeing all of its memory when it terminates. A feature of the JVM is the ability to swap garbage collectors, so by using a commercial collector it is also possible to achieve continuous, non-stopping collection on the JVM.

    The characteristics of the garbage collector are discussed in an excellent post by Lukas Larsson. There are many intricate details, but it is optimized to handle immutable data efficiently, dividing the data between the stack and the heap for each process. The best approach is to do the majority of the work in short-lived processes.

    A question that often comes up on this topic is how much memory the BEAM uses. Digging in a little, the virtual machine allocates large chunks of memory and uses custom allocators to store the data efficiently and minimize the overhead of system calls. This has two visible effects:

    1) The memory in use decreases gradually after the space is no longer needed.

    2) Reallocating large amounts of data can mean doubling the current working memory.

    The first effect can, if really necessary, be mitigated by tuning the allocator strategies. The second is easy to monitor and plan for if you have visibility of the different types of memory usage. (One monitoring tool that provides such system metrics out of the box is WombatOAM.)
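
    A hedged, illustrative shell session for this kind of visibility (the figures are examples, not expected values):

    %% Total memory currently allocated by the VM, in bytes:
    1> erlang:memory(total).
    21912472
    %% Memory footprint and GC settings of a single process:
    2> process_info(self(), [memory, garbage_collection]).
    [{memory,13832},
     {garbage_collection,[{min_bin_vheap_size,46422},
                          {min_heap_size,233},
                          {fullsweep_after,65535},
                          {minor_gcs,3}]}]
    %% Force a collection for this one process only; the rest of the VM keeps running:
    3> erlang:garbage_collect(self()).
    true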

    Hot Code Loading

    Hot code loading is probably the most frequently cited unique feature of the BEAM. It means that the application logic can be updated by changing the executable code in the system while the internal process state is preserved. This is achieved by replacing the loaded BEAM files and instructing the virtual machine to replace the code references in the running processes.

    It was a fundamental feature for guaranteeing zero downtime in telecom infrastructure, where redundant hardware was used to handle peaks. Nowadays, in the era of containerization, other techniques are also used to roll out updates to a production system. Those who have never used or needed it dismiss it as a not-so-important feature, but it should not be underestimated in the development workflow. Developers can iterate faster by replacing part of their code without having to restart the system to test it. Even if the application is not designed to be upgradable in production, this can cut the time needed to recompile and redeploy the system.
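
    A minimal sketch of the mechanism from an Erlang shell (my_module is a hypothetical module name):

    %% Recompile and load the new version into the running node:
    1> c(my_module).
    {ok,my_module}
    %% Or, with an already compiled .beam file on the code path, drop the
    %% old version and load the new one explicitly:
    2> code:purge(my_module).
    false
    3> code:load_file(my_module).
    {module,my_module}

    OTP release handling builds on this same mechanism to upgrade whole applications while preserving process state.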

    When Not to Use the BEAM

    To a large extent, it is about picking the right tool for the job.

    Do you need a system that is extremely fast, but you are not worried about concurrency? Do you want to handle a few events in parallel, quickly? Do you need to crunch numbers for graphics, AI or analytics? Then go down the C++, Python or Java route. Telecom infrastructure does not need fast operations on floats, so speed was never a priority. With dynamic typing, which has to do all of the type checks at runtime, compiler optimizations are not as trivial. So number crunching is best left to the JVM, Go or other languages that compile natively. It is no surprise that floating-point operations in Erjang, the version of Erlang that runs on the JVM, were 5000% faster than on the BEAM. On the other hand, where we have really seen the BEAM shine is in using its concurrency to orchestrate number crunching, outsourcing the analysis to C, Julia, Python or Rust. You do the map outside the BEAM and the reduce within it.

    The mantra has always been “fast enough”. It takes humans a few hundred milliseconds to perceive a stimulus (an event) and process it in their brain, which means that micro- or nanosecond response times are not necessary for many applications. Nor is the BEAM recommended for microcontrollers, as it needs too many resources. But for embedded systems with a bit more processing power, where multi-core processors are becoming the norm, you need concurrency, and that is where the BEAM shines. Back in the 90s we were implementing telephony switches handling tens of thousands of subscribers, running on embedded boards with 16 MB of memory. How much memory does a Raspberry Pi have these days?

    And finally, hard real-time systems. You probably do not want the BEAM managing your airbag control system. You need hard guarantees, a real-time operating system and a language with no garbage collection or exceptions. An implementation of an Erlang virtual machine running on bare metal, such as GRiSP, will give you similar guarantees.

    Conclusion

    Use the right tool for the job.

    If you are writing a soft real-time system that has to scale out of the box and never fail, and you want to do it without the hassle of reinventing the wheel, the BEAM is definitely the technology you are looking for. For many, it works as a black box. Not knowing how it works would be like driving a Ferrari without being able to reach optimal performance or understand where that strange engine noise is coming from. That is why it is important to learn more about the BEAM, understand its internals and be ready to fine-tune it. For those who have used Erlang and Elixir in anger, we have launched a one-day instructor-led course that will demystify and explain much of what you saw while preparing you to handle massive concurrency at scale. The course is available through our new remote instructor-led training; learn more here. We also recommend the book The BEAM by Erik Stenman and BEAM Wisdoms, a collection of articles by Dmytro Lytovchenko.

    The post Optimización para lograr concurrencia: comparación y contraste de las máquinas virtuales BEAM y JVM appeared first on Erlang Solutions .

    www.erlang-solutions.com/blog/optimizacion-para-lograr-concurrencia-comparacion-y-contraste-de-las-maquinas-virtuales-beam-y-jvm/


      JMP: Verify Google Play App Purchase on Your Server

      news.movim.eu / PlanetJabber · Tuesday, 11 April, 2023 - 14:59 · 5 minutes

    We are preparing for the first-ever Google Play Store launch of Cheogram Android as part of JMP coming out of beta later this year.  One of the things we wanted to “just work” for Google Play users is to be able to pay for the app and get their first month of JMP “bundled” into that purchase price, to smooth the common onboarding experience.  So how do the JMP servers know that the app communicating with them is running a version of the app bought from Google Play as opposed to our builds, F-Droid’s builds, or someone’s own builds?  And also ensure that this person hasn’t already got a bundled month before?  The documentation available on how to do this is surprisingly sparse, so let’s do this together.

    Client Side

    Google publishes an official Licensing Verification Library for communicating with Google Play from inside an Android app to determine if this install of the app can be associated with a Google Play purchase. Most existing documentation focuses on using this library; however, it does not expose anything in the callbacks other than “yes, license verified” or “no, not verified”. This can allow an app to check whether it is a purchased copy itself, but it is not so useful for communicating that proof onward to a server. The library also contains some exciting snippets like:

    // Base64 encoded -
    // com.android.vending.licensing.ILicensingService
    // Consider encoding this in another way in your
    // code to improve security
    Base64.decode(
        "Y29tLmFuZHJvaWQudmVuZGluZy5saWNlbnNpbmcuSUxpY2Vuc2luZ1NlcnZpY2U=")))

    This implies that they expect developers to fork this code in order to use it. Digging into the code, we find in LicenseValidator.java:

    public void verify(PublicKey publicKey, int responseCode, String signedData, String signature)

    This looks like exactly what we need: the actual signed assertion from Google Play and the signature! So we just need a small patch to pass those along to the callback, alongside the response code that is currently passed. Then we can use the excellent jitpack to include the forked library in our app:

    implementation 'com.github.singpolyma:play-licensing:1c637ea03c'

    Then we write a small class in our app code to actually use it:

    import android.content.Context;
    import com.google.android.vending.licensing.*;
    import java.util.function.BiConsumer;
    
    public class CheogramLicenseChecker implements LicenseCheckerCallback {
        private final LicenseChecker mChecker;
        private final BiConsumer<String, String> mCallback;
    
        public CheogramLicenseChecker(Context context, BiConsumer<String, String> callback) {
            mChecker = new LicenseChecker(  
                context,  
                new StrictPolicy(), // Want to get a signed item every time  
                context.getResources().getString(R.string.licensePublicKey)  
            );
            mCallback = callback;
        }
    
        public void checkLicense() {
            mChecker.checkAccess(this);
        }
    
        @Override
        public void dontAllow(int reason) {
            mCallback.accept(null, null);
        }
    
        @Override
        public void applicationError(int errorCode) {
            mCallback.accept(null, null);
        }
    
        @Override
        public void allow(int reason, ResponseData data, String signedData, String signature) {
            mCallback.accept(signedData, signature);
        }
    }

    Here we use the StrictPolicy from the License Verification Library because we want to get fresh signed data every time, and if the device is offline the whole question is moot because we won’t be able to contact the server anyway.

    This code assumes you put the Base64 encoded licensing public key from “Monetisation Setup” in Play Console into a resource R.string.licensePublicKey .

    Then we need to communicate this to the server, which you can do whatever way makes sense for your protocol; with XMPP we can easily add custom elements to our existing requests so:

    new com.cheogram.android.CheogramLicenseChecker(context, (signedData, signature) -> {
        if (signedData != null && signature != null) {
            c.addChild("license", "https://ns.cheogram.com/google-play").setContent(signedData);
            c.addChild("licenseSignature", "https://ns.cheogram.com/google-play").setContent(signature);
        }
    
        xmppConnectionService.sendIqPacket(getAccount(), packet, (a, iq) -> {
            session.updateWithResponse(iq);
        });
    }).checkLicense();

    Server Side

    When trying to verify this on the server side we quickly run into some new issues.  What format is this public key in?  It just says “public key” and is Base64 but that’s about it.  What signature algorithm is used for the signed data?  What is the format of the data itself?  Back to the library code!

    private static final String KEY_FACTORY_ALGORITHM = "RSA";
    …
    byte[] decodedKey = Base64.decode(encodedPublicKey);
    …
    new X509EncodedKeySpec(decodedKey)

    So we can see it is an X.509-related encoding, and it indeed turns out to be Base64-encoded DER. So we can run this:

    echo "BASE64_STRING" | base64 -d | openssl rsa -pubin -inform der -in - -text

    to get the raw properties we might need for any library (key size, modulus, and exponent).  Of course, if your library supports parsing DER directly you can also use that.

    import java.security.Signature;
    …
    private static final String SIGNATURE_ALGORITHM = "SHA1withRSA";
    …
    Signature sig = Signature.getInstance(SIGNATURE_ALGORITHM);
    sig.initVerify(publicKey);
    sig.update(signedData.getBytes());

    Combined with the Java documentation, we can thus say that the signature algorithm is PKCS#1-padded RSA with SHA-1.

    And finally:

    String[] fields = TextUtils.split(mainData, Pattern.quote("|"));
    data.responseCode = Integer.parseInt(fields[0]);
    data.nonce = Integer.parseInt(fields[1]);
    data.packageName = fields[2];
    data.versionCode = fields[3];
    // Application-specific user identifier.
    data.userId = fields[4];
    data.timestamp = Long.parseLong(fields[5]);

    The format of the data is pipe-separated text. The main field of interest for us is userId, which (as a comment in the code says) is “a user identifier unique to the <application, user> pair”. So in our server code:

    import Control.Error (atZ)
    import qualified Data.ByteString.Base64 as Base64
    import qualified Data.Text as T
    import Crypto.Hash.Algorithms (SHA1(SHA1))
    import qualified Crypto.PubKey.RSA as RSA
    import qualified Crypto.PubKey.RSA.PKCS15 as RSA
    import qualified Data.XML.Types as XML
    
    googlePlayUserId
        | googlePlayVerified = (T.split (=='|') googlePlayLicense) `atZ` 4
        | otherwise = Nothing
    googlePlayVerified = fromMaybe False $ fmap (\pubKey ->
        RSA.verify (Just SHA1) pubKey (encodeUtf8 googlePlayLicense)
            (Base64.decodeLenient $ encodeUtf8 googlePlaySig)
        ) googlePlayPublicKey
    googlePlayLicense = mconcat $ XML.elementText
        =<< XML.isNamed (s"{https://ns.cheogram.com/google-play}license")
        =<< XML.elementChildren payload
    googlePlaySig = mconcat $ XML.elementText
        =<< XML.isNamed (s"{https://ns.cheogram.com/google-play}licenseSignature")
        =<< XML.elementChildren payload

    We can then use the verified and extracted googlePlayUserId value to check if this user has got a bundled month before and, if not, to provide them with one during signup.

    • wifi_tethering open_in_new

      This post is public

      blog.jmp.chat /b/play-purchase-verification-2023

    • chevron_right

      Erlang Solutions: You’ve been curious about LiveView, but you haven’t gotten into it

      news.movim.eu / PlanetJabber · Thursday, 6 April, 2023 - 10:04 · 21 minutes

    As a backend developer, I’ve spent most of my programming career away from frontend development. Whether it’s React/Elm for the web or Swift/Kotlin for mobile, these are fields of knowledge that fall outside of what I usually work with.

    Nonetheless, I always wanted to have a tool at my disposal for building rich frontends. While the web seemed like the platform with the lowest bar of entry for this, the size of the Javascript ecosystem had become so vast that familiarizing oneself with it was no small task.

    This is why I got very excited when Chris McCord first showed LiveView to the world. Building interactive frontends, with no Javascript required? This sounded like it was made for all of us Elixir backend developers that were “frontend curious”.

    However, if you haven’t already jumped into it, you might be hesitant to start. After all: it’s often not just about learning LiveView as if you were writing a greenfield project, but about how you would add LiveView into that Phoenix app that you’re already working on.

    Therefore, throughout this guide, I’ll presume that you already have an existing project that you wish to integrate LiveView into. If you have the luxury of a clean slate, then other resources (such as the Programming Phoenix LiveView book, by Bruce A. Tate and Sophie DeBenedetto ) may be of more use.

    I hope that this article may serve you well as a starting point!

    Will it work for my use case?

    You might have some worries about whether LiveView is a technology that you can introduce to your application. After all: no team likes to adopt a technology that they later figure out does not suit their use case.

    There are some properties of LiveView which are inherent to the technology, and therefore must be considered:

    Offline mode

    The biggest question is whether you need an offline mode for your application. My guess is that you probably do not need it, but if you do, LiveView is not the technology for you. The reason for this is that LiveView is rendered on the backend, necessitating communication with it.

    Latency

    The second biggest question: do you expect the latency from your clients to the server to be high, and would high latency be a serious detriment to your application?

    As Chris McCord put it in his announcement blog post on the Dockyard blog:

    “Certain use cases and experiences demand zero-latency, as well as offline capabilities. This is where Javascript frameworks like React, Ember, etc., shine.”

    Almost every interaction with a LiveView interface will send a request to the server; while requests will have highly optimized payloads, if you expect the average round trip from client to server to be too many milliseconds, then the user experience will suffer. LiveView ships with tools for testing your application with increased latency, but if you already know there’s a latency ceiling your clients must stay under yet very likely would exceed, then LiveView may not be suitable.

    If these are not of concern to your use case, then let’s get going!

    What does it take for me to start?

    Phoenix setup

    First of all, you’ll want to have a recent version of Phoenix, and your code up-to-date. Following are upgrade guides for older projects:

    LiveView setup

    The next step is to install LiveView into your existing project. The LiveView documentation has a great section on the subject: Installing LiveView into an existing project .

    The guide is rather straightforward, so I will not reiterate its contents here. The only comment I’ll add is that the section at the very end about adding a topbar is (as the documentation points out) optional. It should be said, however, that this is added by default in new LiveView projects, so if you want a setup that’s as close as possible to a freshly generated project, you should include it.

    At this point, you should have everything ready for introducing your own LiveView code!

    Quick LiveView overview

    Before we get to the actual coding, let’s go through the life cycle of a LiveView page. Here’s a high-level overview:

    The first request made to a LiveView route will be a plain HTTP request. The router will invoke a LiveView module, which calls the mount/3 function and then the render/1 function. This will render a static page (SEO-friendly out-of-the-box, by the way!), with the required Javascript for LiveView to work. The page then opens a WebSocket connection between the client and the server.

    After the WebSocket connection has been established, we get into the LiveView life cycle:

    Note that mount/3 and render/1 will be called again, this time over the WebSocket connection. While this probably will not be something you need to worry about when writing your first LiveView pages, it might be of relevance to know that this is the case (discussion about this can be read here). If you have a very expensive function call to make, and you only want to do it once, consider using the connected?/1 function.
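
    For instance, a minimal sketch of guarding an expensive call with connected?/1 inside mount/3 might look like this (Dashboard.expensive_summary/0 is a made-up placeholder, not part of the example app):

    def mount(_params, _session, socket) do
      # Do the expensive work only once the WebSocket connection is up;
      # the initial static render gets a cheap placeholder instead.
      summary =
        if connected?(socket) do
          Dashboard.expensive_summary()
        else
          nil
        end

      {:ok, assign(socket, summary: summary)}
    end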

    After render/1 has been called a second time, we get into the LiveView loop: wait for events, send the events over the wire, change the state on the server, then send back the minimal required data for updating the page on the client.

    Let’s now see how we’ll need to change your code to get to this LiveView flow.

    Making things live

    Now you might be asking:

    “OK, so the basics have been set up. What are the bare minimum things to get a page to be live?”

    You’ll need to do the following things:

    1. Convert an existing route to a live one
    2. Convert the controller module into a live module
    3. Modify the templates
    4. Introduce liveness

    Let’s go over them, one by one:

    Bringing life to the dead

    Here’s a question I once had, that you might be wondering:

    If I’ve got a regular (“dead”) Phoenix route, can I just add something live to a portion of the page, on the existing “dead” route?

    Considering how LiveView works, I’d like to transform the question into two new (slightly different) questions:

    1. Can one preserve the current routes and controllers, having them execute live code?
    2. Can one express the live interactions in the dead controllers?

    The answer to the first question: yes, but generally you won’t. You won’t, because of the answer to the second question: no, you’ll need separate live modules to express the live interactions.

    This leads to an important point:

    If you want some part of a page to be live, then your whole page has to be live.

    Technically, you can have the route be something other than live (e.g. a get route), and you would then use Phoenix.LiveView.Controller.live_render/3 in a “dead” controller function to render a LiveView module. This does still mean, however, that the page (the logic and templates) will be defined by the live module. You’re not “adding something live to a portion of the dead page”, but rather delegating to a live module from a dead route; you’ll still have to migrate the logic and templates to the live module.
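
    As a hedged sketch of what that delegation could look like in a dead controller action (this is not the route we’ll take in this guide):

    defmodule HomeAutomationWeb.ThermostatController do
      use HomeAutomationWeb, :controller

      def index(conn, _params) do
        # The "dead" route stays, but the page itself is defined by the live module.
        Phoenix.LiveView.Controller.live_render(conn, HomeAutomationWeb.ThermostatLive)
      end
    end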

    Therefore, your live code will be in LiveView modules (instead of your current controller modules), invoked by live routes. As a sidenote: while it’s not covered by this article, you’ll eventually group live routes with live_session/3, enabling redirects between routes without full page reloads.
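
    Purely as a preview of that sidenote, grouping live routes with live_session/3 in router.ex might look roughly like this (the session name and the second route are placeholders):

    scope "/", HomeAutomationWeb do
      pipe_through [:browser, :logged_in]

      # Live navigation between routes in the same live_session reuses the
      # existing WebSocket connection instead of doing full page loads.
      live_session :logged_in do
        live "/live/thermostats", ThermostatLive
        live "/live/heating", HeatingLive
      end
    end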

    Introducing a live route

    Many tutorials and videos about LiveView use the example of programming a continuously updating rendering of a thermostat. Let’s therefore presume that you’ve got a home automation application, and up until now you had to go to /thermostats and refresh the page to get the latest data.

    The router.ex might look something like this:

    defmodule HomeAutomationWeb.Router do
      use HomeAutomationWeb, :router
    
      pipeline :browser do
        # ...
      end
    
      pipeline :logged_in do
        # ...
      end
    
      scope "/", HomeAutomationWeb do
        pipe_through [:browser, :logged_in]
    
        # ...
    
        resources "/thermostats", ThermostatController
        post "/thermostats/reboot", ThermostatController, :reboot
      end
    end
    

    This is a rather simple router (with some lines removed for brevity), but you can probably figure out how this compares to your code. We’re using a call to Phoenix.Router.resources/2 here to cover a standard set of CRUD actions; your set of actions could be different.

    Let’s introduce the following route after the post-route:

    live "/live/thermostats", ThermostatLive
    

    The ThermostatLive will be the module to which we’ll be migrating logic from ThermostatController.

    Creating a live module to migrate to

    Creating a skeleton

    Let’s start by creating a directory for LiveView modules, then create an empty thermostat_live.ex in that directory.

    $ mkdir lib/home_automation_web/live
    $ touch lib/home_automation_web/live/thermostat_live.ex
    

    It might seem a bit strange to create a dedicated directory for the live modules, considering that the dead parts of your application already have controller/template/view directories. This convention, however, allows one to make use of the following feature from the Phoenix.LiveView.render/1 callback (slight changes by me, for readability):

    If you don’t define [render/1 in your LiveView module], LiveView will attempt to render a template in the same directory as your LiveView. For example, if you have a LiveView named MyApp.MyCustomView inside lib/my_app/live_views/my_custom_view.ex, Phoenix will look for a template at lib/my_app/live_views/my_custom_view.html.heex.

    This means that it’s common for LiveView projects to have a live directory with file pairs, such as foobar.ex and foobar.html.heex, i.e. module and corresponding template. Whether you inline your template in the render/1 function or put it in a dedicated file is up to you.

    Open the lib/home_automation_web/live/thermostat_live.ex file, and add the following skeleton of the ThermostatLive module:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view
    
      def mount(_params, _session, socket) do
        {:ok, socket}
      end
    
      def render(assigns) do
        ~H"""
        <div id="thermostats">
          <p>Thermostats</p>
        </div>
        """
      end
    end
    

    There are two mandatory callbacks in a LiveView module: mount/3 and render/1. As mentioned earlier, you can leave out render/1 if you have a template file with the right file name. You can also leave out mount/3, but that would mean that you neither want to set any state nor do any work on mount, which is unlikely.

    Migrating mount logic

    Let’s now look at our imagined HomeAutomationWeb.ThermostatController, to see what we’ll be transferring over to ThermostatLive:

    defmodule HomeAutomationWeb.ThermostatController do
      use HomeAutomationWeb, :controller
    
      alias HomeAutomation.Thermostat
    
      def index(conn, _params) do
        thermostats = Thermostat.all_for_user(conn.assigns.current_user)
    
        render(conn, :index, thermostats: thermostats)
      end
    
      # ...
    
      def reboot(conn, %{"id" => id}) do
        {:ok, thermostat} =
          id
          |> Thermostat.get!()
          |> Thermostat.reboot()
    
        conn
        |> put_flash(:info, "Thermostat '#{thermostat.room_name}' rebooted.")
        |> redirect(to: Routes.thermostat_path(conn, :index))
      end
    end
    

    We’ll be porting a subset of the functions that are present in the controller module: index/2 and reboot/2. This is mostly to have two somewhat different controller actions to work with.

    Let’s first focus on the index/2 function. We could imagine that Thermostat.all_for_user/1 makes a database call of some kind, possibly with Ecto. conn.assigns.current_user would be added to the assigns by the logged_in Plug in the pipeline in the router.

    Let’s naively move over the ThermostatController.index/2 logic to the LiveView module, and take it from there:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view
    
      alias HomeAutomation.Thermostat
    
      def mount(_params, _session, socket) do
        thermostats = Thermostat.all_for_user(socket.assigns.current_user)
    
        {:ok, assign(socket, %{thermostats: thermostats})}
      end
    
      def render(assigns) do
        ~H"""
        <div id="thermostats">
          <p>Thermostats</p>
        </div>
        """
      end
    end
    

    Firstly, we’re inserting the index/2 logic into the mount/3 function of ThermostatLive, meaning that the data will be called for on page load.

    Secondly, notice that we changed the argument to Thermostat.all_for_user/1 from conn.assigns.current_user to socket.assigns.current_user. This is just a change of variable name, of course, but it signifies a change in the underlying data structure: you’re not working with a Plug.Conn struct, but rather with a Phoenix.LiveView.Socket.

    So far we’ve written some sample template code inside the render/1 function definition, and we haven’t seen the actual templates that would render the thermostats, so let’s get to those.

    Creating live templates

    Let’s presume that you have a rather simple index page, listing all of your thermostats.

    <h1>Listing Thermostats</h1>
    
    <%= for thermostat <- @thermostats do %>
      <div class="thermostat">
        <div class="row">
          <div class="column">
            <ul>
              <li>Room name: <%= thermostat.room_name %></li>
              <li>Temperature: <%= thermostat.temperature %></li>
            </ul>
          </div>
    
          <div class="column">
            Actions: <%= link("Show", to: Routes.thermostat_path(@conn, :show, thermostat)) %>
            <%= link("Edit", to: Routes.thermostat_path(@conn, :edit, thermostat)) %>
            <%= link("Delete",
              to: Routes.thermostat_path(@conn, :delete, thermostat),
              method: :delete,
              data: [confirm: "Are you sure?"]
            ) %>
          </div>
    
          <div class="column">
            <%= form_for %{}, Routes.thermostat_path(@conn, :reboot), fn f -> %>
              <%= hidden_input(f, :id, value: thermostat.id) %>
              <%= submit("Reboot", class: "rounded-full") %>
            <% end %>
          </div>
        </div>
      </div>
    <% end %>
    
    <%= link("New Thermostat", to: Routes.thermostat_path(@conn, :new)) %>
    

    Each listed thermostat has the standard resource links of Show/Edit/Delete, with a New-link at the very end of the page. The only thing that goes beyond the usual CRUD actions is the form_for, defining a Reboot-button. The Reboot-button will initiate a request to the POST /thermostats/reboot route.

    As previously mentioned, we can either move this template code into the ThermostatLive.render/1 function, or we can create a template file named lib/home_automation_web/live/thermostat_live.html.heex. To get used to the new ways of LiveView, let’s put the code into the render/1 function. You can always extract it later (but remember to delete the render/1 function, if you do!).

    The first step would be to simply copy-paste everything, with the small change that you need to replace every instance of @conn with @socket. Here’s what ThermostatLive will look like:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view
    
      alias HomeAutomation.Thermostat
    
      def mount(_params, _session, socket) do
        thermostats = Thermostat.all_for_user(socket.assigns.current_user)
    
        {:ok, assign(socket, %{thermostats: thermostats})}
      end
    
      def render(assigns) do
        ~H"""
        <h1>Listing Thermostats</h1>
    
        <%= for thermostat <- @thermostats do %>
          <div class="thermostat">
            <div class="row">
              <div class="column">
                <ul>
                  <li>Room name: <%= thermostat.room_name %></li>
                  <li>Temperature: <%= thermostat.temperature %></li>
                </ul>
              </div>
    
              <div class="column">
                Actions: <%= link("Show", to: Routes.thermostat_path(@socket, :show, thermostat)) %>
                <%= link("Edit", to: Routes.thermostat_path(@socket, :edit, thermostat)) %>
                <%= link("Delete",
                  to: Routes.thermostat_path(@socket, :delete, thermostat),
                  method: :delete,
                  data: [confirm: "Are you sure?"]
                ) %>
              </div>
    
              <div class="column">
                <%= form_for %{}, Routes.thermostat_path(@socket, :reboot), fn f -> %>
                  <%= hidden_input(f, :id, value: thermostat.id) %>
                  <%= submit("Reboot", class: "rounded-full") %>
                <% end %>
              </div>
            </div>
          </div>
        <% end %>
    
        <%= link("New Thermostat", to: Routes.thermostat_path(@socket, :new)) %>
        """
      end
    end
    

    While this makes the page render, both the links and the form still do the same “dead” navigation as before, leading to full-page reloads, not to mention that they currently take us out of the live page.

    To make the page more live, let’s focus on making the clicking of the Reboot-button result in a LiveView event, instead of a regular POST with subsequent redirect.

    Changing the button to something live

    The Reboot-button is a good target to turn live, as it should just fire an asynchronous event, without redirecting anywhere. Let’s have a look at how the button is currently defined:

    <%= form_for %{}, Routes.thermostat_path(@socket, :reboot), fn f -> %>
      <%= hidden_input(f, :id, value: thermostat.id) %>
      <%= submit("Reboot", class: "rounded-full") %>
    <% end %>
    

    The reason why the “dead” template used a form_for with a submit is two-fold. Firstly, since the action of rebooting the thermostat is not a navigation action, using an anchor tag (<a>) styled to look like a button would not be appropriate: using a form with a submit button is better, since it indicates that an action will be performed, and the action is clearly defined by the form’s method and action attributes. Secondly, a form allows you to include a CSRF token, which is automatically injected into the resulting <form> with form_for.

    Let’s look at what the live version will look like:

    <%= link("Reboot",
      to: "#",
      phx_click: "reboot",
      phx_value_id: thermostat.id,
      data: [confirm: "Are you sure?"]
    ) %>
    

    Let’s break this down a bit:

    A note about <form>

    First thing to note: this is no longer a <form>!

    Above I mentioned CSRF protection being a reason for using the <form>, but the Channel (i.e. the WebSocket connection between server and client) is already protected with a CSRF token, so we can send LiveView events without worrying about this.

    The detail above about navigation technically still applies, but in LiveView one would (generally) use a link with to: “#” for most things functioning like a button.

    As a minor note: you’ll still be using forms in LiveView for data input, although you’ll be using the <.form> component , instead of calling form_for .

    The phx_click event

    The second thing to note is the phx_click attribute and its value “reboot”. The key indicates what event should be fired when interacting with the generated <a> tag. The various possible event bindings can be found here:

    https://hexdocs.pm/phoenix_live_view/bindings.html

    If you want to have a reference for what events you can work with in LiveView, the link above is a good one to bookmark!

    Clarifying a potentially confusing detail: the events listed in the above linked documentation use hyphens (-) as separators in their names. link uses underscores (_), but apart from this, the event names are the same.

    The “reboot” string specifies the “name” of the event that is sent to the server. We’ll see the usage of this string in a second.

    The value attribute

    Finally, let’s talk about the phx_value_id attribute. phx_value_id is special, in that part of the attribute name is user defined. The phx_value_-part of the attribute name indicates to LiveView that the attribute is an “event value”, and what follows after phx_value_ (in our case: id) will be the key name in the resulting “event data map” on the server side. The value of the attribute will become the value in the map.

    This means that this…:

    phx_value_id: "thermostat_13",

    …will be received as the following on the server:

    %{id: "thermostat_13"}

    Further explanation can be found in the documentation:

    https://hexdocs.pm/phoenix_live_view/bindings.html#click-events
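
    For comparison, if you write the markup directly in HEEx rather than going through the link helper, the same button uses the hyphenated binding names, something like:

    <a href="#" phx-click="reboot" phx-value-id={thermostat.id} data-confirm="Are you sure?">
      Reboot
    </a>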

    Adding the corresponding event to the LiveView module

    Now that we’ve changed the Reboot-button in the template, we can get to the final step: amending the ThermostatLive module to react to the “reboot” event. We need to add a handle_event function to the module, and we’ll use the logic that we saw earlier in ThermostatController.reboot/2:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view
    
      alias HomeAutomation.Thermostat
    
      def mount(_params, _session, socket) do
        # ...
      end
    
      def handle_event("reboot", %{"id" => id}, socket) do
        {:ok, thermostat} =
          id
          |> Thermostat.get!()
          |> Thermostat.reboot()
    
        {:noreply,
          put_flash(
            socket,
            :info,
            "Thermostat '#{thermostat.room_name}' rebooted."
          )}
      end
    
      def render(assigns) do
        # ...
      end
    end
    

    This handle_event function will react to the “reboot” event. The first argument to the function is the event name, the second is any passed data (through phx-value-*), and finally the socket.

    A quick note about the :noreply: presume that you’ll be using {:noreply, socket}, as the alternative ({:reply, map, socket}) is rarely useful. Just don’t worry about this, for now.

    That’s it!

    If you’ve been following this guide, trying to adapt it to your application, then you should have something like the following:

    1. A live route.
    2. A live module, where you’ve ported some of the logic from the controller module.
    3. A template that’s been adapted to be rendered by a live module.
    4. An element on the page that, when interacted with, causes an event to fire, with no need for a page refresh.

    At this stage, one would probably want to address the other CRUD actions, at the very least having their navigation point to the live route, e.g. creating a new thermostat should not result in a redirect to the dead route. Even better would be to have the CRUD actions all be changed to be fully live, requiring no page reloads. However, this is unfortunately outside of the scope of this guide.

    I hope that this guide has helped you to take your first steps toward working with LiveView!

    Further reading

    Here’s some closing advice that you might find useful, if you want to continue on your own.

    Exploring generators

    A very educative thing to do is comparing what code Phoenix generates for “dead” pages vs. live pages.

    Following are the commands for first generating a “dead” CRUD page setup for a context (Devices) and entity (Thermostat), and then generating the same context and entity in a live fashion. The resulting git commits illustrate how the same intent is expressed in the two styles.

    $ mix phx.new home_automation --live
    $ cd home_automation
    $ git init .
    $ git add .
    $ git commit -m "Initial commit"
    $ mix phx.gen.html Devices Thermostat thermostats room_name:string temperature:integer
    $ git add .
    $ git commit -m "Added Devices context with Thermostat entity"
    $ git show
    $ mix phx.gen.live Devices Thermostat thermostats room_name:string temperature:integer
    $ git add .
    $ git commit -m "Added Live version of Devices with Thermostat"
    $ git show
    

    Note that when you get to the phx.gen.live step, you’ll have to answer Y to a couple of questions, as you’ll be overwriting some code. Also, you’ll generate a superfluous Ecto migration, which you can ignore.

    Study these generated commits, the resulting files, and the difference between the generated approaches, as it helps a lot with understanding how the transition from dead to live is done.

    Broadcasting events

    You might want your live module to react to specific events in your application. In the case of the thermostat application it could be the change of temperature on any of the thermostats, or the reboot status getting updated asynchronously. In the case of a LiveView chat application, it would be receiving a new message from someone in the conversation.

    A very commonly used method for generating and listening to events is making use of Phoenix.PubSub. Not only is Phoenix.PubSub a robust solution for broadcasting events, it is pulled in as a dependency of Phoenix, so you should already have the package installed.

    There are numerous guides out there for how to make use of Phoenix.PubSub, but a good place to start is probably watching how Chris McCord uses LiveView and Phoenix.PubSub to create a Twitter clone, in about 15 minutes (the part with Phoenix.PubSub is about half-way through the video).
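
    If it helps to see the shape of it, here is a rough sketch of a LiveView subscribing to a topic and reacting to broadcasts; the HomeAutomation.PubSub name, the "thermostats" topic, and the {:temperature_changed, thermostat} message shape are assumptions made for illustration:

    defmodule HomeAutomationWeb.ThermostatLive do
      use HomeAutomationWeb, :live_view

      alias HomeAutomation.Thermostat

      def mount(_params, _session, socket) do
        # Subscribe only on the WebSocket mount, not on the static render.
        if connected?(socket) do
          Phoenix.PubSub.subscribe(HomeAutomation.PubSub, "thermostats")
        end

        {:ok, assign(socket, thermostats: Thermostat.all_for_user(socket.assigns.current_user))}
      end

      # Something elsewhere broadcasts, e.g.:
      # Phoenix.PubSub.broadcast(HomeAutomation.PubSub, "thermostats", {:temperature_changed, thermostat})
      def handle_info({:temperature_changed, thermostat}, socket) do
        # Swap in the updated thermostat; LiveView diffs and patches the page.
        thermostats =
          Enum.map(socket.assigns.thermostats, fn t ->
            if t.id == thermostat.id, do: thermostat, else: t
          end)

        {:noreply, assign(socket, thermostats: thermostats)}
      end
    end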

    HTTP verbs

    Regarding HTTP verbs, coming from the world of dead routes, you might be wondering:

    I’ve got various GET/POST/PUT/etc. routes that serve different purposes. When building live modules, do all of the routes (with their different HTTP verbs) just get replaced with live?

    Yes, mostly. Generally your live parts of the application will handle their communication over the WebSocket connection, sending various events. This means that any kind of meaning you wish to communicate through the various HTTP verbs will instead be communicated through various events.

    With that said, you may still have parts of your application that will be accessed with regular HTTP requests, which would be a reason to keep those routes around. They will not, however, be called from your live components.

    Credits

    Last year, Stone Filipczak wrote an excellent guide on the SmartLogic blog, on how to quickly introduce LiveView to an existing Phoenix app. It was difficult to not have overlap with that guide, so my intention has been to complement it. Either way, I encourage you to check it out!

    The post You’ve been curious about LiveView, but you haven’t gotten into it appeared first on Erlang Solutions .

    • wifi_tethering open_in_new

      This post is public

      www.erlang-solutions.com /blog/youve-been-curious-about-liveview-but-you-havent-gotten-into-it/

    • chevron_right

      Erlang Solutions: Change Data Capture with Postgres and Elixir

      news.movim.eu / PlanetJabber · Wednesday, 5 April, 2023 - 06:37 · 20 minutes

    Change data capture is the process of identifying and capturing data changes in the database.

    With change data capture, changes to data can be tracked in near real time, and that information can be used to support a variety of use cases, including auditing, replication and synchronization.

    A good example of a use case for change data capture is an application that inserts a record into the database and sends an event to a message queue after the record has been inserted (write twice).

    Imagine you are working on an e-commerce application, and after an order is created and inserted into the database, an OrderCreated event is sent to a message queue. Consumers of the event could do things such as create pick orders for the warehouse, schedule transports for delivery, and send an order confirmation email to the customer.

    But what happens if the application crashes after the order has been inserted into the database, but before it manages to send the event to the message queue? This is possible due to the fact that you cannot atomically insert the record AND send the message in the same transaction, so if the application crashes after inserting the record into the database but before sending the event to the queue, the event is lost.

    Of course, there are workarounds to avoid this: a simple solution is to “stage” the event in an outbox table in the same transaction in which the record is written, and then rely on a change data capture process to capture the change in the outbox table and send the event to the message queue. The transaction is atomic, and the change data capture process can ensure the event is delivered to the queue at least once.
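
    As a rough Elixir sketch of that outbox-style workaround (the Order and OutboxEvent schemas and the Repo module are assumptions for illustration, not code from this article), the record and the staged event can be written atomically with Ecto.Multi:

    alias Ecto.Multi

    def create_order(attrs) do
      Multi.new()
      # Insert the order itself...
      |> Multi.insert(:order, Order.changeset(%Order{}, attrs))
      # ...and stage the outbox event in the same transaction,
      # so either both are committed or neither is.
      |> Multi.insert(:event, fn %{order: order} ->
        OutboxEvent.changeset(%OutboxEvent{}, %{
          type: "OrderCreated",
          payload: %{order_id: order.id}
        })
      end)
      |> Repo.transaction()
    end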

    To capture changes, change data capture typically uses one of two methods: log-based or trigger-based.

    Log-based change data capture involves reading the database’s transaction logs to identify data changes, which is the method we will use here by way of Postgres logical replication.

    Postgres replication

    There are two modes of replication in Postgres:

    1. Physical replication: every change from the primary is streamed to the replicas through the WAL (Write-Ahead Log). This replication is performed byte by byte, with exact block addresses.
    2. Logical replication: in logical replication, the subscriber receives each individual transactional change (i.e. INSERT, UPDATE or DELETE statements) in the database.

    The WAL is still streamed, but it encodes the logical operations so that they can be decoded by the subscriber without having to know Postgres internals.

    One of the big advantages of logical replication is that it can be used to replicate only specific tables or rows, meaning you have full control over what is being replicated.

    To enable logical replication, wal_level needs to be configured:

    -- determines how much information is written to the wal. 
    -- Each 'level' inherits the level below it; 'logical' is the highest level
    
    ALTER SYSTEM SET wal_level=logical;
    
    -- simultaneously running WAL sender processes
    ALTER SYSTEM SET max_wal_senders='10';
    
    -- simultaneously defined replication slots
    ALTER SYSTEM SET max_replication_slots='10';
    

    The changes require a restart of the Postgres instance.

    After restarting the system, the wal_level can be verified with:

    SHOW wal_level;
     wal_level 
    -----------
     logical
    (1 row)
    

    To subscribe to changes, a publication must be created. A publication is a group of tables for which we would like to receive data changes.

    Let’s create a simple table and define a publication for it:

    CREATE TABLE articles (id serial PRIMARY KEY, title text, description text, body text);
    CREATE PUBLICATION articles_pub FOR TABLE articles;
    

    To tell Postgres to retain WAL segments, we must create a replication slot.

    The replication slot represents a stream of changes from one or more publications and is used to prevent data loss in the event of a server failure, as slots are crash-safe.

    Replication protocol

    To get a feel for the protocol and the messages being sent, we can use pg_recvlogical to start a replication subscriber:

    # Start and use the publication defined above
    # output is written to stdout
    pg_recvlogical --start \
      --host='localhost' \
      --port='5432' \
      --username='postgres' \
      --dbname='postgres' \
      --option=publication_names='articles_pub' \
      --option=proto_version=1 \
      --create-slot \
      --if-not-exists \
      --slot=articles_slot \
      --plugin=pgoutput \
      --file=-
    

    Insert a record:

    INSERT INTO articles (title, description, body)
        VALUES ('Postgres replication', 'Using logical replication', 'Foo bar baz');
    

    Each line in the output corresponds to a replication message received through the subscription:

    B(egin) - Begin transaction 
    R(elation) - Table, schema, columns and their types
    I(insert) - Data being inserted
    C(ommit) - Commit transaction
    
    ___________________________________
    
    B
    
    Rarticlesdidtitledescriptionbody
    It35tPostgres replicationtUsing logical replicationtFoo bar baz
    C
    

    If we insert multiple records in one transaction, we should get two I’s between B and C:

    BEGIN;
    INSERT INTO articles (title, description, body) VALUES ('First', 'desc', 'Foo');
    
    INSERT INTO articles (title, description, body) VALUES ('Second', 'desc', 'Bar');
    COMMIT;
    

    And the output:

    C
    B
    
    It37tFirsttdesctFoo
    It38tSecondtdesctBar
    C
    

    The relation information, i.e. the table, was not transmitted because the relation had already been received when the first record was inserted.

    Postgres only sends the relation the first time it is encountered during the session. The subscriber is expected to cache a previously sent relation.

    Now that we have an idea of how logical replication works, let’s implement it in Elixir!

    Implementing the replication connection

    Create a new Elixir project:

    mix new cdc
    

    We’ll add the following dependencies to mix.exs:

    defp deps do
      {:postgrex, "~> 0.16.4"},
      # decode/encode replication messages
      {:postgrex_pgoutput, "~> 0.1.0"}
    end
    

    Postgrex supports replication through the Postgrex.ReplicationConnection process.

    defmodule CDC.Replication do
      use Postgrex.ReplicationConnection
      require Logger
    
      defstruct [
        :publications,
        :slot,
        :state
      ]
    
      def start_link(opts) do
        conn_opts = [auto_reconnect: true]
        publications = opts[:publications] || raise ArgumentError, message: "`:publications` is required"
        slot = opts[:slot] || raise ArgumentError, message: "`:slot` is required"
    
        Postgrex.ReplicationConnection.start_link(
          __MODULE__,
          {slot, publications},
          conn_opts ++ opts
        )
      end
    
      @impl true
      def init({slot, pubs}) do
        {:ok, %__MODULE__{slot: slot, publications: pubs}}
      end
    
      
      @impl true
      def handle_connect(%__MODULE__{slot: slot} = state) do
        query = "CREATE_REPLICATION_SLOT #{slot} TEMPORARY LOGICAL pgoutput NOEXPORT_SNAPSHOT"
    
        Logger.debug("[create slot] query=#{query}")
    
        {:query, query, %{state | state: :create_slot}}
      end
    
      @impl true
      def handle_result([%Postgrex.Result{} | _], %__MODULE__{state: :create_slot, publications: pubs, slot: slot} = state) do
        opts = [proto_version: 1, publication_names: pubs]
    
        query = "START_REPLICATION SLOT #{slot} LOGICAL 0/0 #{escape_options(opts)}"
    
        Logger.debug("[start streaming] query=#{query}")
    
        {:stream, query, [], %{state | state: :streaming}}
      end
    
      @impl true
      def handle_data(msg, state) do
        Logger.debug("Received msg=#{inspect(msg, limit: :infinity, pretty: true)}")
        {:noreply, [], state}
      end
    
      defp escape_options([]),
        do: ""
    
      defp escape_options(opts) do
        parts =
          Enum.map_intersperse(opts, ", ", fn {k, v} -> [Atom.to_string(k), ?\s, escape_string(v)] end)
    
        [?\s, ?(, parts, ?)]
      end
    
      defp escape_string(value) do
        [?', :binary.replace(to_string(value), "'", "''", [:global]), ?']
      end
    end
    

    The code is available on GitHub

    Let’s try it out:

    opts = [
      slot: "articles_slot_elixir",
      publications: ["articles_pub"],
      host: "localhost",
      database: "postgres",
      username: "postgres",
      password: "postgres",
      port: 5432,
    ]
    
    CDC.Replication.start_link(opts)
    

    When we start the process, the following happens:

    1. Once we are connected to Postgres, the handle_connect/1 callback is called and a temporary logical replication slot is created.
    2. handle_result/2 is called with the result of the query from step 1. If the slot was created successfully, we start streaming from the slot and enter streaming mode. The requested position ‘0/0’ means that Postgres picks the position.
    3. Any replication message sent from Postgres is received in the handle_data/2 callback.

    Replication messages

    There are two types of messages a subscriber receives:

    1. primary_keep_alive: a check-in message; if reply == 1, the subscriber is expected to respond to the message with a standby_status_update to avoid a timeout disconnect.

    The standby_status_update contains the current LSN the subscriber has processed.

    Postgres uses this message to determine which WAL segments can be safely removed.

    2. xlog_data: contains the data messages for each step in a transaction.

    Since we are not responding to the primary_keep_alive messages, the process disconnects and restarts.

    Let’s fix this by decoding the messages and starting to reply with standby_status_update messages.

    
    defmodule CDC.Replication do
      use Postgrex.ReplicationConnection
    
      require Postgrex.PgOutput.Messages
      alias Postgrex.PgOutput.{Messages, Lsn}
    
      require Logger
    
      defstruct [
        :publications,
        :slot,
        :state
      ]
    
      def start_link(opts) do
        conn_opts = [auto_reconnect: true]
        publications = opts[:publications] || raise ArgumentError, message: "`:publications` is required"
        slot = opts[:slot] || raise ArgumentError, message: "`:slot` is required"
    
        Postgrex.ReplicationConnection.start_link(
          __MODULE__,
          {slot, publications},
          conn_opts ++ opts
        )
      end
    
      @impl true
      def init({slot, pubs}) do
        {:ok, %__MODULE__{slot: slot, publications: pubs}}
      end
    
      @impl true
      def handle_connect(%__MODULE__{slot: slot} = state) do
        query = "CREATE_REPLICATION_SLOT #{slot} TEMPORARY LOGICAL pgoutput NOEXPORT_SNAPSHOT"
    
        Logger.debug("[create slot] query=#{query}")
    
        {:query, query, %{state | state: :create_slot}}
      end
    
      @impl true
      def handle_result(
            [%Postgrex.Result{} | _],
            %__MODULE__{state: :create_slot, publications: pubs, slot: slot} = state
          ) do
        opts = [proto_version: 1, publication_names: pubs]
    
        query = "START_REPLICATION SLOT #{slot} LOGICAL 0/0 #{escape_options(opts)}"
    
        Logger.debug("[start streaming] query=#{query}")
    
        {:stream, query, [], %{state | state: :streaming}}
      end
    
      @impl true
      def handle_data(msg, state) do
        return_msgs =
          msg
          |> Messages.decode()
          |> handle_msg()
    
        {:noreply, return_msgs, state}
      end
    
      #
      defp handle_msg(Messages.msg_primary_keep_alive(server_wal: lsn, reply: 1)) do
        Logger.debug("msg_primary_keep_alive message reply=true")
        <<lsn::64>> = Lsn.encode(lsn)
    
        [standby_status_update(lsn)]
      end
    
      defp handle_msg(Messages.msg_primary_keep_alive(reply: 0)), do: []
    
      defp handle_msg(Messages.msg_xlog_data(data: data)) do
        Logger.debug("xlog_data message: #{inspect(data, pretty: true)}")
        []
      end
    
      defp standby_status_update(lsn) do
        [
          wal_recv: lsn + 1,
          wal_flush: lsn + 1,
          wal_apply: lsn + 1,
          system_clock: Messages.now(),
          reply: 0
        ]
        |> Messages.msg_standby_status_update()
        |> Messages.encode()
      end
    
      
    defp escape_options([]),
        do: ""
    
      defp escape_options(opts) do
        parts =
          Enum.map_intersperse(opts, ", ", fn {k, v} -> [Atom.to_string(k), ?\s, escape_string(v)] end)
    
        [?\s, ?(, parts, ?)]
      end
    
      defp escape_string(value) do
        [?', :binary.replace(to_string(value), "'", "''", [:global]), ?']
      end
    end
    

    handle_data/2 decodes the message and passes it to handle_msg/1. If it is a primary_keep_alive, we respond with a standby_status_update.

    The LSN denotes a byte position in the WAL.

    The subscriber responds with the LSN it has currently handled; since we are not keeping track of the messages we receive, we simply acknowledge with the LSN sent from the server.
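
    For intuition, an LSN such as {0, 22981920} (the tuple form you’ll see in the inspected output further down) is just the two 32-bit halves of one 64-bit WAL position; a minimal sketch of that packing, separate from the Postgrex.PgOutput.Lsn helpers used in the code:

    # Pack the {high, low} halves into the 64-bit integer used by
    # standby_status_update, then split it back apart.
    {hi, lo} = {0, 22_981_920}
    <<lsn::64>> = <<hi::32, lo::32>>
    <<hi::32, lo::32>> = <<lsn::64>>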

    Next, we’ll handle the xlog_data messages; the idea here is that we’ll capture each operation into a transaction struct.

    Capturing transactions

    The CDC.Protocol module will handle the xlog_data messages and keep track of the state of the transaction:

    defmodule CDC.Protocol do
      import Postgrex.PgOutput.Messages
      require Logger
    
      alias CDC.Tx
      alias Postgrex.PgOutput.Lsn
    
      
    @type t :: %__MODULE__{
              tx: Tx.t(),
              relations: map()
            }
    
      defstruct [
        :tx,
        relations: %{}
      ]
    
      @spec new() :: t()
      def new do
        %__MODULE__{}
      end
    
      def handle_message(msg, state) when is_binary(msg) do
        msg
        |> decode()
        |> handle_message(state)
      end
    
      def handle_message(msg_primary_keep_alive(reply: 0), state), do: {[], nil, state}
      def handle_message(msg_primary_keep_alive(server_wal: lsn, reply: 1), state) do
        Logger.debug("msg_primary_keep_alive message reply=true")
        <<lsn::64>> = Lsn.encode(lsn)
    
        {[standby_status_update(lsn)], nil, state}
      end
    
      def handle_message(msg, %__MODULE__{tx: nil, relations: relations} = state) do
        tx =
          [relations: relations, decode: true]
          |> Tx.new()
          |> Tx.build(msg)
    
        {[], nil, %{state | tx: tx}}
      end
    
      def handle_message(msg, %__MODULE__{tx: tx} = state) do
        case Tx.build(tx, msg) do
          %Tx{state: :commit, relations: relations} ->
            tx = Tx.finalize(tx)
            relations = Map.merge(state.relations, relations)
            {[], tx, %{state | tx: nil, relations: relations}}
    
          tx ->
            {[], nil, %{state | tx: tx}}
        end
      end
    
      defp standby_status_update(lsn) do
        [
          wal_recv: lsn + 1,
          wal_flush: lsn + 1,
          wal_apply: lsn + 1,
          system_clock: now(),
          reply: 0
        ]
        |> msg_standby_status_update()
        |> encode()
      end
    end
    

    CDC.Tx handles messages received within the transaction: begin, relation, insert/update/delete and commit.

    defmodule CDC.Tx do
      import Postgrex.PgOutput.Messages
      alias Postgrex.PgOutput.Lsn
    
      alias __MODULE__.Operation
    
      @type t :: %__MODULE__{
              operations: [Operation.t()],
              relations: map(),
              timestamp: term(),
              xid: pos_integer(),
              state: :begin | :commit,
              lsn: Lsn.t(),
              end_lsn: Lsn.t()
            }
    
      defstruct [
        :timestamp,
        :xid,
        :lsn,
        :end_lsn,
        relations: %{},
        operations: [],
        state: :begin,
        decode: true
      ]
    
      def new(opts \\ []) do
        struct(__MODULE__, opts)
      end
    
      def finalize(%__MODULE__{state: :commit, operations: ops} = tx) do
        %{tx | operations: Enum.reverse(ops)}
      end
    
      def finalize(%__MODULE__{} = tx), do: tx
    
      @spec build(t(), tuple()) :: t()
      def build(tx, msg_xlog_data(data: data)) do
        build(tx, data)
      end
    
      def build(tx, msg_begin(lsn: lsn, timestamp: ts, xid: xid)) do
        %{tx | lsn: lsn, timestamp: ts, xid: xid, state: :begin}
      end
    
      def build(%__MODULE__{state: :begin, relations: relations} = tx, msg_relation(id: id) = rel) do
        %{tx | relations: Map.put(relations, id, rel)}
      end
    
      def build(%__MODULE__{state: :begin, lsn: tx_lsn} = tx, msg_commit(lsn: lsn, end_lsn: end_lsn))
          when tx_lsn == lsn do
        %{tx | state: :commit, end_lsn: end_lsn}
      end
    
      def build(%__MODULE__{state: :begin} = builder, msg_insert(relation_id: id) = msg),
        do: build_op(builder, id, msg)
    
      def build(%__MODULE__{state: :begin} = builder, msg_update(relation_id: id) = msg),
        do: build_op(builder, id, msg)
    
      def build(%__MODULE__{state: :begin} = builder, msg_delete(relation_id: id) = msg),
        do: build_op(builder, id, msg)
    
      # skip unknown messages
      def build(%__MODULE__{} = tx, _msg), do: tx
    
      defp build_op(%__MODULE__{state: :begin, relations: rels, decode: decode} = tx, id, msg) do
        rel = Map.fetch!(rels, id)
        op = Operation.from_msg(msg, rel, decode)
    
        %{tx | operations: [op | tx.operations]}
      end
    end
    

    CDC.Tx.Operation handles the INSERT/UPDATE/DELETE messages and decodes the data by combining it with the relation:

    defmodule CDC.Tx.Operation do
      @moduledoc "Describes a change (INSERT, UPDATE, DELETE) within a transaction."
    
      import Postgrex.PgOutput.Messages
      alias Postgrex.PgOutput.Type, as: PgType
    
      @type t :: %__MODULE__{}
      defstruct [
        :type,
        :schema,
        :namespace,
        :table,
        :record,
        :old_record,
        :timestamp
      ]
    
      @spec from_msg(tuple(), tuple(), decode :: boolean()) :: t()
      def from_msg(
            msg_insert(data: data),
            msg_relation(columns: columns, namespace: ns, name: name),
            decode?
          ) do
        %__MODULE__{
          type: :insert,
          namespace: ns,
          schema: into_schema(columns),
          table: name,
          record: cast(data, columns, decode?),
          old_record: %{}
        }
      end
    
      def from_msg(
            msg_update(change_data: data, old_data: old_data),
            msg_relation(columns: columns, namespace: ns, name: name),
            decode?
          ) do
        %__MODULE__{
          type: :update,
          namespace: ns,
          table: name,
          schema: into_schema(columns),
          record: cast(data, columns, decode?),
          old_record: cast(old_data, columns, decode?)
        }
      end
    
      def from_msg(
            msg_delete(old_data: data),
            msg_relation(columns: columns, namespace: ns, name: name),
            decode?
          ) do
        %__MODULE__{
          type: :delete,
          namespace: ns,
          schema: into_schema(columns),
          table: name,
          record: %{},
          old_record: cast(data, columns, decode?)
        }
      end
    
      defp into_schema(columns) do
        for c <- columns do
          c
          |> column()
          |> Enum.into(%{})
        end
      end
    
      defp cast(data, columns, decode?) do
        Enum.zip_reduce([data, columns], %{}, fn [text, typeinfo], acc ->
          key = column(typeinfo, :name)
    
          value =
            if decode? do
              t =
                typeinfo
                |> column(:type)
                |> PgType.type_info()
    
              PgType.decode(text, t)
            else
              text
            end
    
          Map.put(acc, key, value)
        end)
      end
    end
    

    As before, the primary_keep_alive message with reply == 1 sends a standby_status_update. When we receive an xlog_data message, we create a new %Tx{} that we use to “build” the transaction until we receive a msg_commit, which marks the end of the transaction.

    Any insert, update or delete message creates a CDC.Tx.Operation in the transaction; each operation contains a relation_id, which is used to look up the relation from tx.relations.

    The operation together with the relation enables us to decode the data. Column and type information is retrieved from the relation and used to decode the values into Elixir terms.

    Once we are in a commit state, we merge Tx.relations with Protocol.relations; since a relation message will only be transmitted the first time a table is encountered during the connection session, Protocol.relations contains all the msg_relation messages that have been sent to us during the session.

    The CDC.Replication module now looks like this:

    defmodule CDC.Replication do
      use Postgrex.ReplicationConnection
    
      alias CDC.Protocol
    
      require Logger
    
      defstruct [
        :publications,
        :protocol,
        :slot,
        :state
      ]
    
      def start_link(opts) do
        conn_opts = [auto_reconnect: true]
        publications = opts[:publications] || raise ArgumentError, message: "`:publications` is required"
        slot = opts[:slot] || raise ArgumentError, message: "`:slot` is required"
    
        Postgrex.ReplicationConnection.start_link(
          __MODULE__,
          {slot, publications},
          conn_opts ++ opts
        )
      end
    
      @impl true
      def init({slot, pubs}) do
        {:ok,
         %__MODULE__{
           slot: slot,
           publications: pubs,
           protocol: Protocol.new()
         }}
      end
    
      @impl true
      def handle_connect(%__MODULE__{slot: slot} = state) do
        query = "CREATE_REPLICATION_SLOT #{slot} TEMPORARY LOGICAL pgoutput NOEXPORT_SNAPSHOT"
    
        Logger.debug("[create slot] query=#{query}")
    
        {:query, query, %{state | state: :create_slot}}
      end
    
      @impl true
      def handle_result(
            [%Postgrex.Result{} | _],
            %__MODULE__{state: :create_slot, publications: pubs, slot: slot} = state
          ) do
        opts = [proto_version: 1, publication_names: pubs]
    
        query = "START_REPLICATION SLOT #{slot} LOGICAL 0/0 #{escape_options(opts)}"
    
        Logger.debug("[start streaming] query=#{query}")
    
        {:stream, query, [], %{state | state: :streaming}}
      end
    
      @impl true
      def handle_data(msg, state) do
        {return_msgs, tx, protocol} = Protocol.handle_message(msg, state.protocol)
    
        if not is_nil(tx) do
          Logger.debug("Tx: #{inspect(tx, pretty: true)}")
        end
    
        {:noreply, return_msgs, %{state | protocol: protocol}}
      end
    
      
    defp escape_options([]),
        do: ""
    
      defp escape_options(opts) do
        parts =
          Enum.map_intersperse(opts, ", ", fn {k, v} -> [Atom.to_string(k), ?\s, escape_string(v)] end)
    
        [?\s, ?(, parts, ?)]
      end
    
      defp escape_string(value) do
        [?', :binary.replace(to_string(value), "'", "''", [:global]), ?']
      end
    end
    

    handle_data/2 calls Protocol.handle_message/2, which returns a tuple with three elements: {messages_to_send :: [binary()], complete_transaction :: CDC.Tx.t() | nil, CDC.Protocol.t()}

    For now we just inspect the transaction when it is emitted from Protocol.handle_message/2; let’s try it out:

    Interactive Elixir (1.14.0) - press Ctrl+C to exit (type h() ENTER for help)
    opts = [
      slot: "articles_slot_elixir",
      publications: ["articles_pub"],
      host: "localhost",
      database: "postgres",
      username: "postgres",
      password: "postgres",
      port: 5432,
    ]
    
    {:ok, _} = CDC.Replication.start_link(opts)
    {:ok, pid} = Postgrex.start_link(opts)
    
    insert_query = """
    INSERT INTO articles (title, description, body) 
    VALUES ('Postgres replication', 'Using logical replication', 'with Elixir!')
    """
    
    _ = Postgrex.query!(pid, insert_query, [])
      
    14:03:48.020 [debug] Tx: %CDC.Tx{
      timestamp: ~U[2022-10-31 13:03:48Z],
      xid: 494,
      lsn: {0, 22981920},
      end_lsn: nil,
      relations: %{
        16386 => {:msg_relation, 16386, "public", "articles", :default,
         [
           {:column, [:key], "id", :int4, -1},
           {:column, [], "title", :text, -1},
           {:column, [], "description", :text, -1},
           {:column, [], "body", :text, -1}
         ]}
      },
      operations: [
        %CDC.Tx.Operation{
          type: :insert,
          schema: [
            %{flags: [:key], modifier: -1, name: "id", type: :int4},
            %{flags: [], modifier: -1, name: "title", type: :text},
            %{flags: [], modifier: -1, name: "description", type: :text},
            %{flags: [], modifier: -1, name: "body", type: :text}
          ],
          namespace: "public",
          table: "articles",
          record: %{
            "body" => "with Elixir!",
            "description" => "Using logical replication",
            "id" => 6,
            "title" => "Postgres replication"
          },
          old_record: %{},
          timestamp: nil
        }
      ],
      state: :begin,
      decode: true
    }
    

    Each change in the transaction is stored in Tx.operations, and operation.record is the decoded row as a map.
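
    As an illustrative aside (not one of the article’s modules), a consumer receiving such a transaction could flatten it into simple per-row events:

    defmodule CDC.Events do
      # Turn a committed %CDC.Tx{} into simple per-row event tuples,
      # e.g. {:insert, "public", "articles", %{"id" => 6, ...}}.
      def from_tx(%CDC.Tx{operations: ops}) do
        Enum.map(ops, fn op ->
          {op.type, op.namespace, op.table, op.record}
        end)
      end
    end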

    Finally, let’s implement a way to subscribe to changes from CDC.Replication:

    defmodule CDC.Replication do
      use Postgrex.ReplicationConnection
    
      alias CDC.Protocol
    
      require Logger
    
      defstruct [
        :publications,
        :protocol,
        :slot,
        :state,
        subscribers: %{}
      ]
    
      def start_link(opts) do
        conn_opts = [auto_reconnect: true]
        publications = opts[:publications] || raise ArgumentError, message: "`:publications` is required"
        slot = opts[:slot] || raise ArgumentError, message: "`:slot` is required"
    
        Postgrex.ReplicationConnection.start_link(
          __MODULE__,
          {slot, publications},
          conn_opts ++ opts
        )
      end
    
      def subscribe(pid, opts \\ []) do
        Postgrex.ReplicationConnection.call(pid, :subscribe, Keyword.get(opts, :timeout, 5_000))
      end
    
      def unsubscribe(pid, ref, opts \\ []) do
        Postgrex.ReplicationConnection.call(
          pid,
          {:unsubscribe, ref},
          Keyword.get(opts, :timeout, 5_000)
        )
      end
    
      @impl true
      def init({slot, pubs}) do
        {:ok,
         %__MODULE__{
           slot: slot,
           publications: pubs,
           protocol: Protocol.new()
         }}
      end
    
      @impl true
      def handle_connect(%__MODULE__{slot: slot} = state) do
        query = "CREATE_REPLICATION_SLOT #{slot} TEMPORARY LOGICAL pgoutput NOEXPORT_SNAPSHOT"
    
        Logger.debug("[create slot] query=#{query}")
    
        {:query, query, %{state | state: :create_slot}}
      end
    
      @impl true
      def handle_result(
            [%Postgrex.Result{} | _],
            %__MODULE__{state: :create_slot, publications: pubs, slot: slot} = state
          ) do
        opts = [proto_version: 1, publication_names: pubs]
    
        query = "START_REPLICATION SLOT #{slot} LOGICAL 0/0 #{escape_options(opts)}"
    
        Logger.debug("[start streaming] query=#{query}")
    
        {:stream, query, [], %{state | state: :streaming}}
      end
    
      @impl true
      def handle_data(msg, state) do
        {return_msgs, tx, protocol} = Protocol.handle_message(msg, state.protocol)
    
        if not is_nil(tx) do
          notify(tx, state.subscribers)
        end
    
        {:noreply, return_msgs, %{state | protocol: protocol}}
      end
    
      # Replies must be sent using `reply/2`
      # https://hexdocs.pm/postgrex/Postgrex.ReplicationConnection.html#reply/2
      @impl true
      def handle_call(:subscribe, {pid, _} = from, state) do
        ref = Process.monitor(pid)
    
        state = put_in(state.subscribers[ref], pid)
    
        Postgrex.ReplicationConnection.reply(from, {:ok, ref})
    
        {:noreply, state}
      end
    
      def handle_call({:unsubscribe, ref}, from, state) do
        {reply, new_state} =
          case state.subscribers do
            %{^ref => _pid} ->
              Process.demonitor(ref, [:flush])
    
              {_, state} = pop_in(state.subscribers[ref])
              {:ok, state}
    
            _ ->
              {:error, state}
          end
    
        from && Postgrex.ReplicationConnection.reply(from, reply)
    
        {:noreply, new_state}
      end
    
      @impl true
      def handle_info({:DOWN, ref, :process, _, _}, state) do
        handle_call({:unsubscribe, ref}, nil, state)
      end
    
      defp notify(tx, subscribers) do
        for {ref, pid} <- subscribers do
          send(pid, {:notification, self(), ref, tx})
        end
    
        :ok
      end
    
      defp escape_options([]),
        do: ""
    
      defp escape_options(opts) do
        parts =
          Enum.map_intersperse(opts, ", ", fn {k, v} -> [Atom.to_string(k), ?\s, escape_string(v)] end)
    
        [?\s, ?(, parts, ?)]
      end
    
      defp escape_string(value) do
        [?', :binary.replace(to_string(value), "'", "''", [:global]), ?']
      end
    end
    
    

    And we can use it like this:

    opts = [
      slot: "articles_slot",
      publications: ["articles_pub"],
      host: "localhost",
      database: "postgres",
      username: "postgres",
      password: "postgres",
      port: 5432
    ]
    
    {:ok, pid} = CDC.Replication.start_link(opts)
    {:ok, pg_pid} = Postgrex.start_link(opts)
    {:ok, ref} = CDC.Replication.subscribe(pid)
    
    insert_query = """
    INSERT INTO articles (title, description, body) 
    VALUES ('Postgres replication', 'Using logical replication', 'with Elixir!')
    """
    
    _ = Postgrex.query!(pg_pid, insert_query, [])
    flush()
    
    {:notification, #PID<0.266.0>, #Reference<0.2499916608.3416784901.94813>,
     %CDC.Tx{
       timestamp: ~U[2022-10-31 13:26:35Z],
       xid: 495,
       lsn: {0, 22983536},
       end_lsn: nil,
       relations: %{
         16386 => {:msg_relation, 16386, "public", "articles", :default,
          [
            {:column, [:key], "id", :int4, -1},
            {:column, [], "title", :text, -1},
            {:column, [], "description", :text, -1},
            {:column, [], "body", :text, -1}
          ]}
       },
       operations: [
         %CDC.Tx.Operation{
           type: :insert,
           schema: [
             %{flags: [:key], modifier: -1, name: "id", type: :int4},
             %{flags: [], modifier: -1, name: "title", type: :text},
             %{flags: [], modifier: -1, name: "description", type: :text},
             %{flags: [], modifier: -1, name: "body", type: :text}
           ],
           namespace: "public",
           table: "articles",
           record: %{
             "body" => "with Elixir!",
             "description" => "Using logical replication",
             "id" => 7,
             "title" => "Postgres replication"
           },
           old_record: %{},
           timestamp: nil
         }
       ],
       state: :begin,
       decode: true
     }}
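
    Each subscriber receives these transactions as ordinary process messages of the form {:notification, pid, ref, tx}, so a consumer can be a plain GenServer. Below is a minimal sketch built on the CDC.Replication module above; the CDC.Consumer name and the logging it does are illustrative only, not part of the original code:

    defmodule CDC.Consumer do
      use GenServer

      require Logger

      def start_link(replication_pid) do
        GenServer.start_link(__MODULE__, replication_pid)
      end

      @impl true
      def init(replication_pid) do
        # Subscribe to the replication connection; the ref tags our notifications.
        {:ok, ref} = CDC.Replication.subscribe(replication_pid)
        {:ok, %{ref: ref}}
      end

      @impl true
      def handle_info({:notification, _from, ref, %CDC.Tx{} = tx}, %{ref: ref} = state) do
        # Log the type and target table of every operation in the transaction.
        Enum.each(tx.operations, fn op ->
          Logger.info("#{op.type} on #{op.namespace}.#{op.table}")
        end)

        {:noreply, state}
      end
    end

    A consumer like this could be started with CDC.Consumer.start_link(pid) alongside (or instead of) the subscribe/flush calls shown above.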
    

    Conclusion

    If you are looking for a way to capture changes from your database with minimal changes to your existing setup, Change Data Capture is definitely worth considering. With Elixir and postgrex we have implemented a mini Debezium in ~400 LOC. The full source is available here.

    If you need help with your Elixir implementation, our world-leading team of experts is always here to help. Contact us today to find out how we can help you.

    The post Captura de datos con Postgres y Elixir appeared first on Erlang Solutions .

    • chevron_right

      Ignite Realtime Blog: Spark 3.0.2 Released

      news.movim.eu / PlanetJabber · Friday, 31 March, 2023 - 18:16 · 1 minute

    The Ignite Realtime community is happy to announce the availability of Spark version 3.0.2

    The release contains bug fixes and updates two plugins, Translator and Roar.

    Many Spark translations are incomplete. Please help us translate Spark.

    A full list of changes can be found in the changelog.

    We encourage users and developers to get involved with the Spark project by providing feedback in the forums or submitting pull requests on our GitHub page.

    You can download Spark from the Downloads page. Below are the sha256 checksums:

    db7febafd05a064ffed4800066d11d0a2d27288aa3c0b174ed21a20672ab4669 *spark_3_0_2-64bit.exe
    647712b43942f7dc399901c9078ea1de7fcca14de3b5898bc3bdeee7751cf1b3 *spark_3_0_2-64bit.msi
    c1415417081095656c5fd7eb42e09ac6168f550af3c5254eb35506a404283a3c *spark_3_0_2-arm.exe
    78994eaa31d6c074bc72b9ac7a1f38f63488f9272c1a8870524f986dda618b6b *spark_3_0_2-with-jre.dmg
    ef3ba8eef5b88edc5e4ce9e13e9fa41ef2fad136cc6b518c52da79051c2a7c39 *spark_3_0_2-with-jre.exe
    d18dd7613333647f3fae6728731df7e3ef957103a84686ce1bb7befb9d32285f *spark_3_0_2-with-jre.msi
    39b03b4f4363bc053d2c5ed0c7452f97ced38a1c307ab77ce7f8b0e36459f498 *spark_3_0_2.deb
    5bc3bf000b4fe8a49d424ea53ccd6b62dae9291e987394f08c62ff0d69f0aec9 *spark_3_0_2.dmg
    e511c10db5f72e8b311f79dc3ac810040d44c5488f606cb860d7250d1dcf3709 *spark_3_0_2.exe
    f54d3990dd0ca76a1ca447af0bef9b4244861c7da9f6ab38a755c9cc578344c8 *spark_3_0_2.msi
    fff7fa157fe187b5307ef2383c87b8193a0416ffe41ffcb2ea0b0e6672a917f9 *spark_3_0_2.rpm
    a5cf06ccbe82dc308f2d2493bc0d89f552729fb993af392003263f0cd9caac16 *spark_3_0_2.sh
    03eae1e4e88fdc303752f3b5c0ef8bb00653cfdf28ee964699bba892e45110e4 *spark_3_0_2.tar.gz
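
    As a quick sanity check after downloading, the published checksum can be compared against a locally computed one. Here is a small sketch in Elixir (the file name and expected value are taken from the spark_3_0_2.tar.gz entry above):

    # Compare a downloaded installer against its published SHA-256 checksum.
    expected = "03eae1e4e88fdc303752f3b5c0ef8bb00653cfdf28ee964699bba892e45110e4"

    actual =
      "spark_3_0_2.tar.gz"
      |> File.read!()
      |> then(&:crypto.hash(:sha256, &1))
      |> Base.encode16(case: :lower)

    IO.puts(if actual == expected, do: "checksum OK", else: "checksum MISMATCH")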
    

    For other release announcements and news follow us on Twitter and Mastodon .

    1 post - 1 participant

    Read full topic

    • chevron_right

      Erlang Solutions: 5 Key Tech Priorities for Fintech Leaders in 2023

      news.movim.eu / PlanetJabber · Thursday, 30 March, 2023 - 10:09 · 6 minutes

    The fintech industry is a major disruptor. Each year, it impacts how consumers interact with financial companies and brings new and innovative means to meet ever-growing customer expectations and occupy market space.

    As a business owner or executive in this space, you have no choice but to stay on top of your game to increase efficiency.

    In simpler terms, if your business doesn’t scale, it could fail.

    That might sound a tad extreme, but you’re dealing in a market that is set to be worth $698.48 billion by 2030. Understanding the trends and focuses of the time helps you know where you’re headed in order to remain competitive.

    So let’s talk about these trends that drive the market.

    Embedded finance

    The embedded finance market is expected to see massive growth this year. The market has been growing at a rapid pace since 2020. Fintech companies have been steadily outpacing traditional banking services when it comes to gaining the trust of consumers.

    According to our latest report, searches for the term have increased by a staggering 488% in the last five years.

    Embedded finance solutions could make up 50% of total banking revenue streams in the near future. There is also a significant market for embedded financial products in the areas of deposit accounts, payments, insurance, and lending.

    Source: McKinsey analysis

    Arguably the greatest benefit of embedded finance is that, under the direction of inclusive fintech startups, it has the potential to empower customers who were previously excluded from the conventional financial industry. Similarly, emerging markets can provide a less stifling environment with lower prices and a larger customer base, which would further encourage innovation.

    We’re also likely to see traditional financial services and fintech firms, such as banks and payment processors, collaborating more closely to adopt embedded finance. This could result in payment fintechs providing more inclusive services tailored to various business models, unlocking more potential while banks provide the fundamental infrastructure.

    Embedded finance solutions will place much greater emphasis on technological advantages and operational capability to address your business’s current issues. Think risk and compliance management. Innovations like distributed computing and artificial intelligence (AI) will have multiple effects across all businesses, strengthening their ability to embrace embedded finance.

    AI

    Financial institutions are continuing to adopt AI and machine learning; the industry is set to see projected savings of up to 22% by the year 2030.

    How is this possible? AI-driven chatbots and assistants can make the customer experience very efficient. They can answer customer questions, monitor spending and tailor product recommendations based directly on customer preferences.

    While chatbots do not quite replicate the human experience, their growth across the market means we can expect to see even more of them in 2023.

    Source: Adobe

    Think about the price comparison sites you use when looking for insurance or travel rates. These services have been made possible through the application of Natural Language Processing (NLP), which makes it possible to take payments and offer a personalised service at any given time.

    A key element of AI technology is its ability to predict human interactions, combining the best of two worlds: intelligence and behavioural finance. What is behavioural finance? It looks at how human psychology influences economic markets, which in turn impacts market outcomes. It is considered the future of fintech, supported by the growth of data analytics and the large amounts of consumer data now available. This data, combined with AI algorithms, makes it possible to provide such personalised services.

    The industry is starting to recognise the goldmine that is AI. It doesn’t just drive efficiency but also allows businesses to be smarter in their approach to interacting with customers in the market.

    Blockchain

    Blockchain is one of the most exciting trends in fintech today. It addresses the problem of insecure, expensive fund transfer processes, offering high-speed transactions at a much lower cost.

    It works as a digital ledger that verifies, records, and facilitates various types of transactions. Its major selling point is security and autonomy for customers, as businesses and individuals can safely transfer digital assets without relying on a central authority or financial institutions.

    But this isn’t the only thing that appeals to users.

    Blockchain is used in various fintech applications, such as:

    • Decentralised finance (DeFi): creates decentralised financial services, for example borrowing, health insurance and lending services
    • Digital assets: provides a secure, transparent method of storing and trading digital assets, for example NFTs and cryptocurrencies
    • Cross-border payment services: enables secure and fast cross-border payments
    • Supply chain management: provides transparent and secure methods of tracking goods and assets within supply chain networks
    • Identity management: creates decentralised identity systems, offering increased security and privacy

    As blockchain eliminates the need for traditional banking intermediaries, it makes it easier for customers to use these financial offerings. This is particularly significant for developing countries, where access to traditional banks can sometimes be challenging.

    Customer Experience

    We’ve spoken a lot about customer experience expectations already, but it is integral to the future of fintech services. After all, the focus will always be on the people your business serves.

    Banks that consistently optimise the customer experience are set to grow 3.2x faster than their competitors.

    It’s up to businesses to provide seamless, user-friendly experiences to attract and retain their customers. This means focusing investment on user-centred design, more personalised services and omnichannel support.

    Put yourself in your customers’ shoes to understand their pain points, and use that data to create customised solutions that far exceed expectations.

    Sustainability and ESG

    Any forward-thinking business will have Environmental, Social and Corporate Governance (ESG) and sustainability at the heart of its plans this year.

    Customers, investors and employees alike are keen to see businesses contribute to making society more sustainable. And with a net-zero goal to be met by 2050 (in the UK), organisations across the globe are under much greater scrutiny to implement supporting policies.

    According to FIS Global, “ESG is top of mind for financial services firms globally, with 60% of executives saying they are developing new ESG products and services.”

    So if your business isn’t environmentally friendly, it might have an impact on your customer base. Brands that care about the planet and show strategic planning are the ones set to thrive. If you don’t care, expect your customers to find a business that does.

    This increased interest in sustainability has led to many financial institutions offering a diverse portfolio of sustainable options, such as loans backed by environmental enterprises, for example investment in wind or solar farms. ESG bonds are also rising in popularity.

    A trend we’ll see this year is sustainable (green) financing, where financial regulations and products are designed to deliver environmentally sustainable outcomes.

    Green Financing will be a key Fintech trend that will aid the private sector in doing positive work for the environment. It will also encourage private-public alliances on financial mechanisms like green bonds.

    Looking ahead

    With the ongoing development of the Fintech industry, technological innovation must work in tandem with the economy’s call for greater financial inclusion.

    As firms across the globe embrace the potential of tech to champion the future, consider these trends and how (or if) your business is incorporating them.

    Want a more in-depth look into fintech themes and how they should inform your tech strategy and decision-making? You can download our latest whitepaper to hear from the experts.

    Our team is also on hand if you’d like a chat about your fintech project or any other projects you might have. Don’t hesitate to contact us.

    The post 5 Key Tech Priorities for Fintech Leaders in 2023 appeared first on Erlang Solutions .

    • wifi_tethering open_in_new

      This post is public

      www.erlang-solutions.com /blog/5-key-tech-priorities-for-fintech-leaders-in-2023/