      Isode: M-Link 19.4 Limited Release

      news.movim.eu / PlanetJabber · Thursday, 22 June, 2023 - 16:33 · 9 minutes

    M-Link 19.4 is a very significant update. It provides the M-Link User Server product, along with M-Link MU Server, M-Link MU Gateway and M-Link Edge. It does not provide M-Link IRC Gateway, which remains M-Link 17.0 only.

    M-Link 19.4 Limited Release is provided ahead of the full M-Link 19.4 release. It is fully supported by Isode for production deployment. There is one significant difference from a standard Isode release:

    • Updates to M-Link 19.4 Limited Release will include additional functionality. This contrasts with standard Isode releases, where updates are “bug fix only”. There will be a series of updates, culminating in the full M-Link 19.4 release.

    Goals

    There are three reasons that this approach is being taken:

    1. To provide a preview for those interested to look at the new capabilities of M-Link 19.4.
    2. To enable production deployment of M-Link 19.4 ahead of full release for customers who do not need all of the features of the full M-Link 19.4 release. M-Link 19.4 Limited Release provides ample functionality for a baseline XMPP user service.
    3. To enable customer review of what will be in M-Link 19.4 full release. We are planning to not provide all M-Link 17.0 capabilities in M-Link 19.4 full release. A list is provided below of the current plan. Based on feedback, we may bring more functionality into M-Link 19.4 full release. There is a trade-off between functionality and shipping date, which we will review with customers.

    Benefits

    M-Link 19.4 User Server and M-Link 19.4 MU Server offer significant benefits over M-Link 17.0:

    • M-Link 19.4 is fully Web managed, and M-Link Console is no longer used. This is the most visible difference relative to M-Link 17.0. This enables management without installing anything on the management client. It is helpful for deployments also using Web management in M-Link Edge and M-Link MU Gateway (using either 19.3 or 19.4 versions).
    • Flexible link handling, as provided previously in M-Link 19.3:
    1. Multiple links may be established with a peer.  These links may be prioritized, so that for example a SATCOM link will be used by default with fall back to HF in the event of primary link failure.  Fall forward is also supported, so that the SATCOM link is monitored and traffic will revert when it becomes available again.
    2. Automatic closure of idle remote peer sessions after a configurable period.
    3. Support for inbound only links, primarily to support Icon-Topo.
    4. “Whitespace pings” to X2X (XEP-0361) sessions to improve failover after connectivity failures.
    • M-Link MU Server allows the HF Radio improvements of M-Link 19.3 MU Gateway to be used in a single server, rather than deploying M-Link 19.3 MU Gateway plus M-Link 17.0 User Server.
    • The session monitoring improvements previously provided in M-Link 19.3:
    1. Shows sessions of each type (S2S, X2X (XEP-0361), GCXP (M-Link Edge), and XEP-0365 (SLEP)) with information on direction and authentication
    2. Enable monitoring for selected sessions to show traffic, including ability to monitor session initialization.
    3. Statistics for sessions, including volume of data, and number of stanzas.
    4. Peer statistics, providing summary information and number of sessions for each peer.
    5. Statistics for the whole server, giving session information for the whole server.
    • Use of the capabilities previously provided in M-Link 19.3 to provide metrics on activity, feeding them into a Prometheus database using the statsd protocol. Prometheus is a widely used time series database for storing metrics: https://prometheus.io/ . Grafana is a graphing front end often used with Prometheus: https://grafana.com/ . Grafana provides dashboards to present information. Isode will make available sample Grafana dashboards on request to evaluators and customers. Metrics that can be presented include:
    1. Stanza count and rate for each peer
    2. Number of bytes sent and received for each link
    3. Number of sessions (C2S; S2S; GCXP; X2X; and XEP-0365 (SLEP))
    4. Message queue size for peers – important for low bandwidth links
    5. Message latency for each peer – important for high latency links

    M-Link 19.4 (Limited Release) Update Plan

    This section sets out the plan for providing updates to M-Link 19.4 (Limited Release).

    Update 1: FMUC

    Federated Multi-User Chat (FMUC) is being added as the first update, as we are aware of customers who would be in a position to deploy 19.4 once this capability is added.

    Update 2: Archive Administration

    The initial archive capability is fully functional. Administration adds a number of functions, including the ability to export, back up and truncate the archive. These capabilities are seen as important for operational deployment of archiving.

    Update 3: Clustering

    Clustering is the largest piece of work and the most significant omission from the limited release. It is expected to take a number of months' work to complete, based on core work already done.

    Update 4: Miscellaneous

    There are a number of smaller tasks that are seen as essential for the R19.4 final release, which will likely be provided incrementally. If any are seen as high priority for the preview, it would be possible to address them prior to the clustering update.

    • Server-side XEP-0346 Form Discovery and Publishing (FDP). This will enable third-party clients to use FDP.
    • Certificate checking using CRL (Certificate Revocation List) and OCSP (Online Certificate Status Protocol).
    • Complete implementation of XEP-0163 Personal Eventing Protocol (PEP). This is mostly complete in the initial preview.
    • Administration. The initial preview supports a single administrator with a password managed by M-Link.
      • Option for multiple administrators
      • Option for administrators specified and authenticated by LDAP
      • Administrators with per-domain administration rights
    • Additional audit logging, so that 19.4 will provide all of the log messages that 17.0 does.
    • XEP-0198 Stream Management support for C2S (the initial preview supports it for S2S and XEP-0361)
    • Web monitoring of C2S connections
    • XEP-0237 Roster versioning
    • C2S SASL EXTERNAL to provide client strong authentication

    Update 5: Upgrade

    This update will provide an upgrade path from M-Link 17.0. This capability is best developed last.

    Items Under Consideration for M-Link 19.4 Final Release

    There is a trade-off between functionality included and ship date. The following capabilities supported in 17.0 are under consideration for inclusion in M-Link 19.4. We ask for customer review of these items.

    Unless we get clear feedback requesting inclusion of these features, we will not include them in 19.4 and will consider them as desirable features for a subsequent release.

    • XEP-0114 Jabber Component Protocol, which allows use of third-party components.
    • SASL GSSAPI support to enable client authentication using Windows SSO
    • Archiving PubSub events (on user and pubsub domains)
    • Configuring what to archive per domain (R17 supports: nothing, events only (create, destroy, join, etc), events and messages)
    • Assigning MUC affiliations to groups, to simplify MUC rights administration
    • XEP-0227 configuration support to facilitate server migration
    • “Send Announcement” to broadcast information to all users
    • PDF/A archiving to provide a simple and stable long term archive

    Features post 19.4

    After 19.4 Final Release is made, future releases will be provided on the normal Isode model of major and minor releases with updates as bug fix only.

    Customer feedback is requested to help us set priorities for these subsequent releases.

    M-Link IRC Gateway

    M-Link IRC gateway is the only M-Link product not provided in M-Link 19.4. M-Link 17.0 IRC Gateway works well as an independent product.

    When we do a new version, we plan to provide important new functionality and not simply move the 17.0 functionality into a new release.

    New Capabilities

    The R19.4 User Server focus has been to deliver functionality equivalent to 17.0 on the R19.3 base. After 19.4 we are considering adding new features. Customers are invited to provide requirements and to comment on the priority of identified potential new capabilities set out here:

    • FMUC Clustering.  M-Link 19.4 (and 17.0) FMUC nodes cannot be clustered.
    • FMUC use with M-Link IRC Gateway. Currently, IRC cannot be used with FMUC. This would be helpful for IRC deployment.
    • STANAG 4774/4778 Confidentiality Labels.
    • RFC 7395 Websocket support as an alternative to BOSH.
    • OAuth (RFC 6749) support
    • Support of PKCS#11 Hardware Security Modules

    M-Link 17.0 User Server Features not in R19.4

    This section sets out a number of 17.0 features that are not planned for R19.4. If there is a clear customer requirement, we could include in R19.4. We are interested in customer input on priority of these features relative to each other and to other potential work.

    The following capabilities are seen to have clear benefit and Isode expects to add them.

    • Security Label and related configuration for individual MUC Rooms. In 19.4 this can be configured per MUC domain, so an equivalent capability can be obtained by using a MUC domain for each security setting required.
    • XEP-0012 Last Activity
    • Option to limit the number of concurrent sessions for a user
    • Management of PKI identities and certificates in R19.4 is done with PEM files, which is pragmatic.  Use of PKCS#10 Certificate Signing Requests is a more elegant approach.
    • XMPPS (port 5223) has clear security benefits. The 17.0 implementation has limited management which means that it is not generally useful in practice.

    The following capabilities are potentially desirable.  Customer feedback is sought.

    • XEP-0346 Form Discovery and Publishing (FDP)
      • WebApp viewer. We believe this would be better done in a client (e.g., Swift).
      • Gateway Java app, which converted new FDP forms to text and submitted them to a MUC.
      • Administration of FDP data on Server.
    • Per-Domain Search Settings, so that users can be constrained as to which domains can be searched
    • Internal Access Control Lists, for example to permit M-Link Administrators to edit user rosters.
    • Generic PubSub administration

    Features in M-Link 17.0 that Isode plans to drop

    There are a number of features provided in M-Link 17.0 that Isode has no current plans to provide going forward, either because they are provided by other mechanisms or because they are not seen to add value. These are listed here primarily to validate that no customers need these functions.

    • Schematron blocking rules
      • These have been replaced with XSLT transform rules
    • IQ delegation that enables selected stanzas sent to users to be instead processed by a component
    • XEP-0050 user preferences
      • This ad-hoc command allowed users to set preferences overriding server defaults, indicating which types of stanzas they wanted stored in offline storage and whether to auto-accept or auto-subscribe presence.
    • Management of XEP-0191 block lists by XEP-0050 ad hoc
      • Management of block lists, where desired, is expected to be performed by XEP-0191
    • XEP-0114 Component permissions
    • Pubsub presence, apart from that provided by PEP
    • XEP-0078 (non-SASL authentication)
      • This is obsolete
    • Some internal APIs that are no longer needed
    • Support for a security label protocol (reverse engineered by Isode) used in the obsolete CDCIE product
    • Security Checklist
      • M-Link Console had a security checklist which checked the configuration to see if there was anything insecure
      • This does not make sense in the context of the Web interface, which aims to flag security issues in the appropriate part of the UI
    • Conversion of file based archive to Wabac
      • M-Link Console had an option to “Convert and import file-based archive…” in the “Archive” menu
      • This was needed to support archive migration from older versions of M-Link
    • Pubsub-based statistics. M-Link 17.0 recorded statistics using PubSub. M-Link 19.4 does this using Prometheus, which can be integrated with Grafana dashboards.
    • XMPP-based group discovery – the ability to use XMPP discovery on an object and get a list of groups back.
    • XML-file archives
      • This was a write-only archive format used by older versions of M-Link before introduction of the current archive database. M-Link 17.0 continued to support this option.

    www.isode.com/company/wordpress/m-link-19-4-limited-release/

      Erlang Solutions: Lifting Your Loads for Maintainable Elixir Applications

      news.movim.eu / PlanetJabber · Wednesday, 14 June, 2023 - 08:55 · 9 minutes

    This post will discuss one particular aspect of designing Elixir applications using the Ecto library: separating data loading from using the data which is loaded. I will lay out the situations and present some solutions, including a new library called ecto_require_associations.

    Applications will differ, but let’s look at this example from the Plausible Analytics repo[1]:

    def plans_for(user) do
      user = Repo.preload(user, :subscription)

      # … other code …

      raw_plans =
        cond do
          contains?(v1_plans, user.subscription) ->
            v1_plans

          contains?(v2_plans, user.subscription) ->
            v2_plans

          contains?(v3_plans, user.subscription) ->
            v3_plans

          contains?(sandbox_plans, user.subscription) ->
            sandbox_plans

          true ->
            # … other code …
        end

      # … other code …
    end

    Here the subscription association is preloaded for the given user, which is then immediately used as part of the logic of plans_for/1 . I’d like to try to convince you that in cases like this, it would be better to “lift” that loading of data out of the function.

    The Ecto Library

    Ecto uses the “repository pattern”. With Elixir being a language drawing a lot of people from the Ruby community, this was a departure from the “active record pattern” used in the “ActiveRecord” library made popular by Ruby on Rails. That pattern uses classes to fetch data (e.g. User.find(id)) and object methods to update data (e.g. user.save). Since “encapsulation” is a fundamental aspect of object-oriented programming, it seems logical at first to hide away the database access in this way. Many found, however, that while encapsulation of business logic makes sense, also encapsulating the database logic often led to complexity when trying to control the lifecycle of a record.

    The repository pattern – as implemented by Ecto – requires explicit use of a “Repo” module for all queries to and from the database. With this separation of database access, Ecto.Schema modules can focus on defining application data and Ecto.Changeset modules can focus on validating and transforming application data.

    This separation is made very clear in Ecto’s README which states: “Ecto is also commonly used to map data from any source into Elixir structs, whether they are backed by a database or not.”

    Even if you’re fairly familiar with Ecto, I highly recommend watching Darin Wilson’s “Thinking in Ecto” presentation for a really good overview of the hows and whys of Ecto.

    Ecto’s separation of a repository layer makes even more sense in the context of functional programming.  For context in discussing solutions to the problem presented above, it’s important to understand a bit about functional programming.

    One big idea in functional programming (or specifically “purely functional programming”) is the notion of minimising and isolating side-effects. A function with side-effects is one which interacts with some global state such as memory, a file, or a database. Side-effects are commonly thought of as changing global state, but reading global state means that the output of the function could change depending on when it’s run.

    What benefits does avoiding side-effects give us?

    • Separating database access from operations on data is a great separation of concerns which can lead to more modular and therefore more maintainable code.
    • Defining a function where the output depends completely on the input makes the function easier to understand and therefore easier to use and maintain.
    • Automated tests for functions without side-effects can be much simpler because you don't need to set up external state.

    A Solution

    A first approach to lifting the Repo.preload in the example above would be to do just that. That would suddenly make the function “pure” and dependent only on the caller passing in a value for the user’s subscription field. The problem comes when the person writing code which calls plans_for/1 forgets to preload. Since Ecto defaults associations on schema structs to have an %Ecto.Association.NotLoaded{} value, this approach would lead to a confusing error message like:

    (KeyError) key :paddle_plan_id not found in: #Ecto.Association.NotLoaded<association :subscription is not loaded>

    This is because the contains/2 function accesses subscription.paddle_plan_id .
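
    To make that “lifting” concrete, here is a minimal sketch (not the actual Plausible code) of what the pure function and its caller could look like once the caller owns the preload:

    def plans_for(user) do
      # No Repo.preload here: the function only reads the data it was given.
      # … same cond over user.subscription as in the original example …
    end

    # Caller site: load the data first, then call the pure function.
    user
    |> Repo.preload(:subscription)
    |> plans_for()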

    So, it would probably be better to explicitly look to see if the subscription is loaded. We could do this with pattern matching in an additional function definition:

    def plans_for(%{subscription: %Ecto.Association.NotLoaded{}}), do: raise "Expected subscription to be preloaded"

    Or, if we want to avoid referencing the Ecto.Association.NotLoaded module in your application’s code, there’s even a function Ecto provides to allow you to check at runtime:

    def plans_for(user) do
      if !Ecto.assoc_loaded?(user.subscription) do
        raise "Expected subscription to be preloaded"
      end

      # …

    This can get repetitive and potentially error-prone if you have a larger set of associations that you would like your function to depend on. I’ve created a small library called ecto_require_associations to take care of the details for you. If you’d like to load multiple associations you can use the same syntax used by Ecto’s preload function:

    def plans_for(user) do
      EctoRequireAssociations.ensure!(user, [:subscriptions, site_memberships: :site])
      # …

    The above call would check:

    • If the subscriptions association has been loaded for the user
    • If the site_memberships association has been loaded for the user
    • If the site association has been loaded on each site membership

    If, for example, one or more of the site memberships hasn’t been loaded then an exception is raised like:

    (ArgumentError) Expected association to be set: `site_memberships.site`

    It can even work on a list of records given to it, just like Repo.preload.
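
    For instance (a hypothetical call, reusing the ensure!/2 syntax shown above):

    users = MyApp.Repo.all(User)

    # Raises for the whole batch if any user is missing a required association.
    EctoRequireAssociations.ensure!(users, [:subscriptions, site_memberships: :site])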

    Going Too Far The Other Way

    Hopefully I’ve convinced you that the above approach can be helpful for creating more maintainable code. At the same time, I want to caution against another potential problem on the “other side”. Let’s say we have a function to get a user like this:

    defmodule MyApp.Users do
      def get_user(id) do
        MyApp.Repo.get(User, id)
        |> MyApp.Repo.preload([:api_keys, :subscriptions, site_memberships: :site])
      end

      def get_admins do
        from(user in User, where: user.is_admin)
        |> MyApp.Repo.all()
        |> MyApp.Repo.preload([:api_keys, :subscriptions, site_memberships: :site])
      end
    end

    When loading data to support pure functions, it could be tempting to load everything that might be needed by all functions which have a user argument. The risk then becomes one of loading too much data. Functions like get_user and get_admins are likely to be used all over your application, and generally you won't need all of the associations loaded. This is a scaling problem that isn't a problem until your application gets popular. One common pattern to solve this is to simply have a preloads argument:

    defmodule MyApp.Users do
      def get_user(id, preloads \\ []) do
        MyApp.Repo.get(User, id)
        |> MyApp.Repo.preload(preloads)
      end

      def get_admins(preloads \\ []) do
        from(user in User, where: user.is_admin)
        |> MyApp.Repo.all()
        |> MyApp.Repo.preload(preloads)
      end
    end

    # Usage

    MyApp.Users.get_admins([:api_keys])

    This does solve the problem and allows you to load associations only where you need them. I would say, however, that this code falls into the same trap as the Active Record library by intertwining database and application logic.

    The Ecto library, your schemas, and your associations aren’t secrets.  You absolutely should encapsulate things like your complex query logic, the details for how you calculate numbers, or the decisions you make based on data. But it’s fine to ask Ecto to preload the associations and let your query functions just do querying. This can give you a clean separation of concerns:

    defmodule MyApp.Users do
      def get_user(id), do: MyApp.Repo.get(User, id)

      def get_admins do
        from(user in User, where: user.is_admin)
        |> MyApp.Repo.all()
      end
    end

    # Usage:

    MyApp.Users.get_admins() |> MyApp.Repo.preload(:api_keys)

    That said, if you have associations you need for specific functions, you may want to create functions which can preload without the caller knowing the details. This saves repetition and helps clarify overlapping dependencies:

    defmodule MyApp.Users do
      # You can call `preload_for_access_check(user)` to load the required data
      def can_access?(user, resource) do
        EctoRequireAssociations.ensure!(user, [:roles, :permissions])
        # …
      end

      def preload_for_access_check(user) do
        MyApp.Repo.preload(user, [:roles, :permissions])
      end

      def preload_for_something_else(user) do
        MyApp.Repo.preload(user, [:roles, :regions])
      end
    end

    Remember that Repo.preload won’t load data if it’s already been set on the struct unless you specify force: true!
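
    For example, assuming a user whose subscription was already loaded earlier:

    user = MyApp.Repo.preload(user, :subscription)               # no-op: already loaded
    user = MyApp.Repo.preload(user, :subscription, force: true)  # re-fetches from the database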

    So if we shouldn’t put our data loading along with our business logic or with our queries, where should it go? The answer to this is fuzzier, but it makes sense to think about what parts of your code should exist as “coordination” points. Let’s talk about some good options:

    Phoenix Controllers, Channels, LiveViews, and Email handlers

    Controllers, LiveViews, and email handlers are all places where we render a template, and generally we are loading some sort of data to be able to do that. Channels and LiveViews are dealing with events which are also often a place where we’ll need to load data to provide some sort of update to a client. In both cases, this is the place where we know what data will be needed, so it makes sense to keep the responsibility of choosing what data to load in this code.
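
    As a sketch of that coordination role (the module and action names here are hypothetical, not from this post), a controller can own the preload and keep the context function pure:

    defmodule MyAppWeb.BillingController do
      use MyAppWeb, :controller

      def index(conn, _params) do
        # The controller knows what the template needs, so it decides what to load.
        user = MyApp.Repo.preload(conn.assigns.current_user, :subscription)

        render(conn, "index.html", plans: MyApp.Billing.plans_for(user))
      end
    end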

    Absinthe Resolvers

    Absinthe is a library for implementing a GraphQL API. Not only will you need to preload data; sometimes you may also use the Dataloader to efficiently load data outside of manual preloading. This highlights how loading of associated data is a separate concern from evaluating it.

    Scripts and Tasks

    Scripts and mix tasks are another place for coordinating loading and logic. This code might even be one-off and/or short-lived, so it may not even make sense to define functions inside of context modules. Depending on the importance and lifecycle of a script/task it could be that none of the above discussion is applicable.

    High Level Context Functions

    The discussion above suggests pushing loading logic up and out of context-type modules. However, if you have a high-level function which is an entrypoint into some complex code, then it may make sense to coordinate your loading and logic there. This is especially true if the function is used from multiple places in your application.

    Conclusion

    It seems like a small detail, but making more functions purely functional and isolating your database access can have compounding effects on the maintainability of your code. This can be especially true when the codebase is maintained by more than one person, making it easier for everybody to change the code without worrying about side-effects. Try it out and see!

    [1]: Please don’t see this as me picking on the Plausible Analytics folks in any way. I think that their project is great and the fact that they open-sourced it makes it a great resource for real-world examples like this one!

    The post Lifting Your Loads for Maintainable Elixir Applications appeared first on Erlang Solutions .

    www.erlang-solutions.com/blog/lifting-your-loads-for-maintainable-elixir-applications/

      JMP: JMP is Launched and Out of Beta

      news.movim.eu / PlanetJabber · Monday, 12 June, 2023 - 10:47 · 2 minutes

    JMP has been in beta for over six years, and today we are finally launching! With feedback and testing from thousands of users, our team has made improvements to billing, phone network compatibility, and also helped develop the Cheogram Android app which provides a smooth onboarding process, good Android integration, and phone-like UX for users of that platform. There is still a long road ahead of us, but with so much behind us we’re comfortable saying JMP is ready for launch, and that we know we can continue to work with our customers and our community for even better things in the future. Check out our launch on Product Hunt today as well!

    JMP’s pricing has always been “while in beta” so the first question this raises for many is: what will the price be now? The new monthly price for a customer’s first JMP phone number is now $4.99 USD / month ($6.59 CAD), but we are running a sale so that all customers will continue to pay beta pricing of $2.99 USD / month ($3.59 CAD) until the end of August. We are extending until that time the option for anyone who wishes to prepay for up to 3 years and lock in beta pricing. Contact support if you are interested in the prepay option. After August, all accounts who have not pre-paid will be put on the new plan with the new pricing. Those who do pre-pay won’t see their price increase until the end of the period they pre-paid for. The new plan will also include a multi-line discount, so second, third, etc. JMP phone numbers will be $2.45 USD / month ($3.45 CAD) when they are set to all bill from the same balance. The new plan will also finally have zero-rated toll-free calling. All other costs (per-minute costs, etc.) remain the same; see the pricing page for details.

    The account settings bot now has an option “Create a new phone number linked to this balance” so that you can get new numbers using your existing account credit and linked for billing to the same balance without support intervention.

    Thanks so much to all of you who have helped us get this far. There is lots more exciting stuff coming this year, and we are so thankful to have such a supportive community along the way with us. Don’t forget we’ll be at FOSSY in July, and be sure to check out our launch on Product Hunt today as well.

    blog.jmp.chat/b/launch-2023

      Erlang Solutions: Sign up for the RabbitMQ Summit Waiting List

      news.movim.eu / PlanetJabber · Monday, 12 June, 2023 - 09:02 · 1 minute

    Mark your calendars! Very Early Bird tickets for the RabbitMQ Summit open on 15th June 2023. By joining the waiting list, you will receive exclusive access to the conference’s best-priced tickets.

    This is your chance to secure your spot at the RabbitMQ Summit at a discounted rate, allowing you to make the most of this incredible learning and networking opportunity.

    Event date and location

    Join us on 20th October 2023 at the Estrel Hotel in Berlin, Germany.

    About the RabbitMQ Summit

    The RabbitMQ Summit 2023 is an eagerly anticipated event that brings together software developers, system architects, and industry experts from around the world.

    RabbitMQ is utilised by more than 35,000 companies globally and is considered a top choice for implementing the Advanced Message Queuing Protocol (AMQP).

    By joining us, you’ll have the opportunity to learn from industry experts, who will provide valuable insights and share best practices for successfully integrating RabbitMQ into your organisation.

    We’re looking forward to creative talks, great vibes and, most importantly, meeting fellow RabbitMQ enthusiasts!

    Ticket details

    You can join the waitlist and sign up for exclusive access here.

    For more information

    You can visit the RabbitMQ Summit website for more details and follow our Twitter account for the latest updates.

    The post Sign up for the RabbitMQ Summit Waiting List appeared first on Erlang Solutions .

    www.erlang-solutions.com/blog/sign-up-for-the-rabbitmq-summit-waiting-list/

      Erlang Solutions: Call for Speakers at the RabbitMQ Summit

      news.movim.eu / PlanetJabber · Thursday, 8 June, 2023 - 08:22 · 1 minute

    Are you a user, operator, developer, engineer, or simply someone with interesting user stories to tell about RabbitMQ? If so, we have some exciting news for you! The RabbitMQ Summit 2023 is just around the corner, and we are thrilled to invite you to submit your talks for this highly anticipated event.

    The RabbitMQ Summit brings together a vibrant, diverse community of enthusiasts from all corners of the world. It’s a unique opportunity to connect, learn, and exchange ideas with fellow RabbitMQ users, experts, and developers.

    This year, we have curated two exciting themes that encompass the breadth and depth of RabbitMQ usage and its impact across industries. We invite you to consider these themes when submitting your proposals:

    1. Features Theme – in-depth technical explanations of RabbitMQ features, e.g. how to use RabbitMQ in an economical fashion explained by developers, or a roadmap of how to use streams.

    2. Industry Theme – how RabbitMQ is used in industry, e.g. in an IoT space, troubleshooting, etc.

    Think creatively and propose talks that bring unique perspectives, cutting-edge techniques, and actionable insights to the table. Don’t hesitate to submit your ideas – every story counts.

    Event date and location

    20th October 2023 at the Estrel Hotel in Berlin, Germany.

    Submission deadlines

    All talks should be submitted by 23:59 BST on 11 June 2023. You will be informed about your submission status by 31 July 2023.

    You can submit your talks here.

    Ticket Details

    There is no event fee for speakers. For attendees, public Early Bird tickets will be available on 15th June 2023. You can sign up for the waiting list here.

    For more information

    You can visit the RabbitMQ Summit website for more details and follow our Twitter account for the latest updates.

    The post Call for Speakers at the RabbitMQ Summit appeared first on Erlang Solutions .

    www.erlang-solutions.com/blog/call-for-speakers-at-the-rabbitmq-summit/

      Erlang Solutions: How ChatGPT improved my Elixir code. Some hacks are included.

      news.movim.eu / PlanetJabber · Thursday, 1 June, 2023 - 10:53 · 3 minutes

    I have been working as an Elixir developer for quite some time and recently came across the ChatGPT model. I want to share some of my experience interacting with it.

    During my leisure hours, I am developing an open-source Elixir initiative, Crawly , that facilitates the extraction of structured data from the internet.

    Here I want to demonstrate how ChatGPT helped me to improve the code. Let me show you some prompts I have made and some outputs I’ve got back.

    Writing documentation for the function

    I like documentation! It’s so nice when the code you’re working with is documented! What I don’t like is writing it. It’s hard, and it’s even harder to keep it updated.

    So let’s consider the following example from our codebase:

    My prompt:

    Reasonably good for my taste. Here is how it looks after a few clarifications:

    Improving my functions

    Let’s look at other examples of the code I have created! Here is an example of code I am not super proud of. Please don’t throw stones at me; I know I should not create atoms dynamically. Well, you know how it can be: you want to have something, but you don’t have enough time, you plan to improve things later, and that later never happens. I think every human developer knows that story.

    Suggesting tests for the code

    Writing tests is a must-have in the modern world. TDD makes software development better. As a human, I like to have tests. However, I find it challenging to write them. Can we use the machine to do that? Let’s see!

    Regarding tests, the response code needs to be more accurate. It might be possible to improve the results by feeding ChatGPT with Elixir test examples, but these tests are good as hints.

    Creating Swagger documentation from JSON Schema

    I like using JSON schema for validating HTTP requests. The question is — is that possible to convert a given JSON schema into a Swagger description so we don’t have to do double work?

    Let’s see. Fortunately, I have a bit of JSON schema-type data in Crawly. Let’s see if we can convert it into Swagger.

    Well, it turns out it’s not a full Swagger format, which is a bit of a shame. Can we improve it a bit so it can be used immediately? Yes, let’s prompt-engineer it a bit:

    Now I can preview the result in the real Swagger editor, and I think I like it:

    Conclusion

    What can I say? ChatGPT is a perfect assistant that makes development way less boring for regular people.

    Using ChatGPT when programming with Elixir can bring several advantages. One of the most significant advantages is that it can provide quick and accurate responses to various programming queries, including syntax and documentation. This can help programmers save time and improve their productivity.

    Additionally, ChatGPT can offer personalised and adaptive learning experiences based on individual programmers’ skill levels and preferences. This can help programmers learn Elixir more efficiently and effectively.

    The use of AI and NLP in programming is likely to become more widespread in the near future. As AI technology continues to improve, it may become more integrated into the programming process, helping programmers to automate repetitive tasks and improve their overall productivity. ChatGPT is just one example of how AI is already being used to improve the programming experience, and more innovations in this field will likely emerge in the future.

    Honestly, I am super excited and would love to explore the possibilities of AI even further!

    The post How ChatGPT improved my Elixir code. Some hacks are included. appeared first on Erlang Solutions .

    www.erlang-solutions.com/blog/how-chatgpt-improved-my-elixir-code-some-hacks-are-included/

      Erlang Solutions: Here’s why you should consider investing in RabbitMQ during a recession

      news.movim.eu / PlanetJabber · Thursday, 25 May, 2023 - 08:11 · 6 minutes

    Europe and the US are leading the way in the forecasted recession for 2023, due to persistently high inflation and increasing interest rates. With minimal projected GDP growth, modern technologies can play a crucial role in reducing the impact of economic downturns.

    As caution looms, it can be tempting to rein in your investment. Your initial thought is to balance the books and save money where you can, which, on the surface, sounds like the sensible thing to do.

    But consider this: investing in technology, specifically RabbitMQ, in the face of recession can actually reduce the cost of doing business and save you money over time. In fact, leading digital organisations that have prioritised this have a lower cost of doing business and create a significant competitive edge in the current inflationary environment.

    Carving out a tech budget can signify that you’re conscious of the current challenges while being hopeful for the future. Being mindful of the long game can allow businesses to thrive as well as survive.

    So you really want to recession-proof your business? We’ll tell you why it’s best to invest.

    Executives are already planning tech investments

    In a survey compiled by Qualtrics, a majority of C-suite level executives (85%) expect spending to increase this year, as their companies go through both a workforce and a digital transformation, with tech priorities as follows:

    1. Tech modernisation (73%)
    2. Cybersecurity (73%)
    3. Staffing/retaining workforce (72%)
    4. Training/reskilling workforce (70%)
    5. Remote work support (70%)

    This is a major shift from previous cycles we’ve seen. In the past, tech investments have been one of the first on the chopping block. But now, businesses have taken note that investing in technology is not a cost, but a business driver and differentiator.

    The game of business survival has changed

    Prior to the pandemic in 2020, digital capabilities were not considered a high priority by businesses when considering preparations for a potential economic downturn. Fast forward to today, and we can see that COVID-19 has expedited digital change for businesses.

    Companies have accelerated the digitisation of customer and supply-chain interactions by three to four years, and a sudden change in consumer habits has steered the ship.

    Consumers made a drastic shift towards online channels, which meant that industries had to ramp up their digital offerings fast. Rates of adoption during this time were light years ahead of previous years.

    Fast forward to today, and 48% of consumers say their buying habits have permanently changed since the pandemic. Businesses have to maintain their digital presence in order to serve a growing market.

    Adapting to demand

    Technologies such as RabbitMQ have been integral to support the high volume of incoming requests and efficiently accommodate digital acceleration.

    Consider this: your website might be selling tickets to an event, and you’re anticipating a record number of visitors. Your main consideration is ensuring satisfactory customer wait times. So what’s the solution? A faster operating system.

    RabbitMQ technology helps with customer wait times through its ability to handle high volumes of incoming requests. These requests are distributed to various parts of the system, ensuring that each one is processed promptly. By effectively handling the increased load, RabbitMQ helps businesses maintain shorter wait times, accommodating increased or decreased message volumes with ease, as business demands fluctuate during a recession.

    Sustainability is steering the tech agenda

    Do you think businesses are slowing down on ESG during a recession? Think again.

    A recent study revealed nearly half of CFOs plan to increase investment in ESG initiatives this year despite high inflation, ongoing supply chain challenges and the risk of recession.

    According to the World Economic Forum, 90% of executives believe sustainability is important. However, only 60% of organisations have sustainability strategies.

    Addressing ESG initiatives during a recession requires a thoughtful approach from C-Suite leaders and should be apparent across your tech offerings. While not all software is inherently sustainable, tools like RabbitMQ can support sustainable practices.

    For example, organisations can further enhance the sustainability of their messaging infrastructure by running RabbitMQ on energy-efficient hardware, leveraging cloud services with renewable energy commitments, or optimising their message routing algorithms to minimise resource usage. These additional considerations can contribute to a more sustainable use of RabbitMQ as part of a broader sustainability framework within a business.

    Providing professional, reliable, and cutting-edge products and services with ESG values can bring meaningful change to people’s lives. And building social responsibility is critical to support the entire ecosystem.

    Tech built for business operations

    IT was once run by very centralised systems. But today, most software used by businesses is outsourced, meaning the services are disjointed and require training to understand how to manage each one individually, which can be costly.

    SOA stands for Service-Oriented Architecture. In relation to RabbitMQ, SOA refers to a design approach where software applications are built as a collection of individual services that communicate with each other to perform specific tasks. RabbitMQ can play a role in facilitating communication between these services.

    [Figure: An Empirical Analysis on the Microservices Architecture Pattern and Service-Oriented Architecture]

    Imagine your company’s different departments- sales, inventory, shipping, HR etc. Each department has its own software application that handles specific tasks. In a traditional architecture, these applications might be tightly coupled, meaning they directly interact with each other.

    With a service-oriented architecture, the applications are designed as separate services. Each service focuses on a specific function, like processing sales orders, managing inventory, or tracking shipments. These services can communicate with each other by sending messages.

    RabbitMQ acts as a messenger between these services. When one service needs to send information or request something from another service, it can publish a message to RabbitMQ. The receiving service listens and responds accordingly. This decoupled approach allows services to interact without having direct knowledge of each other, promoting flexibility and scalability.

    For example, when a sales application receives a new order, it can publish a message to RabbitMQ containing the order details. The inventory service, subscribed to the relevant message queue, receives the order information, updates the inventory, and sends a confirmation back through RabbitMQ. The shipping service can then listen for shipping requests and initiate the shipping process.
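
    As a rough sketch of that flow (the queue name, payload, and the use of the community Elixir amqp client are illustrative assumptions, not details from this article):

    {:ok, conn} = AMQP.Connection.open("amqp://localhost")
    {:ok, chan} = AMQP.Channel.open(conn)
    {:ok, _}    = AMQP.Queue.declare(chan, "orders", durable: true)

    # Sales service: publish the new order to the default exchange, routed to the "orders" queue.
    :ok = AMQP.Basic.publish(chan, "", "orders", ~s({"order_id": 123, "sku": "ABC-1", "qty": 2}))

    # Inventory service: subscribe and handle deliveries as they arrive.
    {:ok, _tag} = AMQP.Basic.consume(chan, "orders", nil, no_ack: true)

    receive do
      {:basic_deliver, payload, _meta} ->
        IO.puts("Inventory received order: #{payload}")
    end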

    During a recession, SOA with RabbitMQ enables a more modular and flexible system, where services can be developed, deployed, and scaled independently. It simplifies communication between different components, promotes loose coupling, and allows for efficient integration and the ability to quickly adapt to changing market conditions of a recession.

    Tech supports recession-proofing goals

    Investment in innovative digital initiatives is indispensable for a constantly evolving digital transformation journey, especially during market shifts. Programmes such as RabbitMQ provide organisations with the flexibility required to swiftly shift to new solutions, responding to market changes more quickly.

    To conclude, although recessions can be intimidating, it is crucial for businesses to embrace technology as a means to prepare themselves and ensure a positive customer experience. Leveraging technological solutions allows businesses to stay resilient, adapt to changing market dynamics, and position themselves for long-term success, even in challenging economic times.

    If you’d like to talk about your current tech space, feel free to drop us a line .

    The post Here’s why you should consider investing in RabbitMQ during a recession appeared first on Erlang Solutions .

    www.erlang-solutions.com/blog/heres-why-you-should-consider-investing-in-rabbitmq-during-a-recession/

      Ignite Realtime Blog: CVE-2023-32315: Openfire Administration Console authentication bypass

      news.movim.eu / PlanetJabber · Tuesday, 23 May, 2023 - 16:47

    We’ve had an important security issue reported that affects all recent versions of Openfire. We’ve fixed it in the newly published 4.6.8 and 4.7.5 releases. We recommend people upgrade as soon as possible. More info, including mitigations for those who cannot upgrade quickly, is available in this security advisory: CVE-2023-32315: Administration Console authentication bypass.

    Related to this issue, we have also made available updates to three of our plugins:

    If you’re using these plugins, it is recommended to update them immediately.

    When you are using the REST API plugin, or any proprietary plugins, updating Openfire might affect the availability of their functionality. Please find workarounds in the security advisory.

    If you have any questions, please stop by our community forum or our live groupchat.

    For other release announcements and news follow us on Twitter and Mastodon .


    discourse.igniterealtime.org/t/cve-2023-32315-openfire-administration-console-authentication-bypass/92869

      Ignite Realtime Blog: Openfire 4.7.5 Release

      news.movim.eu / PlanetJabber · Tuesday, 23 May, 2023 - 16:42

    The Ignite Realtime Community is happy to announce the 4.7.5 release of Openfire!

    This release primarily addresses the issue that is the subject of security advisory CVE-2023-32315, but also pulls in a number of improvements and bugfixes.

    You can find download artifacts here, with the following sha256sum values:

    f70faf11b4798fefb26a20f7d60288d275a6d568db78faf79a4194cbae72eab4  openfire-4.7.5-1.noarch.rpm
    d1283d417dacb74d67334c06420679aae62d088bd3439c8135ccfc272fd5b95b  openfire_4.7.5_all.deb
    60d8efb96a1891cda2deac2cda9808cf6adec259f090d3a7fb2b7ca21484d75b  openfire_4_7_5.exe
    98d36c2318706c545345274234e2f5ccbf0f72f7801133effea342e2776b8bb0  openfire_4_7_5.tar.gz
    e95348be890aff64a7447295ab18eebb29db4bdc346b802df0c878ebbbf1d18e  openfire_4_7_5_x64.exe
    a5bb8c9b944b915bdf7ecf92cd2a689d0cf09e88bfc2df960f38000f6b788194  openfire_4_7_5.zip
    

    If you have any questions, please stop by our community forum or our live groupchat. We are always looking for volunteers interested in helping out with Openfire development!

    For other release announcements and news follow us on Twitter and Mastodon .
