
      The XMPP Standards Foundation: The XMPP Newsletter April 2022

      news.movim.eu / PlanetJabber • 5 May, 2022 • 8 minutes

    Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of April 2022.

    Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, especially throughout the current situation, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

    Newsletter translations

    This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

    XSF Announcements

    XSF and Google Summer of Code 2022

    • The XSF has been accepted as hosting organization at Google Summer of Code 2022 (GSoC) .
• XMPP Newsletter via mail: Due to privacy concerns we migrated away from Tinyletter to a mailing list on our own server. It is a read-only list; once you subscribe, you will receive the XMPP Newsletter on a monthly basis.
    • By the way, have you checked our nice XMPP RFC page ? :-)

    XSF fiscal hosting projects

    The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective . For more information, see the announcement blog post . Current projects:

    XMPP Community Projects

    A new community space for XMPP related projects and individuals has been created in the Fediverse! Join us on our new Lemmy instance and chat about all XMPP things!

    Lemmy

    Are you looking for an XMPP provider that suits you? There is a new website based on the data of XMPP Providers . XMPP Providers has a curated list of providers and tools for filtering and creating badges for them. The machine-readable list of providers can be integrated in XMPP clients to simplify the registration. You can help by improving your website (as a provider), by automating the manual tasks (as a developer), and by adding new providers to the list (as an interested contributor). Read the first blog post !

    XMPP Providers

    Events

    Talks

Thilo Molitor presented their new, even more privacy-friendly push design in Monal at the Berlin XMPP Meetup!

    Monal push

    Articles

    The Mellium Dev Communiqué for April 2022 has been released and can be found over on Open Collective .

    Maxime “pep.” Buquet wrote some thoughts regarding “Deal on Digital Markets Act: EU rules to ensure fair competition and more choice for users” in his Interoperability in a “Big Tech” world article. In a later article he describes part of his threat model , detailing how XMPP comes into play and proposing ways it could be improved.

German “Freie Messenger” shares some thoughts on interoperability and the Digital Markets Act (DMA). They also offer a comparison of “XMPP/Matrix”.

    Software news

    Clients and applications

BeagleIM 5.2 and SiskinIM 7.2 just got released with fixes for OMEMO encrypted messages in MUC channels, MUC participants disappearing randomly, and an issue with VoIP calls sending an incorrect payload during call negotiation.

    converse.js published version 9.1.0 . It comes with a new dark theme, several improvements for encryption (OMEMO), an improved stanza timeout, font icons, updated translations, and enhancements of the IndexedDB. Find more in the release notes .

    Gajim Development News : This month came with a lot of preparations for the release of Gajim 1.4 🚀 Gajim’s release pipeline has been improved in many ways, allowing us to make releases more frequently. Furthermore, April brought improvements for file previews on Windows.

    Go-sendxmpp version v0.4.0 with experimental Ox (OpenPGP for XMPP) support has been released.

    JMP offers international call rates based on a computing trie . There are also new commands and team members .

    Monal 5.1 has been released . This release brings OMEMO support in private group chats, communication notifications on iOS 15, and many improvements.

The PravApp project is a plan to get a lot of people from India to invest small amounts to run an interoperable XMPP-based messaging service that makes it easier to join and discover contacts, similar to the Quicksy app. Prav will be Free Software, which respects users' freedom. The service will be backed by a cooperative society in India to ensure democratic decision making in which users can take part as well. Users will control the privacy policy of the service.

    Psi+ 1.5.1619 (2022-04-09) has been released.

    Poezio 0.14 has been released alongside with multiple backend libraries. This new release brings in lots of bug fixes and small improvements. Big changes are coming, read more in the article.

    Poezio Stickers

    Profanity 0.12.1 has been released, which brings some bug fixes.

UWPX ships two small pre-release updates concerning a critical fix for a crash when trying to render an invalid user avatar, and issues with the Windows Store builds. Besides that, it also got a minor UI update this month.

    Servers

    Ignite Realtime Community:

• Version 9.1.0 release 1 of the Openfire inVerse plugin has been released, which enables deployment of the third-party Converse client in Openfire.
• Version 4.4.0 release 1 of the Openfire JSXC plugin has been released, which enables deployment of the third-party JSXC client in Openfire.
• Version 1.2.3 of the Openfire Message of the Day plugin has been released, and it ships with German translations for the admin console.
• Version 1.8.0 of the Openfire REST API plugin has been released, which adds new endpoints for readiness, liveness and cluster status.

    Libraries

    slixmpp 1.8.2 has been released. It fixes RFC3920 sessions, improves certificate errors handling, and adds a plugin for XEP-0454 (OMEMO media sharing).

    The mellium.im/xmpp library v0.21.2 has been released! Highlights include support for PEP Native Bookmarks , and entity capabilities . For more information, see the release announcement .

    Extensions and specifications

    Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001 , which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process . Communication around Standards and Extensions happens in the Standards Mailing List ( online archive ).

    By the way, xmpp.org features a new page about XMPP RFCs .

    Proposed

    The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

    New

    • No new XEPs this month.

    Deferred

    If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

    • No XEPs deferred this month.

    Updated

    • Version 0.4 of XEP-0356 (Privileged Entity)
      • Add “iq” privilege (necessary to implement XEPs such as Pubsub Account Management ( XEP-0376 )).
      • Roster pushes are now transmitted to privileged entity with “roster” permission of “get” or “both”. This can be disabled.
• Reformulate to specify that only initial presence and “unavailable” stanzas are transmitted with the “presence” permission.
      • Namespace bump. (jp)

    Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps to improve the XEP before returning it to the Council for advancement to Stable.

    • No Last Call this month.

    Stable

    • No XEPs advanced to Stable this month.

    Deprecated

• No XEPs deprecated this month.

    Call for Experience

A Call For Experience, like a Last Call, is an explicit call for comments, but in this case it is mostly directed at people who have implemented, and ideally deployed, the specification. The Council then votes to move it to Final.

    • No Call for Experience this month.

    Spread the news!

    Please share the news on other networks:

    Subscribe to the monthly XMPP newsletter

    Also check out our RSS Feed !

    Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board .

    Help us to build the newsletter

    This XMPP Newsletter is produced collaboratively by the XMPP community. Therefore, we would like to thank Adrien Bourmault (neox), anubis, Anoxinon e.V., Benoît Sibaud, cpm, daimonduff, emus, Ludovic Bocquet, Licaon_Kter, mathieui, MattJ, nicfab, Pierre Jarillon, Ppjet6, Sam Whited, singpolyma, TheCoffeMaker, wurstsalat, Zash for their support and help in creation, review, translation and deployment. Many thanks to all contributors and their continuous support!

    Each month’s newsletter issue is drafted in this simple pad . At the end of each month, the pad’s content is merged into the XSF Github repository . We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

    Tasks we do on a regular basis:

    • gathering news in the XMPP universe
    • short summaries of news and events
    • summary of the monthly communication on extensions (XEPs)
    • review of the newsletter draft
    • preparation of media images
    • translations

    License

    This newsletter is published under CC BY-SA license .

xmpp.org/2022/05/the-xmpp-newsletter-april-2022/

      Gajim: Development News April 2022

      news.movim.eu / PlanetJabber • 30 April, 2022 • 1 minute

    This month came with a lot of preparations for the release of Gajim 1.4 🚀 Gajim’s release pipeline has been improved in many ways, allowing us to make releases more frequently. Furthermore, April brought improvements for file previews on Windows.

    Changes in Gajim

    For two and a half years I (wurstsalat) have been writing (and translating) Gajim’s monthly development news. Keeping this up on a monthly basis takes a lot of time and effort. Upcoming development news will be released on an irregular basis, focussing on features instead of monthly progress.

    It has been a while since the release of Gajim 1.3.3. But why does it take so long until a new version gets released? One of the reasons is the amount of manual work it takes to update every part of Gajim’s internals for a new release. This does not include functional changes, but only things which need to be updated (version strings, translations, changelogs, etc.) before a new version can be deployed. Note that Gajim is available for multiple distributions on Linux, for Flatpak, and for Windows, which makes releasing a new version more complicated. In order to make releases happen more frequently, i.e. reducing the manual work involved in deploying a new version, great efforts have been made:

    • deployment pipelines have been established on Gajim’s Gitlab
    • the process of applying Weblate translations has been integrated better
    • changelogs will be generated automatically from git ’s commit history
    • Flatpak update process has been simplified

    There are more improvements to come, but this should already make deploying a new version much easier.

    What else happened:

    • Sentry integration has been improved
    • libappindicator is now used on Wayland, if available
    • downloading a file preview can now be cancelled
    • mime type guessing for file previews has been improved on Windows
    • audio previews are now available on Windows
    • Security Labels ( XEP-0258 ) selector has been improved
    • improvements for private chat messages

    Plugin updates

    Gajim’s OpenPGP plugin received an update with some usability improvements.

    Changes in python-nbxmpp

    python-nbxmpp is now ready for being deployed quickly as well.

    As always, feel free to join gajim@conference.gajim.org to discuss with us.

    Gajim

gajim.org/post/2022-04-30-development-news-april/

      JMP: Newsletter: New Staff, New Commands

      Stephen Paul Weber • news.movim.eu / PlanetJabber • 27 April, 2022 • 1 minute

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    The JMP team is growing.  This month we added root , whom many of you will know from the chatroom.  root has been a valuable and helpful member of the community for quite some time, and we are pleased to add them to the team.  They will be primarily helping with support and documentation, but also with, let’s face it, everything else.

The account settings bot has a new command for listing recent financial transactions.  You can use this command to check on your auto top-ups, recent charges for phone calls, rewards for referrals, etc.  There is now also a command for changing your Jabber ID, so if you find yourself changing it for any reason you can do that yourself without waiting for support to do it manually.

This month also saw the release of Cheogram Android 2.10.5-2.  This version has numerous bug fixes for crashes and other edge cases, and is based on the latest upstream code, which includes a security fix, so be sure to update!  Support for Tor and extended connection settings has also been fixed, a new darker theme has been added, and there are UI tweaks to recognize that messages are often encrypted with TLS.

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!

blog.jmp.chat/b/april-newsletter-2022

      Erlang Solutions: What are the key trends in digital payments? part 2/2

      news.movim.eu / PlanetJabber • 25 April, 2022 • 3 minutes

    In the second and final part of this article, we take a look at some of the important developments in how payments work using our fintech industry knowledge and experience working on some of the most performant fintech systems in the world such as Vocalink’s Instant Payments Solution (IPS).

    In part 1 we looked at the rapid growth in e-commerce, demand for faster payments and consumer adoption of relatively new ways of payments. You can read it in full here.

    If you are involved in the payments industry in the Nordics region of Europe, make sure to catch up with our Nordics Managing Director, Erik Schön, who will be taking part in a panel discussion at NextGen Nordics taking place in Stockholm, Sweden, on 27 April.

    Leveraging payments data

    The diverse range of digital touchpoints involved in a cashless payments ecosystem provides vast amounts of data. This is of significant value to banks and fintechs to grow client relationships based on analytics and insights. Companies that can unlock the true value of payment activity data by leveraging powerful AI and ML tools can offer more efficient, tailored products and a more secure, protected environment in which to operate.

    We can expect the full implementation of the messaging standard ISO20022 to be a potentially vital part of improving the amount and quality of payments data available in the industry. ISO20022, as the global standard for payment messaging, provides, for the first time, a shared language to be used for transactions made by anyone, anywhere.

    Financial crime

All of the converging trends discussed here and in part 1 of this post present a clear dilemma in modern payments: customers are demanding easier, real-time transactions actioned on any connected device, while regulators are concerned about increased exposure to fraud. Payments companies, therefore, need to strike the right balance, delivering new user-friendly processes that are also highly secure and compliant.

    Rapidly increased e-commerce provides an opening for fraudsters. The use of AI and ML allows payment companies to detect fraud earlier by learning the financial habits of clients so that unusual behaviour is highlighted. Fraud prevention security measures, like voice-activated transactions, biometric authentication, and smart assistant payment verification, all have a part to play in securing the future of digital payments.

    Future models of payments infrastructure

    On the frontend, we are seeing more forms of instant payments and innovations such as digital wallets and embedded finance. Disruption of the payment industry is moving on from providing these user-friendly frontend mobile apps to improving backend processing and the infrastructure used to execute payments.

For instance, real-time payments technology and the utilisation of cryptocurrencies, stablecoins, CBDCs, blockchain, AI and ML can all help improve the speed and security of payment gateway services used for cross-border and multi-channel payment systems, which have suffered from slow processing and expensive transaction fees.

With the emergence of blockchain and decentralised finance, transactions can bypass the traditional financial institutions altogether to enable direct payments between parties and revolutionise the industry.

    Conclusion

    The modern payments ecosystem is varied with lots of layers and companies filling niche use cases. Overall, it can be said that the journey towards more digital, open and real-time operations mirrors the way society at large is increasingly living online.

    Many of the trends covered in this article may be set to converge in the emerging metaverse space, but the shape of that is some way from being determined. What will money look like in the metaverse? How will existing cross-border payment rails integrate with a borderless, global ecosystem?

    To discuss the answer to these questions and the P27 response to the metaverse, you can join us at NextGen Nordics taking place in Stockholm, Sweden, on 27 April, where our Nordics Managing Director, Erik Schön, will be taking part in a panel discussion on the topic.

    Our payments expertise

    We help create digital and mobile payment solutions that facilitate e-commerce, payments processing and backend services that enhance the customer experience and protect customer data. We work with clients who provide services in DeFi, cryptocurrency, blockchain, GameFi, clearing and settlements, embedded finance, real-time payments, payment gateways and more.

    Our solutions use battle-tested technologies delivered by domain experts for secure, compliant and resilient solutions.

    Let’s start a conversation about your payments project >

    To make sure you don’t miss more great fintech content from us, you should sign up for our Fintech Matters mailing list.

    Sign up here >

    The post What are the key trends in digital payments? part 2/2 appeared first on Erlang Solutions .

www.erlang-solutions.com/blog/what-are-the-key-trends-in-digital-payments-part-2-2/

      Erlang Solutions: What are the key trends in digital payments? part 1/2

      Michael Jaiyeola • news.movim.eu / PlanetJabber • 21 April, 2022 • 4 minutes

    Payments are the backbone of a functioning global economy. A payments system can be defined as any system that can be used to settle a financial transaction by exchanging monetary value. Payments are a part of financial services that have undergone rapid and transformational change over recent years, and the Erlang Solutions team has been at the cutting-edge of many of these changes working on exciting client projects with global card payments companies to fintech startups.

    In this two part article, we take a look at some of the main drivers of change to the way payments work and to the broader payments ecosystem using our fintech industry knowledge and experience working on some of the most performant fintech systems in the world such as Vocalink’s Instant Payments Solution (IPS).

    If you are involved in the payments industry in the Nordics region of Europe, make sure to catch up with our Nordics Managing Director, Erik Schön, who will be taking part in a panel discussion at NextGen Nordics taking place in Stockholm, Sweden, on 27 April.

    The digital payments landscape

    Evolving customer expectations alongside technological advances are driving innovation that prioritises speed, near to real-time payments, frictionless transactions and decentralised models. Also, compounded by the pandemic, significant growth of digital commerce has led to record payment volumes in most markets. Combined, these factors make payments one of the most interesting areas of financial services. There are opportunities for innovative fintechs to provide better client experiences, traditional players to expand their services and tech enablers to offer alternative ways for transactions to flow.

    With market competition driving fee decreases, it is a challenge for traditional players to maintain the same levels of profitability while using existing payment infrastructure. We have seen some of our fintech clients launch into the payments ecosystem offering a more diverse range of services, and traditional payments companies are responding by leveraging the huge amounts of data at their disposal to guide a strategy of adding to their offering. These new services are in areas including loyalty, tailored offers, data insights, risk management and more.

    At the heart of payments, these themes have been massively accelerated over the pandemic years. You can take a deeper dive into this and other emerging tech trends in financial services by downloading our Fintech Trends in 2022 white paper.

    A cashless world

    Consumers’ shift to digital channels is driving demand for seamless fulfilment and instant gratification. A recent Capgemini World Payments Report survey found an increase from 24% to 46% in respondents who had e-commerce accounting for more than half of their monthly spending when comparing before the pandemic to now.

With 91% of the global population expected to own a smartphone by 2026, according to Statista [1] , these customers are unlikely to return to the way things were done before, having experienced the efficiencies offered by digital payments.

    Nayapay , one of our clients in the South Asian market using our MongooseIM chat engine , is an example of a new player in the payments space seizing the opportunity to disrupt local markets. Their chat-based payments app targets the unbanked in Pakistan and is built around fusing the penetration of smartphone usage and people’s willingness to integrate transactions into their daily social, digital activities.

    Demand for faster payments

    Demand for instant transactions is driving change in cross-border payments, international remittances and e-commerce. Previously, mirroring the instantaneousness of a cash transaction via electronic means had been an ongoing technical challenge. Now, the introduction of real-time clearing and settlement facilities in many markets makes processing payments almost instantly possible. At the beginning of 2020, The Clearing House’s Real-Time Payments System had 19 large institutions participating. Today, it has 114 banks and credit unions as direct members.

    Frustration with the latency and cost of the traditional banking model has led to the emergence of alternative options. Innovative solutions such as the P27 initiative in the Nordic region show how fintech can blend with conventional systems to provide better payments infrastructure for all. P27, named from the 27 million citizens in the Nordic region, aims to integrate the payments of four countries and currencies into a single immediate payment system, again using the Vocalink IPS.

    Growth in embedded payments

    Embedded finance is changing the way payments are made where financial products are added to the transactional flow in non-financial platforms. With consumers demanding ever more convenient, frictionless ways to make payments using various devices from wallets to wearables, embedded or contextual payment options add convenience and speed to the payments process.

    On the merchant side of things, embedded finance helps them to better understand the best payment terms to offer customers, provide seamless checkout, request payment, and offer financing such as buy now pay later (BNPL), all within a single customer experience.

    Aside from BNPL, other financial products (like lending and card-issuing) are also moving into contextual environments. Major banks can extend their reach to millions of new users through Banking-as-a-Service (BaaS) APIs to technology businesses and platforms outside of the financial services industry.

    Stay tuned…

    To make sure you don’t miss the second part of this look at modern payments where we examine what these trends mean for strategy setting by industry players and what the future might look like, you should sign up to our Fintech Matters mailing list.

    Sign up here >>

    The post What are the key trends in digital payments? part 1/2 appeared first on Erlang Solutions .

www.erlang-solutions.com/blog/what-are-the-key-trends-in-digital-payments-part-1-2/

      Erlang Solutions: Understanding Processes for Elixir Developers

      Carlo Gilmar • news.movim.eu / PlanetJabber • 21 April, 2022 • 9 minutes

This post is for all developers who want to try Elixir or are taking their first steps in it. This content is aimed at those who already have previous experience with another programming language.

This will help to explain one of the most important concepts in the BEAM: processes. Elixir is a general-purpose programming language, and you don’t need to understand how the virtual machine works to use it. But if you want to take advantage of all its features, this will help you understand how to design programs in the Erlang ecosystem.

    Why understand processes?

Scalability, fault-tolerance, concurrent design, distributed systems – the list of things that Elixir famously makes easy is long and continues to grow. All these features come from the Erlang Virtual Machine, the BEAM. Elixir is a general-purpose programming language. You can learn the basics of this functional programming language without understanding processes, especially if you come from another paradigm. But understanding processes is an excellent way to grow your understanding of what makes Elixir so powerful and how to harness it for yourself, because processes represent one of the key concepts from the BEAM world.

    What is a process in the BEAM world

    A process is an isolated entity where code execution happens.

Processes are everywhere in an Erlang system: the iex shell, the observer, and the OTP patterns are all examples of processes. They are lightweight, allowing us to run concurrent programs and build distributed, fault-tolerant designs. You can browse the wide variety of processes in a running system using the Erlang observer application. Because of how the BEAM runs code, you can run a program inside a module without ever knowing the execution happens in a process.

    Most Elixir users would have at least heard of OTP. OTP allows you to use all the capabilities of the BEAM, and understanding what happens behind the scenes will help you to take full advantage of abstractions when designing processes.

A process is an isolated entity where code execution happens, and processes are the base on which software systems are designed to take advantage of the BEAM's features.

You can think of a process as a block of memory where you store and manipulate data. A process comes with built-in memory organised in the following parts:

• Stack: Keeps local variables.
• Heap: Keeps larger structures.
• Mailbox: Stores messages sent from other processes.
• Process Control Block: Keeps track of the state of the process.
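These parts can be observed at runtime with Process.info/2. A minimal sketch (sizes are reported in machine words and vary by VM version, so treat the numbers you see as illustrative):

```elixir
# Spawn a process that blocks in receive, so we can inspect it while alive.
pid = spawn(fn ->
  receive do
    :stop -> :ok
  end
end)

# Stack, heap, and mailbox sizes are visible via Process.info/2.
IO.inspect(Process.info(pid, [:stack_size, :heap_size, :message_queue_len]))

# An unmatched message just sits in the mailbox...
send(pid, :hello)
Process.sleep(50)
IO.inspect(Process.info(pid, :message_queue_len))  # {:message_queue_len, 1}

# ...while a matched one is consumed and, here, ends the process.
send(pid, :stop)
Process.sleep(50)
IO.inspect(Process.alive?(pid))  # false
```

Note how :hello never matches the receive clause, so it stays in the mailbox until the process dies; receive is selective, scanning the mailbox for the first matching message.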

    Process lifecycle

    We could define the process lifecycle for now as:

    1. Process creation
    2. Code execution
    3. Process termination

    Process Creation

The function spawn helps us create a new process. We need to provide a function to be executed inside it, and spawn returns the process identifier ( PID ). Elixir also has a module called Process that provides functions to inspect processes.

    Let’s look at the following example using the iex :

    1. Create an anonymous function to print a string. (The function self/0 gives the process identifier).
    2. PROCESS CREATION: Invoke the function spawn with the function created as param. This will create a new process.
    3. CODE EXECUTION: The process created will execute the function provided immediately. You should see the message printed.
    4. TERMINATING: After the code execution, the process will terminate. You can check this using the function Process.alive?/1 , if the process is still alive, you should get a true value.
    iex(1)> execute_fun = fn -> IO.puts "Hi! ⭐  I'm the process #{inspect(self())}." end
    #Function<45.65746770/0 in :erl_eval.expr/5>
    
    iex(2)> pid = spawn(execute_fun)
    Hi! ⭐  I'm the process #PID<0.234.0>.
#PID<0.234.0>
    
    
    iex(2)> Process.alive?(pid) 
    false

    Receiving messages

A process is an entity that executes code but can also receive and process messages from other processes (processes can communicate with each other only by passing messages). Sometimes this might be confusing, but these are two different and complementary capabilities.

The receive statement helps us process the messages stored in the mailbox. You can send messages using the send function, but the receiving process will only store them in its mailbox. To process them, you must use the receive statement, which puts the process into a mode that waits for messages to arrive.

1. Let’s create a new module with the function awaiting_for_receive_messages/0 to implement the receive statement.
2. PROCESS CREATION: Spawn a new process with the module created. This will execute the function with the receive statement.
3. CODE EXECUTION: Since the function provided has the receive statement, the process is put into a waiting mode, so it stays alive until it processes a message. We can verify this using Process.alive?/1.
4. RECEIVE A MESSAGE: We can send a new message to this process using the function send/2. Remember the message is stored in the mailbox and then handled by the receive statement.
5. TERMINATING: Once the message has been processed, our process will die.

    
    iex(1)> defmodule MyProcess do
    ...(1)>   def awaiting_for_receive_messages do
    ...(1)>     IO.puts "Process #{inspect(self())}, waiting to process a message!"
    ...(1)>     receive do
    ...(1)>       "Hi" ->
    ...(1)>         IO.puts "Hi from me"
    ...(1)>       "Bye" ->
    ...(1)>         IO.puts "Bye, bye from me"
    ...(1)>       _ ->
    ...(1)>         IO.puts "Processing something"
    ...(1)>     end
    ...(1)>     IO.puts "Process #{inspect(self())}, message processed. Terminating..."
    ...(1)>   end
    ...(1)> end
    
    iex(2)> pid = spawn(MyProcess, :awaiting_for_receive_messages, [])
    Process #PID<0.125.0>, waiting to process a message!
    #PID<0.125.0>
    
    
    iex(3)> Process.alive?(pid)
    true
    
    
    iex(4)> send(pid, "Hi")
    Hi from me
    "Hi"
    Process #PID<0.125.0>, message processed. Terminating...
    
    
    iex(5)> Process.alive?(pid)
    false
    
    

    Keeping the process alive

    One option is to keep the process running so it can keep processing messages from the mailbox. Remember how receive works: we can simply call the function containing it again after processing a message, creating a continuous loop that prevents termination.

    1. Modify our module to call the same function again after processing a message.
    2. PROCESS CREATION: Spawn a new process.
    3. RECEIVE A MESSAGE: Send a message to the process we created. After processing it, the function calls itself again, invoking receive to wait for the next message. This prevents the process from terminating.
    4. CODE EXECUTION: Use Process.alive?/1 to verify that our process is still alive.
    iex(1)> defmodule MyProcess do
    ...(1)>   def awaiting_for_receive_messages do
    ...(1)>     IO.puts "Process #{inspect(self())}, waiting to process a message!"
    ...(1)>     receive do
    ...(1)>       "Hi" ->
    ...(1)>         IO.puts "Hi from me"
    ...(1)>         awaiting_for_receive_messages()
    ...(1)>       "Bye" ->
    ...(1)>         IO.puts "Bye, bye from me"
    ...(1)>         awaiting_for_receive_messages()
    ...(1)>       _ ->
    ...(1)>         IO.puts "Processing something"
    ...(1)>         awaiting_for_receive_messages()
    ...(1)>     end
    ...(1)>   end
    ...(1)> end
    
    
    iex(3)> pid = spawn(MyProcess, :awaiting_for_receive_messages, [])
    Process #PID<0.127.0>, waiting to process a message!
    #PID<0.127.0>
    
    
    iex(4)> Process.alive?(pid)
    true
    
    
    iex(5)> send(pid, "Hi")
    Hi from me
    Process #PID<0.127.0>, waiting to process a message!
    "Hi"
    
    iex(6)> Process.alive?(pid)
    true
    
    

    Hold the state

    Up to this point we have seen how to create a new process, process messages from the mailbox, and keep the process alive. The mailbox is the part of the process where incoming messages are stored, but a process also has its own memory, which allows it to keep an internal state. Let’s see how to hold one.

    1. Let’s modify our module so the function receives a list that stores all the messages received. We just need to call the same function again, passing the updated list as an argument. We’ll print the list of messages as well.
    2. PROCESS CREATION: Spawn a new process. The function’s default argument provides an empty list in which to store the messages.
    3. RECEIVE A MESSAGE: Send a message to the process. After receiving it, the function calls itself with the updated list of messages as an argument. This prevents termination and updates the internal state.
    4. RECEIVE A MESSAGE: Send another message and check the output. You should see the list of all the messages processed so far.
    5. CODE EXECUTION: Use Process.alive?/1 to verify that our process is alive.

    defmodule MyProcess do
      def awaiting_for_receive_messages(messages_received \\ []) do
        receive do
          "Hi" = msg ->
            IO.puts "Hi from me"
            [msg | messages_received]
            |> IO.inspect(label: "MESSAGES RECEIVED")
            |> awaiting_for_receive_messages()
    
          "Bye" = msg ->
            IO.puts "Bye, bye from me"
            [msg | messages_received]
            |> IO.inspect(label: "MESSAGES RECEIVED")
            |> awaiting_for_receive_messages()
    
          msg ->
            IO.puts "Processing something"
            [msg | messages_received]
            |> IO.inspect(label: "MESSAGES RECEIVED")
            |> awaiting_for_receive_messages()
        end
      end
    end
    
    
    iex(3)> pid = spawn(MyProcess, :awaiting_for_receive_messages, [])
    #PID<0.132.0>
    
    iex(4)> Process.alive?(pid)
    true
    
    iex(5)> send(pid, "Hi")
    Hi from me
    "Hi"
    MESSAGES RECEIVED: ["Hi"]
    
    
    iex(6)> send(pid, "Bye")
    Bye, bye from me
    "Bye"
    MESSAGES RECEIVED: ["Bye", "Hi"]
    
    
    iex(7)> send(pid, "Heeeey!")
    Processing something
    "Heeeey!"
    MESSAGES RECEIVED: ["Heeeey!", "Bye", "Hi"]
    
    

    How to integrate Elixir reasoning in your processes

    Well done! I hope all of these examples and explanations were enough to illustrate what a process is. It’s important to keep the anatomy and the life cycle in mind to understand what’s happening behind the scenes.

    You can design directly with processes, or with OTP abstractions, but the concepts behind them are the same. Let’s look at an example with Phoenix LiveView:

    defmodule DemoWeb.ClockLive do
      use DemoWeb, :live_view
    
      def render(assigns) do
        ~H"""
        <div>
          <h2>It's <%= NimbleStrftime.format(@date, "%H:%M:%S") %></h2>
          <%= live_render(@socket, DemoWeb.ImageLive, id: "image") %>
        </div>
        """
      end
    
      def mount(_params, _session, socket) do
        if connected?(socket), do: Process.send_after(self(), :tick, 1000)
    
        {:ok, put_date(socket)}
      end
    
      def handle_info(:tick, socket) do
        Process.send_after(self(), :tick, 1000)
        {:noreply, put_date(socket)}
      end
    
      def handle_event("nav", _path, socket) do
        {:noreply, socket}
      end
    
      defp put_date(socket) do
        assign(socket, date: NaiveDateTime.local_now())
      end
    end

    While the functions render/1 and mount/3 allow you to set up the LiveView, the functions handle_info/2 and handle_event/3 update the socket, which is an internal state. Does this sound familiar? This is a process! A LiveView is an OTP abstraction that creates a process behind the scenes, with additional machinery on top. In this particular case, the essence of the process shows when the LiveView re-renders the HTML, keeps all the variables inside its state, and handles all the interactions that modify it.

    Understanding processes gives you the concepts to understand how the BEAM works and to learn how to design better programs. Many libraries written in Elixir, as well as the OTP abstractions, use these concepts too, so next time you use one of them in your projects, think back to these explanations to better understand what’s happening under the hood.

    Thanks for reading this. If you’d like to learn more about Elixir, check out our training schedule or join us at ElixirConf EU 2022.

    About the author

    Carlo Gilmar is a software developer at Erlang Solutions based in Mexico City. He started his journey as a developer at Making Devs, and he’s the founder of Visual Partner-Ship, a creative studio mixing technology and visual thinking.

    The post Understanding Processes for Elixir Developers appeared first on Erlang Solutions .

    • wifi_tethering open_in_new

      This post is public

      www.erlang-solutions.com /blog/understanding-processes-for-elixir-developers/

    • chevron_right

      Erlang Solutions: Introducing Stream Support In RabbitMQ

      Erlang Admin • news.movim.eu / PlanetJabber • 14 April, 2022 • 9 minutes

    In July 2021, streams were introduced to RabbitMQ, utilizing a new blazingly fast protocol that can be used alongside AMQP 0.9.1. Streams offer an easier way to solve a number of problems in RabbitMQ, including large fan-outs, replay & time travel, and large logs, all with very high throughput (1 million messages per second on a 3-node cluster). Arnaud Cogoluègnes, Staff Engineer @ VMware, introduced streams and how they are best used.

    This talk was recorded at the RabbitMQ Summit 2021. The 4th edition of RabbitMQ Summit is taking place as a hybrid event, both in-person (at the CodeNode venue in London) and virtual, on 16th September 2022, and brings together some of the world’s biggest companies using RabbitMQ, all in one place.

    Streams: A New Type of Data Structure in RabbitMQ

    Streams are a new data structure in RabbitMQ that open up a world of possibilities for new use cases. They model an append-only log, which is a big change from traditional RabbitMQ queues, as they have non-destructive consumer semantics. This means that when you read messages from a Stream, you don’t remove them, whereas, with queues, when a message is read from a queue, it is destroyed. This re-readable behaviour of RabbitMQ Streams is facilitated by the append-only log structure.

    RabbitMQ also introduced a new protocol, the Stream protocol, which allows much faster message flow; however, you can access Streams through the traditional AMQP 0.9.1 protocol as well, which remains the most used protocol in RabbitMQ. Streams are also accessible through the other protocols that RabbitMQ supports, such as MQTT and STOMP.

    Streams’ strong points

    Streams have unique strengths that allow them to shine for some use cases. These include:

    Large fan-outs

    When you have multiple applications in your system needing to read the same messages, you have a fan-out architecture. Streams are great for fan-outs, thanks to their non-destructive consuming semantics, removing the need to copy the message inside RabbitMQ as many times as there are consumers.

    Time-travel

    Streams also offer replay and time-travelling capabilities. Consumers can attach anywhere in a stream, using an absolute offset or a timestamp, and they can read and re-read the same data as many times as needed.

    Throughput

    Thanks to the new stream protocol, streams have the potential to be significantly faster than traditional queues. If you want high throughput or you are working with large messages, streams can often be a suitable option.

    Large messages

    Streams are also good for large logs. Messages in streams are always persistent on the file system, and the messages don’t stay in the memory for long. Upon consumption, the operating system’s file cache is used to allow for fast message flow.

    The Log Abstraction

    A stream is immutable: you can add messages, but once a message has entered the stream, it cannot be removed. This makes the log abstraction of the stream quite a simple data structure compared to queues, where messages are constantly added and removed. This brings us to another important concept, the offset. The offset is just a technical index of a message inside the stream, or a timestamp. Consumers can instruct RabbitMQ to start reading from an offset instead of the beginning of the stream, which allows for easy time-travelling and replaying of messages. Consumers can also delegate the offset-tracking responsibility to RabbitMQ.

    We can have any number of consumers on a stream; they don’t compete with each other. One consuming application will not steal messages from the others, and the same application can read the stream of messages many times.
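This append-only, non-destructive log abstraction can be illustrated with a tiny toy model (a sketch for intuition only; `StreamLog` and its methods are invented here and are not part of any RabbitMQ API):

```ruby
# Toy model of a stream: an append-only log with offset-based,
# non-destructive reads. Illustrative only; not a RabbitMQ API.
class StreamLog
  def initialize
    @entries = []
  end

  # Appending returns the offset (index) of the new message.
  def append(message)
    @entries << message
    @entries.length - 1
  end

  # Any number of consumers can read from any offset, any number of
  # times; reading removes nothing from the log.
  def read_from(offset)
    @entries[offset..] || []
  end
end

log = StreamLog.new
log.append("a")  # => 0
log.append("b")  # => 1
log.append("c")  # => 2
log.read_from(1) # => ["b", "c"]
log.read_from(1) # => ["b", "c"] again: consumption is non-destructive
```

The key difference from a queue is visible in `read_from`: two consumers (or the same one, twice) attaching at the same offset see exactly the same messages.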

    Queues can store messages in memory or on disk, and they can live on a single node or be mirrored. Streams, by contrast, are persistent and replicated at all times. When we create a stream, it has a leader located on one node and replicas on other nodes. The replicas follow the leader and synchronize the data. The leader is the only one that can handle write operations, and the replicas are used only to serve consumers.

    RabbitMQ Queues vs. Streams

    Streams are here to complement queues and to expand the use cases for RabbitMQ. Traditional queues are still the best tool for the most common use cases in RabbitMQ, but they do have their limitations; there are times when they are not the best fit.

    Streams are, like queues, a FIFO data structure, i.e. the oldest message published will be read first. Providing an offset lets the client skip the beginning of the stream, but the messages will still be read in the order of publishing.

    In RabbitMQ you have a traditional queue with a couple of messages and a consuming application. After registering the consumer, the broker will start dispatching messages to the client, and the application can start processing.

    At this point, the message is at an important stage of its lifetime: it is present on the sender’s side and also on the consuming side. The broker still needs to care about the message, because it can be rejected, and the broker must know that it hasn’t been acknowledged yet. After the application finishes processing the message, it can acknowledge it, and from that point the broker can get rid of the message and consider it processed. This is what we call destructive consumption, and it is the behaviour of Classic and Quorum Queues. When using Streams, the message stays in the Stream for as long as the retention policy allows.

    Implementing massive fan-out setups with RabbitMQ was not optimal before Streams. Messages come in, go to an exchange, and are routed to a queue. If you want another application to process the messages, you need to create a new queue, bind it to the exchange, and start consuming. This process creates a copy of the messages for each application, and if you need yet another application to process the same messages, you need to repeat the process: yet another queue, a new binding, a new consumer, and a new copy of each message.

    This method works, and it’s been used for years, but it doesn’t scale elegantly when you have many consumer applications. Streams provide a better way to implement this, as the messages can be read by each consumer separately, in order, from the Stream.

    RabbitMQ Stream Throughput using AMQP and the Stream Protocol

    As explained in the talk, Streams showed higher throughput than Quorum Queues: about 40,000 messages per second with Quorum Queues and 64,000 messages per second with Streams. This is because Streams are a simpler data structure than Quorum Queues; they don’t have to deal with complications such as message acknowledgement, rejected messages, or requeuing.

    Quorum Queues are still state-of-the-art replicated and persistent queues; Streams are for other use cases. When using the dedicated Stream protocol, throughputs of one million messages per second are achievable.

    The Stream protocol has been designed with performance in mind, and it utilises low-level techniques such as the libc sendfile API, the OS page cache, and batching, which make it faster than AMQP queues.

    RabbitMQ Stream Plugin and Clients

    Streams are available through a new plugin in the core distribution. When it is turned on, RabbitMQ starts listening on a new port which can be used by clients that understand the Stream protocol. It’s integrated with the existing infrastructure in RabbitMQ, such as the management UI, the REST API, and Prometheus.

    There are dedicated Java and Go clients using this new stream protocol; the Java client is the reference implementation. A tool for performance testing is also available. Clients for other languages are being actively worked on by the community and the core team.

    The stream protocol is a bit simpler than AMQP: there’s no routing; you just publish to a stream, with no exchange involved, and you consume from a stream just like from a queue. No logic is needed to decide where the message should be routed. When you publish a message from your client application, it goes to the network, then almost directly to storage.

    There is excellent interoperability between streams and the rest of RabbitMQ. The messages can be consumed from an AMQP 0.9.1 client application and it also works the other way around.

    Example Use Case for Interoperability

    Queues and Streams live in the same namespace in RabbitMQ, therefore you can specify the name of the Stream you want to consume from using the usual AMQP clients and by using the x-stream-offset parameter for basicConsume.

    It’s very easy to publish with AMQP clients because it’s the same as with Queues, you publish to an exchange.

    As an example of how you can imagine using streams: you have a publisher publishing messages to an exchange, and depending on the routing key of each message, the messages are routed to different queues, one for each region of the world. For example, you have a queue for the Americas, one for Europe, one for Asia, and one for HQ, and for each region a dedicated consuming application does some region-specific processing.

    If you upgrade to RabbitMQ 3.9 or later, you can simply create a stream and bind it to the exchange with a wildcard, so that all messages are still routed to the queues but the stream also gets all the messages. Then you can point an application using the Stream protocol at this stream, and we can imagine that this application does some worldwide analytics every day, without even needing to read the stream very quickly. This is how streams can fit into existing applications.

    Guarantees for RabbitMQ Streams

    Streams support at-least-once delivery, with a mechanism similar to AMQP Publisher Confirms. There is also a deduplication mechanism: the broker filters out duplicate messages based on a publishing sequence number, such as a key in a database or a line number in a file.
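The deduplication idea can be sketched roughly like this (a toy illustration only, not the broker's actual implementation; the `Deduplicator` class and its method names are invented):

```ruby
# Toy sketch of sequence-number deduplication: each producer attaches
# an increasing publishing sequence number, and the broker drops
# anything it has already seen from that producer. Illustrative only;
# not RabbitMQ code.
class Deduplicator
  def initialize
    # Last sequence number seen per producer; -1 means "none yet".
    @last_seq = Hash.new(-1)
  end

  # Accept the message only if its sequence number is greater than the
  # last one seen for this producer; otherwise treat it as a duplicate.
  def accept?(producer_id, seq)
    return false if seq <= @last_seq[producer_id]
    @last_seq[producer_id] = seq
    true
  end
end
```

With this scheme, a publisher that retries after a lost confirm simply resends with the same sequence number, and the broker discards the duplicate.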

    On both sides, we have flow control, so fast publishers’ TCP connections will be blocked. The broker will only send messages to the client when it is ready to accept them.

    Summary

    Streams are a new replicated and persistent data structure in RabbitMQ that models an append-only log. They are a good fit for large fan-outs, replay and time-travel features, high-throughput scenarios, and large logs. They store their data on the file system and never in memory.

    If you think Streams or RabbitMQ could be useful to you but don’t know where to start, talk to our experts , we’re always happy to help. If you want to see the latest features and case studies from the world of RabbitMQ, join us at RabbitMQ Summit 2022.

    The post Introducing Stream Support In RabbitMQ appeared first on Erlang Solutions .

    • wifi_tethering open_in_new

      This post is public

      www.erlang-solutions.com /blog/introducing-stream-support-in-rabbitmq%ef%bf%bc/

    • chevron_right

      Maxime Buquet: An overview of my threat model

      pep. (blog@bouah.net) • news.movim.eu / PlanetJabber • 13 April, 2022 • 7 minutes

    I was interested in knowing what kind of threat model people had when using XMPP, so I asked on the newly created XMPP-related community forum – which uses Lemmy , a decentralized alternative to Reddit built on ActivityPub! I had an idea for myself, but I didn’t realize it was going to be this long an answer, so I decided to write it down here instead. I’ll be posting the link there .

    Building up a threat model means identifying what and/or whom you are trying to protect against. This allows you to take steps to ensure you are actually protected against what you think you want to protect against. A threat model is meant to be refined and improved over time.

    I have two main use-cases, and I’ll go through one of them; the other is less involved, though definitely influenced by this one. This is surely incomplete, but it should still give a pretty good overview.

    I started doing some activism over the past few years, and I’ve had to adapt how I communicate. It seems not many people in these groups are aware of the amount of information that is recoverable by an attacker. I was surprised how little security culture there was, even though I wasn’t doing much of it myself before (because I didn’t think I needed it, really). As you may have guessed, this concerns a lot more than just instant messaging, but that is what this article focuses on.

    The threat model

    For this use-case, I want to make it hard for anybody to trace my actions back to my civil identity and those of my friends. While I know this is never going to be perfect, and the attacker here has way more resources than we have, we do what we can to reduce the impact on us. I am also aware that many attacks are theoretical and may be used nowhere in practice, but that doesn’t mean we should ignore them either.

    Online, I want to protect myself against passive state-level surveillance, but also targeted surveillance to some extent. Offline, I need to protect the devices I use. In case they are seized by the police, I want to prevent them from getting too much information, so they have less material to charge us with. But if it comes to that, chances are they will be able to associate my different identities.

    Some may think that with this threat model in mind I wouldn’t trust the server administrator, but this is a false dichotomy. What I don’t want is my data falling into the hands of an intruder, such as the police taking over the server. For one, server admins are legally required to hand over encryption passphrases in many jurisdictions; moreover, mistakes are human, and hacking into a server may not be so hard with the right amount of resources.

    How does this work with XMPP?

    First, this is not specific to XMPP: we don’t use our civil identities, we use pseudonyms. In these circles we mostly don’t know each other’s civil identities, and it’s not useful anyway. It’s the same online, for example in the free software community, where there’s no reason why you’d need this information.

    We use Tor , so the ISP and middle boxes don’t know where we connect to, and the XMPP server doesn’t know where we connect from.

    We create accounts on populated public XMPP servers, and connect to them using TLS – which has been the default for a long time now – and use member-only / private (non-public) rooms to talk together, with OMEMO . We don’t know all of the people in the room but there is some kind of trust chain.

    We’re not verifying OMEMO fingerprints, as we may not know everybody in the room, and changing devices or OMEMO keys also hurts the user experience when combined with fingerprint verification.

    On devices (PCs, smartphones), we use full-disk encryption where possible. As we generally use second-hand phones, the feature may not always be available. A generic piece of advice I give is to set a passphrase on the OS and also to clear client logs regularly. This can be configured in Conversations on Android; I don’t know about iOS clients.

    The baseline is: your smartphone is your weak point, even though most of us have one because it’s convenient. This is certainly the first piece that will incriminate you, if it’s not you or your friends doing so inadvertently.

    What I’d like to improve in XMPP

    There are so many details that I have no clue about that could be used against me to correlate my different identities.

    I use multiple accounts on Conversations , as well as Dino on the desktop for this use-case. Randomizing connections to the various accounts could be one thing to improve.

    I don’t use Poezio for anything else than my civil identity, because Poezio isn’t very much used. Even though it may also be the case for Dino..

    Currently, a few things in server logs can be used to identify a client, such as the resource string, set by the client to something similar to clientname.randombits , or the disco#info result, which lists the capabilities of a client. Both are stored on the server for possibly good reasons, but that’s always more information that could identify somebody.

    I remember developers asking for the resource to be easily distinguishable for debugging purposes. Having something à la docker container names should be good enough for this (a list of adjectives and names combined into random <adjective>_<name> ). I am not entirely sure what to do about disco#info being stored.
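Such docker-style resource strings could be generated with something as simple as this (a sketch of the idea only; the word lists and the helper name are placeholders, not from any XMPP client):

```ruby
# Sketch: generate docker-style resource strings ("<adjective>_<name>")
# that stay distinguishable for debugging without leaking the client
# name. The word lists below are placeholders.
ADJECTIVES = %w[brave calm eager happy quiet witty].freeze
NAMES      = %w[curie darwin hopper lovelace noether turing].freeze

def random_resource
  "#{ADJECTIVES.sample}_#{NAMES.sample}"
end

random_resource # e.g. "calm_turing"
```

Each reconnect could pick a fresh combination, so the resource no longer reveals which client software is in use.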

    A good point for public servers is that they don’t seem to store archives forever anymore (since GDPR ? Or for disk-space concerns maybe). They will generally have 2 weeks / 1 month of (encrypted) activity which, I give you, may be enough in some cases to incriminate someone, but it’s probably better than logs that go back to -infinity.

    The roster is also stored as plaintext on the server and can easily be seized by the police. Encrypted rosters may not be as far off as we imagine: there have been similar efforts in Dovecot to encrypt the user mailbox with a user-provided passphrase. This wouldn’t prevent servers from recreating the roster based on activity when the user is logged in, but that requires more effort, and many wouldn’t bother – leaving this data unavailable as plaintext by default.

    On the client, I would like more private defaults. Tor support is a MUST; fortunately Conversations has it. It’s possible to use Tor with Dino, but one has to know how to set it up on their system, there’s no way to enforce using Tor, and it isn’t shown whether it’s in use either. Same issue in Poezio.

    Storing logs forever is also one thing that I find annoying. It can be configured in Conversations, but it’s not limited by default: the option is hidden in the Expert Settings, where automatic message deletion is set to Never.

    Dino doesn’t have any settings regarding logs. I’d have to clear them myself by going through the sqlite database (pretty technical already). Poezio has a use_log setting that stores every message (and presence depending on config), and it’s also True by default.

    Interactions with OMEMO between non-contacts are a mess. Some servers have the mod_block_strangers module deployed as an anti-spam measure: when a user from such a server joins a private room, non-contacts are prevented from fetching their keys. Dino creates the OMEMO node as accessible only by contacts (to prevent deanonymization in some Prosody MUCs ). And Conversations doesn’t allow sending encrypted messages if it doesn’t have keys for all participants in a private room.

    I am not even talking about OMEMO implementations (using OMEMO 0.3.0 ) which per the spec only encrypt the <body/> element in a message, leaking actual data depending on the feature used, or restricting the feature set greatly. This is fixed in the newer version of the spec but deployed nowhere at the moment.

    I am also not talking about why XMPP and not say Signal, or Telegram. I have already talked about this in part in other articles but that may warrant its own article at some point.

    This article only scratches the surface. There are many more details that would need to be ironed out. And of course implementations need to make choices and can’t answer every single use-case out there. I do wish privacy were more of a concern, though.

    Where is “Privacy by default” gone? Somebody bring it back please.

    • wifi_tethering open_in_new

      This post is public

      bouah.net /2022/04/an-overview-of-my-threat-model/

    • chevron_right

      JMP: Computing International Call Rates with a Trie

      Stephen Paul Weber • news.movim.eu / PlanetJabber • 13 April, 2022 • 4 minutes

    A few months ago we launched International calling with JMP.  One of the big tasks leading up to this launch was computing the rate card: that is, how much calls to different destinations would cost per minute.  While there are many countries in the world, there are even more calling destinations.  Our main carrier partner for this feature lists no fewer than 59881 unique phone number prefixes in the rates they charge us.  This list is, quite frankly, incomprehensible.  One can use it to compute the cost of a call to a particular number, but it gives no confidence about the cost of calls in general.  Many items on this list are similar, and so I set out to create a better list.

    My first attempt was a simple one-pass algorithm.  This would record each prefix with its price and then if a longer prefix with a different price were discovered it would add that as well.  This removes the most obvious effectively-duplicate data, but still left a very large list.  I added our markup and various rounding rules (since increments of whole cents are easier to understand in most cases anyway, for example) which did cut down a bit further, but it became clear that the one-pass was not going to be sufficient.  Consider:

    1. +00 at $0.01
    2. +0010 at $0.02
    3. +0011 at $0.02
    4. +0012 at $0.02
    5. +0013 at $0.02
    6. +0014 at $0.02
    7. +0015 at $0.02
    8. +0016 at $0.02
    9. +0017 at $0.02
    10. +0018 at $0.02
    11. +0019 at $0.02

    There are many sets of prefixes that look like this in the data.  Of course the right answer here is that +001 is $0.02, which is much easier to understand than this list, but the algorithm cannot know that until it has seen all 10 overlapping prefixes.  Even worse:

    1. +00 at $0.01
    2. +0010 at $0.02
    3. +0011 at $0.02
    4. +0012 at $0.02
    5. +0013 at $0.02
    6. +0014 at $0.02
    7. +0015 at $0.03
    8. +0016 at $0.02
    9. +0017 at $0.02
    10. +0018 at $0.02
    11. +0019 at $0.02

    From this input we would like:

    1. +00 at $0.01
    2. +001 at $0.02
    3. +0015 at $0.03

    So just checking if the prefixes we have so far are a fully-overlapped set is not enough.  Well, no problem, it’s not that much data, perhaps I can implement a brute-force approach and be done with it.

    Brute force is very slow.  On this data it completed, but as I found I kept wanting to tweak rounding rules and other parts of the overlap detection the speed became really problematic.  So I was searching for a non-bruteforce way that would be optimal across all prefixes and fast enough to re-run often in order to play with the effects of rounding rules.

    Trie

    As I was discussing the problem with a co-worker, trying to speed up lookups we were thinking about trees.  Maybe a tree where traversal to the next level was determined by the next digit in the prefix?  As we explored what this would look like, it became obvious that we were inventing a Trie .  So I grabbed a gem and started monkeypatching things.

    Most Trie implementations are about answering yes/no questions and don’t store anything but the prefix in the tree.  I wanted to be able to “look down” from any node in the tree to see if the data was overlapping, and so storing rates right in the nodes seemed useful:

    def add_with(chars, rate)
        if chars.empty? # leaf node for this prefix
            @rate = rate
            terminal!
        else
            add_to_children_tree_with(chars, rate)
        end
    end

    But sometimes we have a level that doesn’t have a rate, so we need to compute its rate from the majority-same rate of its children:

    def rate
        # This level has a known rate already
        return @rate if @rate
    
        groups =
            children_tree.each_value.to_a         # Immediate children
            .select { |x| x.rate }                # That have a rate
            .combination(2)                       # Pairwise combinations
            .select { |(x, y)| x.rate == y.rate } # That are the same
            .group_by { |x| x.first.rate }        # Group by rate
        unless groups.empty?
            # Whichever rate has the most entries in the children is our rate
            @rate = groups.max_by { |(_, v)| v.length }.first
            return @rate
        end
    
        # No rate here or below
        nil
    end

    This algorithm is naturally recursive on the tree, so even if the immediate children don’t have a rate, they will compute one from their children, and so on.  And finally a traversal turns this all back into the flat list we want to store:

    def each
        if rate
            # Find the rate of our parent in the tree,
            # possibly computed in part by asking us
            up = parent
            while up
                break if up.rate
                up = up.parent
            end
    
            # Add our prefix and rate to the list unless parent has it covered
            yield [to_s, rate] unless up&.rate == rate
        end
    
        # Add rates from children also
        children_tree.each_value do |child|
            child.each { |x| yield x }
        end
    end

    This (with rounding rules, etc) cut the list from our original of 59881 down to 4818.  You can browse the result .  It’s not as short as I was hoping for, but many destinations are manageable now, and thanks to a little bit of Computer Science we can tweak it in the future and just rerun this quick script.
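Putting the pieces together, the whole approach can be sketched as a small self-contained script. This is a simplification of the post's code: no gem, no rounding rules, and the `PrefixTrie` class name and its methods are invented here; a node's effective rate is its explicit rate, or the rate shared by at least two of its children (the "majority-same rate" idea above).

```ruby
# Self-contained sketch of the trie-based prefix/rate collapsing.
# Simplified and with invented names; not the production code.
class PrefixTrie
  attr_accessor :rate
  attr_reader :children

  def initialize
    @children = {}
    @rate = nil # explicit rate for this exact prefix, if any
  end

  # Walk down digit by digit, creating nodes as needed.
  def insert(prefix, rate)
    node = self
    prefix.each_char { |c| node = (node.children[c] ||= PrefixTrie.new) }
    node.rate = rate
  end

  # Explicit rate, or the most common effective rate among children,
  # provided at least two children share it.
  def effective_rate
    return @rate if @rate
    counts = @children.values.map(&:effective_rate).compact.tally
    best = counts.max_by { |_, n| n }
    best && best[1] >= 2 ? best[0] : nil
  end

  # Emit [prefix, rate] pairs, skipping nodes whose rate is already
  # covered by the nearest ancestor that has one.
  def flatten(prefix = "", inherited = nil, out = [])
    mine = effective_rate
    out << [prefix, mine] if mine && mine != inherited
    @children.sort.each do |c, child|
      child.flatten(prefix + c, mine || inherited, out)
    end
    out
  end
end

trie = PrefixTrie.new
trie.insert("00", 0.01)
("0010".."0019").each { |p| trie.insert(p, 0.02) }
trie.insert("0015", 0.03)
trie.flatten # => [["00", 0.01], ["001", 0.02], ["0015", 0.03]]
```

On the worked example from earlier, the ten overlapping +001x prefixes collapse to +001 at $0.02 while the +0015 exception survives, which is exactly the output the post asks for.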

    • wifi_tethering open_in_new

      This post is public

      blog.jmp.chat /b/2022-computing-call-rates