phone


      XMPP Providers: xmpp.earth

      news.movim.eu / PlanetJabber · Saturday, 30 March, 2024 - 00:00

    Available since Aug 1, 2022 · Website: EN
    Service
    Cost: Free of charge
    Legal notice: EN
    Bus factor: 2 persons
    Organization: Non-governmental
    Server
    Server / Data locations: 🇩🇪 | 🇸🇪
    Professional hosting
    Green hosting
    Server software: unknown

    Account

    You can register on this provider directly via:
    An email address is not required for registration
    No CAPTCHA must be solved for registration
    You cannot reset your password
    Messages are stored for 365 days

    File Sharing (HTTP Upload)

    Allows sharing files up to 52 MB
    Shared files are stored up to 10240 MB
    Stores files for 7 days
    Compatibility / Security
    Compatibility 100
    Security (C2S) unknown
    Security (S2S) unknown
    Contact
    Email -
    Chat -
    Group chat EN

    Provider Category

    A

    Categories explained

    This provider offers a Provider File.

    Latest change: Feb 1, 2024 · Something changed?

    Providers are checked for updates on a daily basis.


      XMPP Providers: xmpp.is

      news.movim.eu / PlanetJabber · Saturday, 30 March, 2024 - 00:00

    Available since May 19, 2015 · Website: EN

    Alternative Addresses

    xmpp.chat xmpp.co xmpp.cx xmpp.fi xmpp.si xmpp.xyz
    Service
    Cost: Free of charge
    Legal notice: EN
    Bus factor: unknown
    Organization: Company
    Server
    Server / Data locations: 🇷🇴 | 🇺🇸
    Professional hosting
    Green hosting
    Server software: Prosody 0.12.4

    Account

    You can register on this provider via a web browser
    Register (EN)
    An email address is not required for registration
    No CAPTCHA must be solved for registration
    You can reset your password: EN
    Messages are stored for 30 days

    File Sharing (HTTP Upload)

    Allows sharing files up to 10 MB
    Shared files are stored up to an unlimited size
    Stores files for 7 days
    Compatibility / Security
    Compatibility 100
    Security (C2S) unknown
    Security (S2S) unknown
    Contact
    Email -
    Chat -
    Group chat -

    Provider Category

    D

    Categories explained
    Why not category "A"
    • No support via XMPP chat or email
    • Registration via XMPP apps is not supported (at any time)
    • Sharing files is not allowed up to a sufficiently large size

    This provider offers a Provider File.

    Latest change: Feb 3, 2024 · Something changed?

    Providers are checked for updates on a daily basis.


      XMPP Providers: xmpp.jp

      news.movim.eu / PlanetJabber · Saturday, 30 March, 2024 - 00:00 · 1 minute

    Listed since Jan 16, 2024 · Website: unknown
    Service
    Cost: unknown
    No legal notice available
    Bus factor: unknown
    Organization: unknown
    Server
    Server / Data location: unknown
    Professional hosting: unknown
    No green hosting
    Server software: XMPP.JP 1

    Account

    You can register on this provider via a web browser
    Register (EN)
    An email address is not required for registration
    No CAPTCHA must be solved for registration
    You cannot reset your password
    Messages are stored for an unknown time or less than 1 day

    File Sharing (HTTP Upload)

    Allows sharing files up to an unknown size or less than 1 MB
    Shared files are stored up to an unknown size or less than 1 MB
    Stores files for an unknown time or less than 1 day
    Compatibility / Security
    Compatibility 100
    Security (C2S) unknown
    Security (S2S) unknown
    Contact
    Email EN
    Chat -
    Group chat -

    Provider Category

    D

    Categories explained
    Why not category "A"
    • Not free of charge or unknown
    • Registration via XMPP apps is not supported (at any time)
    • Legal notice is missing
    • Sharing files is allowed up to an unknown size or less than 1 MB
    • Shared files are stored for an unknown time or less than 1 day
    • Shared files are stored up to an unknown size or less than 1 MB
    • Messages are stored for an unknown time or less than 1 day
    • No professional hosting or unknown
    • Server location is unknown
    • Provider is too young or not listed long enough

    This provider does not offer a Provider File.

    Latest change: Feb 4, 2024 · Something changed?

    Providers are checked for updates on a daily basis.


      XMPP Providers: yax.im

      news.movim.eu / PlanetJabber · Saturday, 30 March, 2024 - 00:00

    Available since Nov 17, 2013 · Website: EN
    Service
    Cost: Free of charge
    Legal notice: EN
    Bus factor: 3 persons
    Organization: Non-governmental
    Server
    Server / Data location: 🇩🇪
    Professional hosting
    No green hosting
    Server software: Prosody 0.12 nightly build 212 (2023-10-27, 24070d47a6e7)

    Account

    You can register on this provider directly via:
    An email address is not required for registration
    No CAPTCHA must be solved for registration
    You cannot reset your password
    Messages are stored for 14 days

    File Sharing (HTTP Upload)

    Allows sharing files up to 80 MB
    Shared files are stored up to 500 MB
    Stores files for 30 days
    Compatibility / Security
    Compatibility 100
    Security (C2S) unknown
    Security (S2S) unknown
    Contact
    Email -
    Chat -
    Group chat -

    Provider Category

    D

    Categories explained
    Why not category "A"
    • No support via XMPP chat or email

    This provider offers a Provider File.

    Latest change: Feb 12, 2024 · Something changed?

    Providers are checked for updates on a daily basis.


      XMPP Providers: yourdata.forsale

      news.movim.eu / PlanetJabber · Saturday, 30 March, 2024 - 00:00 · 1 minute

    Listed since Jan 9, 2024 · Website: unknown
    Service
    Cost: unknown
    No legal notice available
    Bus factor: unknown
    Organization: unknown
    Server
    Server / Data location: unknown
    Professional hosting: unknown
    No green hosting
    Server software: ejabberd 23.10

    Account

    You can register on this provider directly via:
    An email address is not required for registration
    A CAPTCHA must be solved for registration
    You cannot reset your password
    Messages are stored for an unknown time or less than 1 day

    File Sharing (HTTP Upload)

    Allows sharing files up to 40 MB
    Shared files are stored up to an unknown size or less than 1 MB
    Stores files for an unknown time or less than 1 day
    Compatibility / Security
    Compatibility 100
    Security (C2S) unknown
    Security (S2S) unknown
    Contact
    Email -
    Chat -
    Group chat -

    Provider Category

    D

    Categories explained
    Why not category "A"

    This provider does not offer a Provider File.

    Latest change: Jan 30, 2024 · Something changed?

    Providers are checked for updates on a daily basis.

    This post is public: providers.xmpp.net/provider/yourdata.forsale/


      Erlang Solutions: 5 Erlang and Elixir use cases in FinTech

      news.movim.eu / PlanetJabber · Thursday, 28 March, 2024 - 10:34 · 3 minutes

    Erlang, Elixir and other programming languages running on the BEAM virtual machine are powering some of the world’s biggest and most productive financial services systems.

    In this post, we’ll be examining five (of many) use cases, showcasing the power and versatility of these languages and how they’re actively revolutionising the financial services sector.

    Vocalink and Erlang

    For: Ultra-reliable transaction delivery

    Vocalink has been a driving force around the world in the evolution of real-time payments for over a decade. These real-time solutions sit at the centre of the global payments infrastructure and are designed to ‘run on their own rails’ at a massive scale. Their systems are relied on by financial institutions, corporations and governments to provide high-availability and resilient payment solutions.


    Erlang is used for immediate payment switches by many companies facilitating instant bank transfers and bill payment services. Erlang, Mnesia and RabbitMQ are used for Vocalink’s global instant payment systems (IPS) which enable real-time payments in Singapore, Thailand, the US and other countries.

    The Vocalink IPS is now in use by The Clearing House – the banking association and payments company owned by the largest commercial banks in the US. It has also been selected as the payment technology for the innovative P27 Scandinavian cross-border payment initiative.

    The ability to securely and reliably handle payment requests at scale comes thanks to Erlang, which, being originally designed for telecoms switching, is ideal for systems that need to be always on and always processing.

    Solaris and Erlang/Elixir

    For: Fast and agile development

    Solaris is a Berlin-based FinTech company, making waves at a pace rarely seen before in the financial industry. Their Banking-as-a-Service platform enables businesses to offer their financial products via Elixir-powered APIs.

    It took Solaris less than three years to build the platform, scale up a team, and raise almost €100m in funding.

    Their full banking license enables companies to offer their financial products. Partners can access Solaris modules in the fields of e-money, instant credit, lending, and digital banking as well as services from third-party providers integrated with the platform via an API.

    Goldman Sachs and Erlang

    For: Creating system reliability for application innovation

    Goldman Sachs leverages hundreds of applications, all communicating with each other. The Data Management and Distribution group provides messaging middleware services to the firm’s ecosystem. RabbitMQ, built in Erlang, is a key part of the messaging portfolio at Goldman Sachs.

    The Erlang programming language is being used by Goldman Sachs in part of its hedge fund trading platform. Erlang is used as part of the real-time monitoring solution for this distributed trading system enabling changes to be made rapidly in response to market conditions.

    RabbitMQ is a free, Open Source message broker that is highly available, fault-tolerant and scalable. As a system middleware layer, it enables different services in your application to communicate with each other optimally. The key value of using RabbitMQ at Goldman Sachs has been the freedom it has afforded its developers to innovate with minimal friction. This freedom helps Goldman Sachs to be one of the traditional financial services firms recognised as keeping pace with the most exciting FinTechs.

    SumUp and Erlang/Elixir

    For: 24/7 system availability and fault-tolerance

    SumUp is the leading mobile point-of-sale (mPOS) company in Europe, enabling hundreds of thousands of small businesses in countries worldwide to get paid. They used Erlang to build their payment service from scratch.

    They picked Erlang as the ‘right tool for the job’ because card processing is an industry requiring backend systems of a certain type, where availability and fault-tolerance are a must, and Erlang boasts a battle-proven track record of delivering this in telecoms and FinTech.

    Beyond the original hardware, mobile, and web apps, SumUp has also developed a suite of APIs and SDKs for integrating SumUp payment into other apps and services.

    Erlang, Elixir and FinTech

    As you can see, Erlang, Elixir and other BEAM VM-based technologies are being used in production across the FinTech landscape – this includes private and public blockchains, payment and credit card gateways, banking APIs, and more traditional infrastructure management.

    It is easy to build prototypes with Erlang, which is why it’s a great choice for fast-moving FinTech startups. The Elixir programming language combines the best of Erlang – the runtime, the concurrency model, and fault tolerance – with powerful and battle-tested frameworks.

    There are many more use cases for Erlang and other BEAM VM technologies, from the biggest investment banks in the world to nimble FinTech challengers, which we’ll continue to explore.

    The post 5 Erlang and Elixir use cases in FinTech appeared first on Erlang Solutions .

    This post is public: www.erlang-solutions.com/blog/5-erlang-and-elixir-use-cases-in-fintech/


      Erlang Solutions: Guess Less with Erlang Doctor

      news.movim.eu / PlanetJabber · Monday, 18 March, 2024 - 16:11 · 13 minutes

    BEAM languages, such as Erlang and Elixir, offer a powerful tracing mechanism, and Erlang Doctor is built on top of it. It stores function calls and messages in an ETS table, which lowers the impact on the traced system, and enables querying and analysis of the collected traces. Being simple, always available and easy to use, it encourages you to pragmatically investigate system logic rather than guess about the reason for its behaviour.
    This blog post is based on a talk I presented at the FOSDEM 2024 conference.

    Introduction

    It is tough to figure out why a piece of code is failing, or how unknown software is working. When confronted with an error or other unusual system behaviour, we might search for the reason in the code, but it is often unclear what to look for, and tools like grep can give a large number of results. This means that there is some guessing involved, and the less you know the code, the less chance you have of guessing correctly. BEAM languages such as Erlang and Elixir include a tracing mechanism, which is a building block for tools like dbg, recon or redbug. They let you set up tracing for specific functions, capture the calls, and print them to the console or to a file. The diagram below shows the steps of such a typical tracing activity, which could be called ad-hoc logging, because it is like enabling logging for particular functions without the need to add log statements to the code.

    The first step is to choose the function (or other events) to trace, and it is the most difficult one, because usually, we don’t know where to start – for example, all we might know is that there is no response for a request. This means that the collected traces (usually in text format) often contain no relevant information, and the process needs to be repeated for a different function. A possible way of scaling this approach is to trace more functions at once, but this would result in two issues:

    1. Traces are like logs, which means that it is very easy to get overwhelmed with the amount of data. It is possible to perform a text search, but any further processing would require data parsing.
    2. The amount of data might become so large that either structures like function arguments, return values and message contents become truncated, or the messages end up queuing because of the I/O bottleneck.

    The exact limit of this approach depends on the individual case, but usually, a rule of thumb is that you can trace one typical module, and collect up to a few thousand traces. This is not enough for many applications, e.g. if the traced behaviour is a flaky test – especially if it fails rarely, or if the impact of trace collection makes it irreproducible.
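    As an illustration of the classic workflow described above, a minimal dbg session might look like this (my_module and my_function are placeholder names):

    1> dbg:tracer().                       % start the default tracer, printing to the console
    2> dbg:p(all, c).                      % enable call tracing for all processes
    3> dbg:tpl(my_module, my_function, x). % trace calls and returns (built-in "x" match spec)
    4> dbg:stop().                         % stop the tracer when done

    Every matching call is printed as it happens, which is exactly the I/O pressure discussed above.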

    Tracing with Erlang Doctor

    Erlang Doctor is yet another tool built on top of the Erlang tracer, but it has an important advantage – by storing the traces in an ETS table, it reduces the impact on the traced system (by eliminating costly I/O operations), while opening up the possibility of further processing and analysis of the collected traces.


    Being no longer limited by the amount of produced text, it scales up to millions of collected traces, and the first limit you might hit is the system memory. Usually it is possible to trace all modules in an application (or even a few applications) at once, unless it is under heavy load. Thanks to the clear separation between data acquisition and analysis, this approach can be called ad-hoc instrumentation rather than logging. The whole process has to be repeated only in rare situations, e.g. if the wrong application was traced. Of course, tracing production nodes is always risky and not recommended, unless very strict limits are set up in Erlang Doctor.

    Getting Started

    Erlang Doctor is available at https://github.com/chrzaszcz/erlang_doctor. For Elixir, there is https://github.com/chrzaszcz/ex_doctor, which is a minimal wrapper around Erlang Doctor. Both tools have Hex packages (erlang_doctor, ex_doctor). You have a few options for installation and running, depending on your use case:

    1. If you want it in your Erlang/Elixir shell right now, use the “firefighting snippets” provided in the Hex or GitHub docs. Because Erlang Doctor is just one module (and ExDoctor is two), you can simply download, compile, load and start the tool with a one-liner.
    2. For development, it is best to have it always at hand by initialising it in your ~/.erlang or ~/.iex.exs files. This way it will be available in all your interactive shells, e.g. rebar3 shell or iex -S mix.
    3. For easy access in your release, you can include it as a dependency of your project.
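    For option 2, the init file only needs to put the compiled tool on the code path; a minimal ~/.erlang sketch, assuming a locally compiled clone at a placeholder path, could be:

    %% ~/.erlang -- make Erlang Doctor available in every interactive shell
    code:add_patha("/path/to/erlang_doctor/_build/default/lib/erlang_doctor/ebin").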

    Basic usage

    The following examples are in Erlang, and you can run them yourself – just clone erlang_doctor, compile it, and execute rebar3 as test shell. Detailed examples for both Erlang and Elixir are provided in the Hex Docs (erlang_doctor, ex_doctor). The first step is to start the tool:

    1> tr:start().
    {ok,<0.86.0>}
    

    There is also tr:start/1 with additional options. For example, tr:start(#{limit => 10000}) would stop tracing when there are 10 000 traces in the ETS table, which provides a safety valve against memory consumption.

    Trace collection

    Having started the Erlang Doctor, we can now trace selected modules – here we are using a test suite from Erlang Doctor itself:

    2> tr:trace([tr_SUITE]).
    ok
    

    The tr:trace/1 function accepts a list of modules or {Module, Function, Arity} tuples. Alternatively, you can provide a map of options to trace specific processes or to enable message tracing. You can also trace entire applications, e.g. tr:trace_app(your_app) or tr:trace_apps([app1, app2]).
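    As a sketch of the map form (the exact option names should be checked in the Hex docs; the keys below are assumptions based on the description above), tracing one module for the current process with message tracing enabled might look like:

    > tr:trace(#{modules => [tr_SUITE], pids => [self()], msg => all}).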

    Let’s trace the following function call. It calculates the factorial recursively, and sleeps for 1 ms before each step:

    3> tr_SUITE:sleepy_factorial(3).
    6
    

    It’s a good practice to stop tracing as soon as you don’t need it anymore:

    4> tr:stop_tracing().
    ok
    

    Trace analysis

    The collected traces are accumulated in an ETS table (default name: trace). They are stored as tr records, and to display them, we need to load the record definitions:

    5> rr(tr).
    [node,tr]
    

    If you don’t have many traces, you can just list all of them:

    6> tr:select().
    [#tr{index = 1, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
         data = [3], ts = 1559134178217371, info = no_info},
     #tr{index = 2, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
         data = [2], ts = 1559134178219102, info = no_info},
     #tr{index = 3, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
         data = [1], ts = 1559134178221192, info = no_info},
     #tr{index = 4, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
         data = [0], ts = 1559134178223107, info = no_info},
     #tr{index = 5, pid = <0.175.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
         data = 1, ts = 1559134178225146, info = no_info},
     #tr{index = 6, pid = <0.175.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
         data = 1, ts = 1559134178225153, info = no_info},
     #tr{index = 7, pid = <0.175.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
         data = 2, ts = 1559134178225155, info = no_info},
     #tr{index = 8, pid = <0.175.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
         data = 6, ts = 1559134178225156, info = no_info}]
    
    

    The index field is auto-incremented, and data contains an argument list or a return value, while ts is a timestamp in microseconds. To select specific fields of matching records, use tr:select/1, providing a selector function, which is passed to ets:fun2ms/1.

    7> tr:select(fun(#tr{event = call, data = [N]}) -> N end).
    [3, 2, 1, 0]
    

    You can use tr:select/2 to further filter the results by searching for a specific term in data. In this simple example, we search for the number 2:

    8> tr:select(fun(T) -> T end, 2).
    [#tr{index = 2, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
         data = [2], ts = 1705475521744690, info = no_info},
     #tr{index = 7, pid = <0.395.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
         data = 2, ts = 1705475521750454, info = no_info}]
    

    This is powerful, as it searches all nested tuples, lists and maps, allowing you to search for arbitrary terms. For example, even if your code outputs something like “Unknown error”, you can pinpoint the originating function call. There is a similar function tr:filter/1, which filters all traces with a predicate function (this time not limited by fun2ms). In combination with tr:contains_data/2, you can get the same result as above:

    9> Traces = tr:filter(fun(T) -> tr:contains_data(2, T) end).
    [#tr{index = 2, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
         data = [2], ts = 1705475521744690, info = no_info},
     #tr{index = 7, pid = <0.395.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
         data = 2, ts = 1705475521750454, info = no_info}]
    
    


    There is also tr:filter/2, which can be used to search in a different table than the current one – or in a list. As an example, let’s get only function calls from Traces returned by the previous call:

    10> tr:filter(fun(#tr{event = call}) -> true end, Traces).
    [#tr{index = 2, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
         data = [2], ts = 1705475521744690, info = no_info}]

    To find the tracebacks (stack traces) for matching traces, use tr:tracebacks/1 :

    11> tr:tracebacks(fun(#tr{data = 1}) -> true end).
    [[#tr{index = 3, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
          data = [1], ts = 1705475521746470, info = no_info},
      #tr{index = 2, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
          data = [2], ts = 1705475521744690, info = no_info},
      #tr{index = 1, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
          data = [3], ts = 1705475521743239, info = no_info}]]

    Note that by specifying data = 1, we are only matching return traces, as call traces always have a list in data. Only one traceback is returned, starting with the call that returned 1. What follows is the stack trace for this call. There was a second matching traceback, but it wasn’t shown, because whenever two tracebacks overlap, the longer one is skipped. You can change this with tr:tracebacks/2, providing #{output => all} as the second argument. There are more options available, allowing you to specify the queried table/list, the output format, and the maximum amount of data returned. If you only need one traceback, you can call tr:traceback/1 or tr:traceback/2. Additionally, it is possible to pass a tr record (or an index) directly to tr:traceback/1.
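    For example, to keep the overlapping (shorter) tracebacks as well, the call from above becomes:

    > tr:tracebacks(fun(#tr{data = 1}) -> true end, #{output => all}).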


    To get a list of traces between each matching call and the corresponding return, use tr:ranges/1 :

    12> tr:ranges(fun(#tr{data = [1]}) -> true end).
    [[#tr{index = 3, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
          data = [1], ts = 1705475521746470, info = no_info},
      #tr{index = 4, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
          data = [0], ts = 1705475521748499, info = no_info},
      #tr{index = 5, pid = <0.395.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
          data = 1, ts = 1705475521750451, info = no_info},
      #tr{index = 6, pid = <0.395.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
          data = 1, ts = 1705475521750453, info = no_info}]]

    There is also tr:ranges/2 with options, allowing you to set the queried table/list and to limit the depth of nested traces. In particular, you can use #{max_depth => 1} to get only the top-level call and the corresponding return. If you only need the first range, use tr:range/1 or tr:range/2.
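    For instance, restricting the previous query to the top-level call and its return:

    > tr:ranges(fun(#tr{data = [1]}) -> true end, #{max_depth => 1}).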

    Last but not least, you can get a particular trace record with tr:lookup/1, and replay a particular function call with tr:do/1:

    13> T = tr:lookup(1).
    #tr{index = 1, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
        data = [3], ts = 1559134178217371, info = no_info}
    14> tr:do(T).
    6
    

    This is useful, e.g. for checking if a bug has been fixed without running the whole test suite, or for reproducing an issue while capturing further traces. This function can also be called with an index as the argument: tr:do(1).

    Quick profiling

    Although there are dedicated profiling tools for Erlang, such as fprof and eprof, you can use Erlang Doctor to get a hint about possible bottlenecks and redundancies in your system with function call statistics. One of the advantages is that you already have the traces collected from your system, so you don’t need to trace again. Furthermore, tracing only specific modules gives you much simpler output that you can easily read and process in your Erlang shell.

    Call statistics

    To get statistics of function call times, you can use tr:call_stat/1, providing a function that returns a key by which the traces will be aggregated. The simplest use case is to get the total number of calls and their time. To do this, we group all calls under one key, e.g. total:

    15> tr:call_stat(fun(_) -> total end).
    #{total => {4,7216,7216}}

    The tuple {4,7216,7216} means that there were four calls in total with an accumulated time of 7216 microseconds, and the “own” time was also 7216 μs – this is the case because we have aggregated all traced functions. To see different values, let’s group the stats by the function argument:

    16> tr:call_stat(fun(#tr{data = [N]}) -> N end).
    #{0 => {1,1952,1952}, 1 => {1,3983,2031}, 2 => {1,5764,1781}, 3 => {1,7216,1452}}

    Now it is apparent that although sleepy_factorial(3) took 7216 μs, only 1452 μs were spent in the function itself, and the remaining 5764 μs were spent in the nested calls. To filter out unwanted function calls, just add a guard:

    17> tr:call_stat(fun(#tr{data = [N]}) when N < 3 -> N end).
    #{0 => {1,1952,1952}, 1 => {1,3983,2031}, 2 => {1,5764,1781}}

    There are additional utilities: tr:sorted_call_stat/1 and tr:print_sorted_call_stat/2, which give you different output formats.
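    A quick sketch of both, with the same key function as above (I assume here that the printer’s second argument is the number of rows to display; see the docs for details):

    > tr:sorted_call_stat(fun(#tr{data = [N]}) -> N end).
    > tr:print_sorted_call_stat(fun(#tr{data = [N]}) -> N end, 3).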

    Call tree statistics

    If your code performs the same operations very often, it might be possible to optimise it. To detect such redundancies, you can use tr:top_call_trees/0, which detects complete call trees that repeat several times, where corresponding function calls and returns have the same arguments and return values, respectively. As an example, let’s trace a call to a function that calculates the 4th element of the Fibonacci sequence recursively. The trace table should be empty, so let’s clean it up first:

    18> tr:clean().
    ok
    19> tr:trace([tr_SUITE]).
    ok
    20> tr_SUITE:fib(4).
    3
    21> tr:stop_tracing().
    ok
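
    For reference, tr_SUITE:fib/1 is presumably the textbook recursive definition, which makes the redundancy easy to see:

    fib(0) -> 0;
    fib(1) -> 1;
    fib(N) when N > 1 -> fib(N - 1) + fib(N - 2).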
    

    Now it is possible to print the most time-consuming call trees that repeat at least twice:

    22> tr:top_call_trees().
    [{13, 2, #node{module = tr_SUITE,function = fib, args = [2],
                   children = [#node{module = tr_SUITE, function = fib, args = [1],
                                     children = [], result = {return,1}},
                               #node{module = tr_SUITE, function = fib, args = [0],
                                     children = [], result = {return,0}}],
                   result = {return,1}}},
     {5, 3, #node{module = tr_SUITE,function = fib, args = [1],
                  children = [], result = {return,1}}}]

    The resulting list contains tuples {Time, Count, Tree}, where Time is the accumulated time (in microseconds) spent in the Tree, and Count is the number of times the tree repeated. The list is sorted by Time, descending. In the example, fib(2) was called twice, which already shows that the recursive implementation is suboptimal. You can see the two repeating subtrees in the call tree diagram.

    The second listed tree consists only of fib(1), and it was called three times. There is also tr:top_call_trees/1 with options, allowing customisation of the output format – you can set the minimum number of repetitions, the maximum number of presented trees, etc.

    ETS table manipulation

    To get the current table name, use tr:tab/0:

    23> tr:tab().
    trace
    

    To switch to a new table, use tr:set_tab/1. The table need not exist.

    24> tr:set_tab(tmp).
    ok
    

    Now you can collect traces to the new table without changing the original one. You can dump the current table to a file with tr:dump/1 – let’s dump the tmp table:

    25> tr:dump("tmp.ets").
    ok
    

    In a new Erlang session, you can load the data with tr:load/1. This will set the current table name to tmp. Finally, you can remove all traces from the ETS table with tr:clean/0. To stop Erlang Doctor, just call tr:stop/0.
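    Restoring the dump in a fresh session is then just a matter of (return values omitted):

    1> tr:start().
    2> tr:load("tmp.ets").
    3> tr:select().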

    Summary

    Now you have an additional utility in your Erlang/Elixir toolbox, which you can try out whenever you need to debug an issue or learn about unknown or unexpected system behaviour. Just remember to be extremely cautious when using it in a production environment. If you have any feedback, please provide it on GitHub, and if you like the tool, consider giving it a star.

    The post Guess Less with Erlang Doctor appeared first on Erlang Solutions .


      Ignite Realtime Blog: Openfire inVerse plugin version 10.1.7.1 released!

      news.movim.eu / PlanetJabber · Friday, 15 March, 2024 - 12:55

    We have made available a new version of the inVerse plugin for Openfire! This plugin allows you to easily deploy the third-party Converse client in Openfire. In this release, the version of the client that is bundled in the plugin is updated to 10.1.7.

    The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly from the plugin’s archive page.

    For other release announcements and news, follow us on Mastodon or X.



      Erlang Solutions: gen_statem Unveiled

      news.movim.eu / PlanetJabber · Wednesday, 13 March, 2024 - 11:32 · 12 minutes

    gen_statem and protocols

    This blog post is a deep dive into some of the concepts discussed in my recent conference talk at FOSDEM . The presentation explored some basic theoretical concepts of Finite State Machines, as well as some special powers of Erlang’s gen_statem in the context of protocols and event-driven development. Building upon that insight, this post delves into harnessing the capabilities of the gen_statem behaviour. Let’s jump straight into it!

    Protocols

    The word protocol comes from the Greek “πρωτόκολλον”, from πρῶτος (prôtos, “first”) + κόλλα (kólla, “glue”), used in Byzantine Greek for the first sheet of a papyrus roll, bearing the official authentication and date of manufacture of the papyrus 1. Over time, the word describing the first page became a synecdoche for the entire document.

    The word protocol was then used primarily to refer to diplomatic or political treaties, until the field of Information Technology overloaded it to describe “treaties” between machines which, as in diplomacy, govern the manner of communication between two entities . As the entities communicate, a given entity receives messages describing the interactions that its peers are establishing with it, creating a model where an entity reacts to events .
    In this field of technology, much of a programmer’s job is implementing such communication protocols , which react to events . The protocol defines the valid messages, their valid order, and any side effects an event might have. You know many such protocols: TCP, TLS, HTTP, or XMPP, just to name some good old classics.

    The event queue

    As a BEAM programmer, implementing such an event-driven program is an archetypical pattern you’re well familiar with: you have a process with a mailbox, and the process reacts to its messages one by one. It is the actor model in a nutshell: in response to a message it receives, an actor can:

    • send a finite number of messages to other Actors;
    • create a finite number of new Actors;
    • designate the behaviour to be used for the next message it receives.

    It is ubiquitous to implement such actors as a gen_server, but pay attention to the last point: designate the behaviour to be used for the next message it receives . When a given event (a message) carries information about how the next event should be processed, there is implicitly a transformation of the process state. What you have is a State Machine in disguise.

    Finite State Machines

    Finite State Machines (FSMs for short) are defined by a function 𝛿 from an input state and an input event to an output state, to which the function can be applied again. This is the idea of the actor receiving a message and designating the behaviour for the next one: it chooses the state that will be input together with the next event.

    FSMs can also define output; in that case they are called Finite State Transducers (FSTs for short, though often simplified to FSMs too). Their definition adds another alphabet of output symbols, and the function 𝛿 that defines the machine returns the next state together with the output symbol for the current input.
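    As a toy illustration (the module, states and symbols are invented for this sketch), such a transducer is just a pure function from state and input to next state and output:

```erlang
-module(toy_fst).
-export([delta/2, run/2]).

%% delta(State, Input) -> {NextState, Output}.
delta(off, press) -> {on,  light_on};
delta(on,  press) -> {off, light_off}.

%% Fold a list of inputs through the machine, collecting the outputs.
run(State, Inputs) ->
    lists:mapfoldl(fun(Input, S) ->
                           {Next, Out} = delta(S, Input),
                           {Out, Next}
                   end, State, Inputs).
```

    For example, toy_fst:run(off, [press, press]) yields {[light_on, light_off], off}: the list of outputs together with the final state.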

    gen_statem

    When the function’s input is the current state and an input symbol, and the output is a new state and a new output symbol, we have a Mealy machine 2 . And when the output alphabet of one machine is the input alphabet of another, we can then intuitively compose them. This is the pattern that gen_statem implements.
    gen_statem has three important features that are easily overlooked, taking the best of both pure Erlang programming and state machine modelling: it can simulate selective receives, it offers an extended mailbox, and it allows complex data structures as the FSM state.

    Selective receives

    Imagine the archetypical example of an FSM: a light switch. Say the switch is digital and translates requests to a fancy light-set using an analogue cable protocol. The code you’ll need to implement will look something like the following:

    handle_call(on, _From, {off, Light}) ->
    	on = request(on, Light),
    	{reply, on, {on, Light}};
    handle_call(off, _From, {on, Light}) ->
    	off = request(off, Light),
    	{reply, off, {off, Light}};
    handle_call(on, _From, {on, Light}) ->
    	{reply, on, {on, Light}};
    handle_call(off, _From, {off, Light}) ->
    	{reply, off, {off, Light}}.
    
    

    But now imagine the light request was to be asynchronous, now your code would look like the following:

    handle_call(on, From, {off, undefined, Light}) ->
    	Ref = request(on, Light),
    	{noreply, {off, {on, Ref, From}, Light}};
    handle_call(off, From, {on, undefined, Light}) ->
    	Ref = request(off, Light),
    	{noreply, {on, {off, Ref, From}, Light}};
    
    handle_call(off, _From, {on, {off, _, _}, Light} = State) ->
    	{reply, turning_off, State};  %% ???
    handle_call(on, _From, {off, {on, _, _}, Light} = State) ->
    	{reply, turning_on, State}; %% ???
    handle_call(off, _From, {off, {on, _, _}, Light} = State) ->
    	{reply, turning_on_wait, State};  %% ???
    handle_call(on, _From, {on, {off, _, _}, Light} = State) ->
    	{reply, turning_off_wait, State}; %% ???
    
    handle_info(Ref, {State, {Request, Ref, From}, Light}) ->
    	gen_server:reply(From, Request),
    	{noreply, {Request, undefined, Light}}.
    

    The problem is that now the order of events is not defined: the user requesting a switch and the light system announcing completion of a request can be reordered, so you need to handle these cases. Even when the switch and the light system had only two states each, you had to design and write four new cases: the number of new cases grows as the product of the number of cases on each side. And each case is a computation over the previous cases, effectively creating a user-level call stack.

    So we now try migrating the code to a properly explicit state machine, as follows:

    off({call, From}, off, {undefined, Light}) ->
    	{keep_state_and_data, [{reply, From, off}]};
    off({call, From}, on, {undefined, Light}) ->
    	Ref = request(on, Light),
    	{keep_state, {{Ref, From}, Light}, []};
    off({call, From}, _, _) ->
    	{keep_state_and_data, [postpone]};
    off(info, {Ref, Response}, {{Ref, From}, Light}) ->
    	{next_state, Response, {undefined, Light}, [{reply, From, Response}]}.
    
    on({call, From}, on, {undefined, Light}) ->
    	{keep_state_and_data, [{reply, From, on}]};
    on({call, From}, off, {undefined, Light}) ->
    	Ref = request(off, Light),
    	{keep_state, {{Ref, From}, Light}, []};
    on({call, From}, _, _) ->
    	{keep_state_and_data, [postpone]};
    on(info, {Ref, Response}, {{Ref, From}, Light}) ->
    	{next_state, Response, {undefined, Light}, [{reply, From, Response}]}.

    Now the key lies in postponing events: this is akin to Erlang’s selective receive clauses, where the mailbox is explored until a matching message is found. This way, events that arrive out of order can be handled once the order is right.

    This is an important difference between how we learn to program in pure Erlang, with the power of selective receives, where we choose which message to handle, and how we learn to program in OTP, where generic behaviours like gen_server force us to always handle the first message, albeit in different clauses depending on the semantics of the message (handle_cast , handle_call and handle_info). With the power to postpone a message, we effectively choose which message to handle without being constrained by code location.
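    For comparison, a pure-Erlang selective receive achieves the same effect by pattern-matching on the mailbox directly (a schematic sketch, not code from the post):

```erlang
%% Block until the light system's reply for this particular Ref arrives,
%% leaving any other messages (e.g. new user requests) untouched in the
%% mailbox, to be handled later.
await_response(Ref) ->
    receive
        {Ref, Response} ->
            Response
    after 5000 ->
            {error, timeout}
    end.
```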

    This section is really inspired by Ulf Wiger’s fantastic talk, Death by Accidental Complexity 3 , so if you know of the challenge he explained, this section hopefully serves as a solution for you.

    Complex Data Structures

    Much of this was explained in the previous blog post on state machines 4 : by using gen_statem ’s handle_event_function callback mode, apart from all the advantages explained in that post, we can also reduce the implementation of 𝛿 to a single function called handle_event . This allows the previous code to take advantage of a lot of code reuse; see the following equivalent state machine:

    handle_event({call, From}, State, State, {undefined, Light}) ->
    	{keep_state_and_data, [{reply, From, State}]};
    handle_event({call, From}, Request, State, {undefined, Light}) ->
    	Ref = request(Request, Light),
    	{keep_state, {{Ref, From}, Light}, []};
    handle_event({call, _}, _, _, _) ->
    	{keep_state_and_data, [postpone]};
    handle_event(info, {Ref, Response}, State, {{Ref, From}, Light}) ->
    	{next_state, Response, {undefined, Light}, [{reply, From, Response}]}.

    This section was described extensively in the previous blog post, so to learn more about it, please enjoy the read!

    An extended mailbox

    We saw that the function 𝛿 of the FSM in question is called when a new event is triggered. When implementing a protocol, this is modelled by messages to the actor’s mailbox. In a pure FSM, a message that has no meaning within a state would crash the process, but in practice, while the order of messages is not defined, it might be a valid computation to postpone them and process them once we reach the right state.

    This is what a selective receive would do, by exploring the mailbox and looking for the right message to handle for the current state. In OTP, the general practice is to leave the lower-level communication abstractions to the underlying language features, and code in a higher and more sequential style as defined by the generic behaviours: in gen_statem , we have an extended view of the FSM’s event queue.

    There are two more things gen_statem actions let us do: one is to insert ad-hoc events with the construct {next_event, EventType, EventContent}, and the other is to insert timeouts, which can be restarted automatically on any new event, on any state change, or not at all. These might seem like different event queues for our eventful state machine, alongside the process’s mailbox, but really there is only one queue, which we can see as an extended mailbox.
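    As a sketch, both kinds of insertion are expressed as actions in a callback’s return value (the states and event names here are illustrative):

```erlang
handle_event({call, From}, start, idle, Data) ->
    {next_state, working, Data,
     [{reply, From, ok},
      %% Processed before anything already in the mailbox:
      {next_event, internal, prepare},
      %% Fires if we stay in 'working' for 5 s; cancelled on state change:
      {state_timeout, 5000, give_up}]};
handle_event(internal, prepare, working, Data) ->
    {keep_state, Data};
handle_event(state_timeout, give_up, working, Data) ->
    {next_state, idle, Data}.
```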

    The mental picture is as follows: There is only one event queue, which is an extension of the process mailbox, and this queue has got three pointers:

    • A head pointing at the oldest event;
    • A current pointing at the next event to be processed;
    • A tail pointing at the youngest event.

    This model is meant to be practically identical to how the process mailbox is perceived 5 .

    • postpone causes the current position to move to the next younger event, so the previously current event is still in the queue, reachable from head .
    • Not postponing an event, i.e. consuming it, causes the event to be removed from the queue and the current position to move to the next younger event.
    • NewState =/= State causes the current position to be set to head , i.e. the oldest event.
    • next_event inserts event(s) at the current position, i.e. as just older than the previously current event.
    • {timeout, 0, Msg} inserts a timeout , Msg event after tail , i.e. as the new youngest received event.

    Let’s see the event queue in pictures:

    handle_event(Type1, Content1, State1, Data) ->
        {keep_state_and_data, [postpone]};

    When the first event to process is 1, after any necessary logic we might decide to postpone it. In such a case, the event remains in the queue, reachable from HEAD, but Current is moved to the next event in the queue, event 2.
    handle_event(Type1, Content1, State1, Data) ->
        {keep_state_and_data, [postpone]};
    ...
    handle_event(Type2, Content2, State1, Data) ->
        {next_state, State2};

    When handling event 2, after any necessary logic, we decide to transition to a new state. In this case, 2 is removed from the queue, as it has been processed, and Current is moved to HEAD, which points again to 1, as the state is now a new one.
    handle_event(Type1, Content1, State1, Data) ->
        {keep_state_and_data, [postpone]};
    ...
    handle_event(Type2, Content2, State1, Data) ->
        {next_state, State2};
    ...
    handle_event(Type1, Content1, State2, Data) ->
        {keep_state_and_data, [{next_event, TypeA, ContentA}]};

    After any necessary handling for 1, we now decide to insert a next_event called A. Then 1 is dropped from the queue, and A is inserted at the point where Current was pointing. HEAD is also updated to the next event after 1, which in this case is now A.
    handle_event(Type1, Content1, State1, Data) ->
        {keep_state_and_data, [postpone]};
    ...
    handle_event(Type2, Content2, State1, Data) ->
        {next_state, State2};
    ...
    handle_event(Type1, Content1, State2, Data) ->
        {keep_state_and_data, [{next_event, TypeA, ContentA}]};
    ...
    handle_event(TypeA, ContentA, State2, Data) ->
        {keep_state_and_data, [postpone]};

    Now we decide to postpone A, so Current is moved to the next event in the queue, 3.
    handle_event(Type1, Content1, State1, Data) ->
        {keep_state_and_data, [postpone]};
    ...
    handle_event(Type2, Content2, State1, Data) ->
        {next_state, State2};
    ...
    handle_event(Type1, Content1, State2, Data) ->
        {keep_state_and_data, [{next_event, TypeA, ContentA}]};
    ...
    handle_event(TypeA, ContentA, State2, Data) ->
        {keep_state_and_data, [postpone]};
    ...
    handle_event(Type3, Content3, State2, Data) ->
        keep_state_and_data;

    3 is processed normally and then dropped from the queue. No other event is inserted or postponed, so Current simply moves to the next event, 4.
    handle_event(Type1, Content1, State1, Data) ->
        {keep_state_and_data, [postpone]};
    ...
    handle_event(Type2, Content2, State1, Data) ->
        {next_state, State2};
    ...
    handle_event(Type1, Content1, State2, Data) ->
        {keep_state_and_data, [{next_event, TypeA, ContentA}]};
    ...
    handle_event(TypeA, ContentA, State2, Data) ->
        {keep_state_and_data, [postpone]};
    ...
    handle_event(Type3, Content3, State2, Data) ->
        keep_state_and_data;
    ...
    handle_event(Type4, Content4, State2, Data) ->
        {keep_state_and_data,
         [postpone, {next_event, TypeB, ContentB}]};

    And 4 is now postponed, and a new event B is inserted, so while HEAD still remains pointing at A, 4 is kept in the queue and Current will now point to the newly inserted event B.

    This section is in turn inspired by this comment on GitHub 6 .

    Conclusions

    We’ve seen how protocols are governances over the manner of communication between two entities, and how these governances define the ways messages are transmitted and processed, and how they relate to each other. We’ve seen how the third clause of the actor model dictates that an actor can designate the behaviour to be used for the next message it receives, how this essentially defines the 𝛿 function of a state machine, and how Erlang’s gen_statem behaviour is an FSM engine with a lot of power over the event queue and the state data structures.

    Do you have protocol implementations that have suffered from extensibility problems? Have you had to implement an exploding number of cases because events might arrive in any possible order? If you’ve suffered from death by accidental complexity, or your code hides state machines in disguise, or your testing isn’t comprehensive enough by default, the tricks and points of view in this post should help you get started, and we can always help you keep moving forward!

    1. πρωτόκολλον — Wiktionary
    2. Mealy Machine — Wikipedia
    3. Death by Accidental Complexity — InfoQ: https://www.infoq.com/presentations/Death-by-Accidental-Complexity/
    4. Reimplementing technical debt with state machines
    5. https://github.com/erlang/otp/pull/960#issuecomment-198141456
    6. https://github.com/erlang/otp/pull/960#issuecomment-198141456

    The post gen_statem Unveiled appeared first on Erlang Solutions .