
      Paul Schaub: Implementing Packet Sequence Validation using Pushdown Automata

      news.movim.eu / PlanetJabber • 26 October, 2022 • 6 minutes

    This is part 2 of a small series on verifying the validity of packet sequences using tools from theoretical computer science. Read part 1 here.

    In the previous blog post I discussed how a formal grammar can be transformed into a pushdown automaton in order to check if a sequence of packets or tokens is part of the language described by the grammar. In this post I will discuss how I implemented said automaton in Java in order to validate OpenPGP messages in PGPainless.

    In the meantime, I made some slight changes to the automaton and removed some superfluous states. My current design of the automaton looks as follows:

    If you compare this diagram to the previous iteration, you can see that I got rid of the states “Signed Message”, “One-Pass-Signed Message” and “Corresponding Signature”. Those were states which had ε-transitions to another state, so they were not really useful.

    For example, the state “One-Pass-Signed Message” would only be entered when the input “OPS” was read and ‘m’ could be popped from the stack. After that, there would only be a single applicable rule which would read no input, pop nothing from the stack and instead push back ‘m’. Therefore, these two rules could be combined into a single rule which reads input “OPS”, pops ‘m’ from the stack and immediately pushes it back onto it. This rule would leave the automaton in state “OpenPGP Message”. Both automata are equivalent.

    One more minor detail: Since I am using Bouncy Castle, I have to deal with some of its quirks. One of those being that BC bundles together encrypted session keys (PKESKs/SKESKs) with the actual encrypted data packets (SEIPD/SED). Therefore when implementing, we can further simplify the diagram by removing the SKESK|PKESK parts:

    Now, in order to implement this automaton in Java, I decided to define enums for the input and stack alphabets, as well as the states:

    public enum InputAlphabet {
        LiteralData,
        Signature,            // Sig
        OnePassSignature,     // OPS
        CompressedData,
        EncryptedData,        // SEIPD|SED
        EndOfSequence         // End of message/nested data
    }
    public enum StackAlphabet {
        msg,                 // m
        ops,                 // o
        terminus             // #
    }
    public enum State {
        OpenPgpMessage,
        LiteralMessage,
        CompressedMessage,
        EncryptedMessage,
        Valid
    }

    Note that there is no “Start” state, since we will simply initialize the automaton in state OpenPgpMessage, with ‘m#’ already put on the stack.

    We also need an exception class that we can throw when an OpenPGP packet is read where it is not allowed. Therefore I created a MalformedOpenPgpMessageException class.
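
    The post does not show that class itself; a minimal sketch of what it could look like (the real PGPainless class may differ, e.g. carry extra constructors or extend a different base class) is:

    public class MalformedOpenPgpMessageException extends RuntimeException {

        public MalformedOpenPgpMessageException() {
            super();
        }

        // convenience constructor carrying a human-readable reason
        public MalformedOpenPgpMessageException(String message) {
            super(message);
        }
    }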

    Now the first design of our automaton itself is pretty straightforward:

    public class PDA {
        private State state;
        private final Stack<StackAlphabet> stack = new Stack<>();
        
        public PDA() {
            state = State.OpenPgpMessage;    // initial state
            stack.push(terminus);            // push '#'
            stack.push(msg);                 // push 'm'
        }
    
        public void next(InputAlphabet input)
                throws MalformedOpenPgpMessageException {
            // TODO: handle the next input packet
        }
    
        StackAlphabet popStack() {
            if (stack.isEmpty()) {
                return null;
            }
            return stack.pop();
        }
    
        void pushStack(StackAlphabet item) {
            stack.push(item);
        }
    
        boolean isEmptyStack() {
            return stack.isEmpty();
        }
    
        public boolean isValid() {
            return state == State.Valid && isEmptyStack();
        }
    }

    As you can see, we initialize the automaton with a pre-populated stack and an initial state. The automaton's isValid() method only returns true if the automaton ended up in state “Valid” and the stack is empty.

    What's missing is an implementation of the transition rules. I found it most straightforward to implement those inside the State enum itself by defining a transition() method:

    public enum State {
    
        OpenPgpMessage {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                StackAlphabet stackItem = automaton.popStack();
                // in state "OpenPGP Message", every rule pops 'm' from the stack
                if (stackItem != msg) {
                    throw new MalformedOpenPgpMessageException();
                }
                switch (input) {
                    case LiteralData:
                        // Literal Packet,m/ε
                        return LiteralMessage;
                    case Signature:
                        // Sig,m/m
                        automaton.pushStack(msg);
                        return OpenPgpMessage;
                    case OnePassSignature:
                        // OPS,m/mo
                        automaton.pushStack(ops);
                        automaton.pushStack(msg);
                        return OpenPgpMessage;
                    case CompressedData:
                        // Compressed Data,m/ε
                        return CompressedMessage;
                    case EncryptedData:
                        // SEIPD|SED,m/ε
                        return EncryptedMessage;
                    case EndOfSequence:
                    default:
                        // No transition
                        throw new MalformedOpenPgpMessageException();
                }
            }
        },
    
        LiteralMessage {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                StackAlphabet stackItem = automaton.popStack();
                switch(input) {
                    case Signature:
                        if (stackItem == ops) {
                            // Sig,o/ε
                            return LiteralMessage;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    case EndOfSequence:
                        if (stackItem == terminus && automaton.isEmptyStack()) {
                            // ε,#/ε
                            return Valid;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    default:
                        throw new MalformedOpenPgpMessageException();
                }
            }
        },
    
        CompressedMessage {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                StackAlphabet stackItem = automaton.popStack();
                switch(input) {
                    case Signature:
                        if (stackItem == ops) {
                            // Sig,o/ε
                            return CompressedMessage;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    case EndOfSequence:
                        if (stackItem == terminus && automaton.isEmptyStack()) {
                            // ε,#/ε
                            return Valid;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    default:
                        throw new MalformedOpenPgpMessageException();
                }
            }
        },
    
        EncryptedMessage {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                StackAlphabet stackItem = automaton.popStack();
                switch(input) {
                    case Signature:
                        if (stackItem == ops) {
                            // Sig,o/ε
                            return EncryptedMessage;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    case EndOfSequence:
                        if (stackItem == terminus && automaton.isEmptyStack()) {
                            // ε,#/ε
                            return Valid;
                        } else {
                            throw new MalformedOpenPgpMessageException();
                        }
                    default:
                        throw new MalformedOpenPgpMessageException();
                }
            }
        },
    
        Valid {
            @Override
            public State transition(InputAlphabet input, PDA automaton)
                    throws MalformedOpenPgpMessageException {
                // Cannot transition out of Valid state
                throw new MalformedOpenPgpMessageException();
            }
        }
        ;
    
        abstract State transition(InputAlphabet input, PDA automaton)
                throws MalformedOpenPgpMessageException;
    }

    It might make sense to define the transitions in an external class to allow for different grammars and to remove the dependency on the PDA class, but I do not care about this for now, so I’m fine with it.

    Now every State has a transition() method, which takes an input symbol and the automaton itself (for access to the stack) and either returns the new state, or throws an exception in case of an illegal token.

    Next, we need to modify our PDA class, so that the new state is saved:

    public class PDA {
        [...]
    
        public void next(InputAlphabet input)
                throws MalformedOpenPgpMessageException {
            state = state.transition(input, this);
        }
    }

    Now we are able to verify simple packet sequences by feeding them one-by-one to the automaton:

    // LIT EOS
    PDA pda = new PDA();
    pda.next(LiteralData);
    pda.next(EndOfSequence);
    assertTrue(pda.isValid());
    
    // OPS LIT SIG EOS
    pda = new PDA();
    pda.next(OnePassSignature);
    pda.next(LiteralData);
    pda.next(Signature);
    pda.next(EndOfSequence);
    assertTrue(pda.isValid());
    
    // COMP EOS
    pda = new PDA();
    pda.next(CompressedData);
    pda.next(EndOfSequence);
    assertTrue(pda.isValid());

    You might say “Hold up! The last example is a clear violation of the syntax! A compressed data packet alone does not make a valid OpenPGP message!”.

    And you are right. A compressed data packet is only a valid OpenPGP message if its decompressed contents also represent a valid OpenPGP message. Therefore, when using our PDA class, we need to take care of packets with nested streams separately. In my implementation, I created an OpenPgpMessageInputStream, which, in addition to consuming the packet stream and handling the actual decryption, decompression etc., also takes care of handling nested PDAs. I will not go into too much detail, but the following code should give a good idea of the architecture:

    public class OpenPgpMessageInputStream {
        private final PDA pda = new PDA();
        private BCPGInputStream pgpIn = ...; // stream of OpenPGP packets
        private OpenPgpMessageInputStream nestedStream;
    
        public OpenPgpMessageInputStream(BCPGInputStream pgpIn) {
            this.pgpIn = pgpIn;
            switch(pgpIn.nextPacketTag()) {
                case LIT:
                    pda.next(LiteralData);
                    ...
                    break;
                case COMP:
                    pda.next(CompressedData);
                    nestedStream = new OpenPgpMessageInputStream(decompress());
                    ...
                    break;
                case OPS:
                    pda.next(OnePassSignature);
                    ...
                    break;
                case SIG:
                    pda.next(Signature);
                    ...
                    break;
                case SEIPD:
                case SED:
                    pda.next(EncryptedData);
                    nestedStream = new OpenPgpMessageInputStream(decrypt());
                    ...
                    break;
                default:
                    // Unknown / irrelevant packet
                    throw new MalformedOpenPgpMessageException();
            }
        }
    
        boolean isValid() {
            return pda.isValid() &&
                   (nestedStream == null || nestedStream.isValid());
        }

        @Override
        public void close() {
            if (!isValid()) {
                throw new MalformedOpenPgpMessageException();
            }
            ...
        }
    }

    The key thing to take away here is that when we encounter a nesting packet (EncryptedData, CompressedData), we create a nested OpenPgpMessageInputStream on the decrypted / decompressed contents of this packet. Once we are ready to close the stream (because we reached the end), we not only check if our own PDA is in a valid state, but also whether the nestedStream (if there is one) is valid too.

    This code is of course only a rough sketch, and the actual implementation is far more complex in order to cover many possible edge cases. Yet, it should still give a good idea of how to use pushdown automata to verify packet sequences 🙂 Feel free to check out my real-world implementation here and here.

    Happy Hacking!


      blog.jabberhead.tk /2022/10/26/implementing-packet-sequence-validation-using-pushdown-automata/


      Erlang Solutions: Learning functional and concurrent programming concepts with Elixir

      news.movim.eu / PlanetJabber • 19 October, 2022 • 9 minutes

    If you are early in the process of learning Elixir or considering learning it in the future, you may have wondered a few things.  What is the experience like? How easy is it to pick up functional and concurrent programming concepts when coming from a background in languages which lack those features? Which aspects of the language are the most challenging for newcomers to learn?

    In this article, I will relate my experience as a new Elixir developer, working to implement the dice game Yatzy as my first significant project with the language.

    So far in my education and career, I have worked primarily with Java.

    This project was my first extensive exposure to concepts such as recursive functions, concurrent processes, supervision trees, and finite state machines, all of which will be covered in more depth throughout this article.

    The rules of Yatzy

    Yatzy is a variation of Yahtzee, with slight but notable differences to the rules and scoring. Players take turns rolling a set of five dice. They have the option to choose any number of their dice to re-roll up to two times each turn. After this, they must choose one of fifteen categories to score in. The “upper half” of the scorecard consists of six categories – “ones” through “sixes”. The score for each is simply the sum of all dice showing the specified number. The “lower half”, consisting of the remaining nine categories, has more specific requirements, such as “two pairs”, “three of a kind”, “full house”, etc.

    If a player’s total score in the upper half is equal to or greater than 63, they receive a 50-point bonus. The player with the highest total across the whole scorecard once all categories have been filled wins the game.

    Requirements of the project

    Given this ruleset, a functioning implementation of Yatzy would need to do the following:

    • Simulate dice rolls, including those where certain dice are kept for subsequent rolls
    • Calculate the score a roll would result in for each category
    • Save each player’s scorecard throughout the entire game
    • Determine the winner at the end of the game
    • Allow the players to take these actions via a simple UI.

    Due to my object-oriented background, my approach to this project in prior years would be to define classes to represent relevant concepts, such as the player, the scorecard, and the roll, and maintain the state via instances of these objects.

    Additionally, I would make use of iteration via loops to traverse data structures. Working with Elixir requires these problems to be tackled in different ways. The concepts are instead represented by processes that can be run concurrently, and data structures are traversed with recursive functions.

    Adapting to this different structure and way of thinking was the most challenging and rewarding part of this project.

    Score calculations and pattern matching

    My first step in writing the project was to implement functions for rolling a set of five dice and calculating the potential scores of those dice rolls in each available category. The dice roll itself was fairly simple, but makes use of a notable feature of Elixir that I had not previously encountered: setting a default argument for a function.

    In this instance, the roll function takes a single argument, ‘keep’, representing the dice from a previous roll that the player has chosen to keep.

    def roll(keep \\ []) do
      dice = 1..6
      number_of_dice = 5 - Enum.count(keep)
      func = fn -> Enum.random(dice) end
      roll = Stream.repeatedly(func) |> Enum.take(number_of_dice)
      keep ++ roll
    end
    

    Here ‘keep’ has a default value of an empty list that will be used if ‘roll’ is called with no arguments, as it would be for the first roll in any turn. If a list is passed to ‘roll’, the function will only generate enough new numbers to fill out the rest of the roll, and then combine this list with ‘keep’ for its final output. This allowed my code to be simpler, defining one function head that could be used in multiple different scenarios.
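
    A quick illustration of how this can be called (the dice values are made up for illustration):

    roll()        # first roll of a turn – five fresh dice, e.g. [3, 1, 6, 6, 2]
    roll([6, 6])  # keep the two sixes and re-roll the remaining three dice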

    The score calculations themselves were far more complex and required making use of Elixir’s pattern-matching capabilities.

    In this case, testing for a valid score in each category required accounting for every possible configuration the dice could appear in when passed into the function. I was able to greatly reduce the number of cases by ensuring the dice were sorted descending when passed, but this still left a lot to account for. However, Elixir’s pattern matching makes this process easier than it would be otherwise: the cases can be handled entirely in the function heads, and each function can be written in a single line:

    def two_pairs([x, x, y, y, _]) when x != y, do: x * 2 + y * 2
    def two_pairs([x, x, _, y, y]) when x != y, do: x * 2 + y * 2
    def two_pairs([_, x, x, y, y]) when x != y, do: x * 2 + y * 2
    def two_pairs(_roll), do: 0
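
    As a rough usage example (dice values are my own, and must already be sorted in descending order):

    two_pairs([5, 5, 3, 3, 1])  # => 16 – two distinct pairs
    two_pairs([6, 6, 6, 2, 1])  # => 0  – a triple does not count as two pairs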
    

    Processes and GenServers

    The next step of building the game was to implement processes, starting with those for the player and the scorecard. Processes in Elixir are vital for maintaining state and allowing concurrency – as many of them can be run simultaneously. I was able to set up a process for each player in the game, one for the scorecard belonging to each of those players, as well as one more to handle the score calculations.

    As processes are dissimilar to the object-oriented model, they were the aspect of Elixir that took me the longest time to adjust to. I became comfortable with them by first learning how to work with raw processes, in order to better understand the theory behind them. After this, I converted these processes into GenServers, which contain improved functionality and handle most of the client/server interactions automatically.

    The supervision tree

    Another benefit of GenServers over raw processes is that they can be used as part of a supervision tree. In Elixir, a supervisor is a process that monitors other processes and restarts them if they crash. A supervision tree is a branching structure consisting of multiple supervisors and their child processes. In my Yatzy application, the supervision tree consists of a head supervisor with the scoring process as a child, along with another child supervisor for each player in the game. Each of these player supervisors has two children: a player and a scorecard.

    Due to supervisors being syntactically similar to GenServers, the majority of this step of the process was simple, as I had already learned how to implement the relevant API and callback functions. However, one mistake that took some time to notice was accidentally using GenServer.start_link instead of Supervisor.start_link in the API for the player supervisor. This problem was particularly hard to diagnose as it resulted in no compile or runtime errors in the application, but did result in the supervisor's child processes not starting and the game not functioning.
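
    To illustrate the shape of such a player supervisor, here is a minimal sketch (module and process names are illustrative guesses, not the author's actual code):

    defmodule Yatzy.PlayerSupervisor do
      use Supervisor

      def start_link(player_name) do
        # Accidentally calling GenServer.start_link/3 here compiles and runs,
        # but silently leaves the children below unstarted.
        Supervisor.start_link(__MODULE__, player_name,
          name: :"#{player_name}_supervisor")
      end

      @impl true
      def init(player_name) do
        children = [
          {Yatzy.Player, player_name},    # the player process
          {Yatzy.Scorecard, player_name}  # that player's scorecard
        ]

        Supervisor.init(children, strategy: :one_for_one)
      end
    end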

    Finite state machine

    After setting up the supervision tree, I still needed to define one more process to handle the functions for running through a single player’s turn. This process was implemented as another child of the head supervisor. As this process needed to handle multiple different states representing different stages of the turn, I constructed it as a finite state machine using the GenStateMachine module.

    In this case, I defined four states, representing how many rolls are remaining in the turn: three, two, one, and none. It contains functions handling calls that represent a roll of the dice, which will set the machine to its next state, and functions that will reset it to its initial state for the end of the turn, including if the player decides not to use all their rolls.

    Below is an example of one of the calls, representing a player making their second roll in a turn.

    def handle_event({:call, from}, {:roll, keep}, :two_rolls, data) do
      data = data ++ keep
      {:next_state, :one_roll, data, [{:reply, from, data}]}
    end
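
    A sketch of how such a machine might be started and called (the module name, initial state, and data shape are illustrative, not taken from the original code):

    # Start the turn machine with three rolls remaining and no dice kept yet:
    {:ok, turn} = GenStateMachine.start_link(Yatzy.Turn, {:three_rolls, []})

    # Assuming similar clauses exist for the other states:
    GenStateMachine.call(turn, {:roll, []})      # :three_rolls -> :two_rolls
    GenStateMachine.call(turn, {:roll, [6, 6]})  # :two_rolls -> :one_roll (the clause above)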
    

    Compared to learning how to work with GenServers and Supervisors, this functionality was actually rather simple to pick up. I had never worked with finite state machines in other languages, but the examples of GenStateMachine in the Elixir documentation were easy to understand and contained all the information I needed in order to implement this process.

    User interface and recursion

    Once the required processes were in place in a supervision tree, it was time to implement a simple text-based interface allowing a full game of Yatzy to be played all the way through.

    This would require each player in turn to receive the results of a dice roll, be prompted to choose which, if any, dice to keep for their subsequent rolls, and then be prompted again to choose which category to score in for that turn. It should loop through the players in this way until the game is complete, at which point it should declare the winner and prompt the user to reset the scorecards and play again.

    Implementing the interface was the most complex and time-consuming part of the project. It required a significant amount of trial and error and research through the Elixir docs in order to get something functioning. However, one aspect that was easier than expected was working with recursive functions. I had rarely used recursion while working in Java due to the language’s focus on iterative loops, and as such never became fully comfortable with the technique. Implementing the interface required me to use recursion in several different places, and I was surprised at how easy it was to pick up in this language, with the pattern matching on function parameters making it simple to account for the end of the loop. The following is one of the recursive functions I implemented, which maps the results of a dice roll to the letters a, b, c, d, and e, allowing the player to pick which of the five they want to keep in the text-based interface.

    # Base clause (assumed, not shown in the post): stops the recursion once all dice are processed
    def map_dice([], _indexes), do: []
    def map_dice([head | tail], indexes) do
      index = String.to_atom(head)
      key_in_indexes = Map.has_key?(indexes, index)
      case index do
        index when key_in_indexes ->
          value = Map.get(indexes, index)
          [value | map_dice(tail, indexes)]
        _index ->
          [map_dice(tail, indexes)]
      end
    end
    

    Future

    Although my Yatzy implementation is currently functioning correctly, I plan to extend the project further in the future. In the current version, only three players are supported, with their names hard-coded into the program. I would like future versions to have a dynamic number of players, along with the ability for the players to specify their own usernames.

    Additionally, I am also planning to learn the basics of Phoenix LiveView in the near future. Once I have done this, I would like to write a frontend for the program, allowing the players to interact with a more readable, visually appealing graphical interface, rather than the current text-based version.

    Conclusion

    Overall, I would describe my experience with the project as positive and feel that it served as a good introduction to Elixir.

    I was able to learn many of the basic features of the language naturally in order to fulfill the requirements of the game, and adjusted my ways of thinking about programming to better suit working with functional and concurrent programs. As a result, I feel like I have a good understanding of the basics of Elixir, and I am more confident about my ability to carry out other work with the language in the future.

    The post Learning functional and concurrent programming concepts with Elixir appeared first on Erlang Solutions .


      www.erlang-solutions.com /blog/learning-functional-and-concurrent-programming-concepts-with-elixir/


      Erlang Solutions: Everything you need to know about Phoenix Framework 1.7

      news.movim.eu / PlanetJabber • 13 October, 2022 • 7 minutes

    It is an exciting time for the Elixir community. As you may have seen at ElixirConf or ElixirConf EU, we are celebrating the 10th anniversary of Elixir . Despite now being 10 years old, there is no slowdown in the number of exciting new features, frameworks, and improvements being made to the language.

    One of the most exciting developments for Elixir is undoubtedly Phoenix. It is a project that is growing in both features and use cases at an incredible pace. Phoenix 1.5 included some huge changes including the addition of LiveView to the framework, the creation of LiveDashboard, and the new version of PubSub (2.0).

    Next, Phoenix 1.6 introduced even more exciting features, most notably the HEEx engine, the authentication and mailer generators, better integration with LiveView, and the removal of Node and webpack, which were replaced with the simpler esbuild tool.

    For many of us, each new Phoenix framework release brings back the feeling of being a kid on Christmas: we wait with eager anticipation for Chris McCord to announce the new toys we have to play with for the upcoming year. But with these new toys also comes a challenge for those who want to keep their skills and their systems up to date: the migration nightmare. We will revisit that at the end of this post.

    Roadmap

    Since Phoenix 1.5 there has been a noticeable trend towards LiveView: as we progress, LiveView can replace more and more JavaScript code, allowing the Elixir developer to get better control of the HTML generation. In the latest release, this trend continues with the following new features:

    • Verified Routes. This gives us the ability to define paths using a sigil that is checked at compile time against the defined routes.
    • Tailwind. In addition to answering our prayers concerning JavaScript and HTML, this new version also helps manage CSS.
    • Component-based generators. These features offer us a new and better way to write components.
    • Authentication generation code using LiveView. This lets us generate the authentication code using LiveView instead of the normal controllers, views, and templates.

    We will go deeper into each of these features, but you can already see a trend, right? We are moving more and more to LiveView while at the same time removing the need to manage things like HTML, JavaScript, and CSS by hand.

    First, let’s look at LiveView specifically. For release 0.18, Chris McCord announced these improvements:

    • Declarative assigns/slots – which let us define information about attributes and slots which are included inside of the components.
    • HTML Formatter – which performs the format (mix format) for HEEx code even if it’s included inside of the sigil ~H.
    • Accessibility building blocks.

    Now let’s look at each of these elements in deeper detail.

    Verified Routes

    The story is that Jason Stiebs (from the Phoenix team) has been requesting a better, less verbose way to use routes for the last 8 years. The 12th time he requested it, Chris McCord agreed to the feedback and José Valim had a fantastic way to make that happen.

    The basic idea is that if we have this:

    This is generating the route which we could use in this way:

    This is very verbose, but it could be even worse if we have a definition of the routes nested like this one:

    And it is just as verbose when we use LiveView:

    To avoid this, Verified Routes provide us a shortcut using the path:
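
    The post shows these snippets as screenshots; roughly, the contrast looks like this (the route, helper name, and path below are illustrative, not taken from the post):

    # Route helper style – verbose, especially for nested routes:
    Routes.user_confirmation_path(conn, :edit, token)

    # Verified Routes – the ~p sigil checks the path against the router at compile time:
    ~p"/users/confirm/#{token}"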

    As you can see, using the sigil “~p” we can define the path where we want to go and it’s completely equivalent to using the previous Routes helper function.

    The main advantage of this feature is that it allows us to write the path concisely and still check whether that route is valid, in the same way the route helper functions do.

    Tailwind

    To understand this change let’s look at what Adam Wathan (creator of Tailwind) said about CSS and the use of CSS:

    The use of CSS in a traditional way, that is, using “semantic class names”, is hard to maintain, and that’s why he created Tailwind. Tailwind is based on specifying how the element should be shown. There can be different elements that are semantically the same, for example, two “Accept” buttons where we want one to appear big and the other a bit narrower. Under the traditional paradigm, we’d be forced to use an “accept-button” class in addition to modifier classes for each case, which cannot then be reused.

    The other approach is to implement small modifications to how we present the buttons. In this way, we can define a lot in HTML and get rid of the CSS.

    The main idea, as I said previously, is to replace as much CSS as possible in the same way as LiveView replaced a lot of JavaScript:

    For example, using Tailwind with HTML and getting rid of CSS, we could build a button like this one with the code shown in the image below:

    It could be argued that it’s complex, but it’s indeed perfect from the point of view of LiveView and components because these classes can be encapsulated inside of the component and we can use it in this way:

    And finally, in the template:
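
    Those three snippets are screenshots in the original post; as a hedged sketch of the idea (the component name and class list are mine, and use Phoenix.Component is assumed), the pattern looks something like:

    # Tailwind utility classes encapsulated inside a function component:
    def accept_button(assigns) do
      ~H"""
      <button class="rounded-lg bg-green-600 px-4 py-2 font-semibold text-white hover:bg-green-700">
        <%= render_slot(@inner_block) %>
      </button>
      """
    end

    and in the template the classes disappear behind the component:

    <.accept_button>Accept</.accept_button>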

    Easy, right?

    Authentication generation code using LiveView

    A big thank you to Berenice Medel on the Phoenix team, who had the great idea of making the generation of the authentication templates work with LiveView.

    Declarative Assigns / Slots

    Before going into this section, Chris McCord gave a big thank you to Marius Saraiva and Connor Lay. They are the people in charge of all of the improvements regarding declarative assigns, slots, and HEEx.

    The idea behind slots and attrs is to provide us with a way to define attributes and sub-elements inside of a defined component. The example above defines a component named “table”. It defines the attributes “row_id” and “rest”; as you can see in the documentation, the attributes for the table are “rows”, “row_id”, and “class”. That means that besides “row_id”, “rest” will hold a map with all of the remaining attributes.

    As we said, the slot is a way to indicate we are going to use a sub-element “col” inside of the “table”. In the example, you can see two “col” elements inside of “table”. The “col” element defines only one attribute, “if”, which is a boolean.
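
    The example itself is a screenshot in the original post; a hedged sketch of what declaring these attrs and slots looks like (the names follow the description above, but the code is mine):

    attr :row_id, :any, default: nil
    attr :rest, :global                 # all remaining attributes end up in @rest

    slot :col do
      attr :if, :boolean                # each <:col> may carry a boolean "if"
    end

    def table(assigns) do
      ~H"""
      <table {@rest}>
        <%= for col <- @col do %>
          <%= render_slot(col) %>
        <% end %>
      </table>
      """
    end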

    HTML Formatter

    A big thank you to Felipe Renan, who worked on the implementation of this for HEEx to be included in Phoenix. Now “mix format” can fix the formatting of code written inside of templates, even inside of the ~H sigil.

    Accessibility building blocks

    Phoenix 1.7 includes some primitives for helping to create more accessible websites. One of them is “focus_wrap”:
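
    The post shows this as a screenshot; a rough, illustrative usage (the id and contents are mine) might be:

    <.focus_wrap id="modal-content">
      <button phx-click="save">Save</button>
      <button phx-click="cancel">Cancel</button>
    </.focus_wrap>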

    This helps define areas where you want focus to shift between multiple elements inside of a defined region instead of the whole website.

    This works in combination with functions in the JS module which manage focus like a stack: when you go into the modal, the focus area we use is pushed, and when the modal is closed, we pop that area off the stack and stay with the previous one.

    More improvements in the Roadmap

    One of the improvements for LiveView is Storybook. Storybook is a visual UI creator which lets us define the components we want to include in our websites and then generates the code to implement them. Christian Blavier did great work starting this in his repository, but he has now moved on and the Phoenix team is going to be moving it forward and evolving it.

    Streaming data for optimized handling of collection data is another priority on the roadmap. The work for this has already started; fingers crossed that it might be announced for the next release.

    During recent conferences, another speaker raised a concern about the messaging incompatibility between LiveView and LiveComponent; luckily, this is on the roadmap to be fixed shortly.

    And is that all?

    With all the developments in Phoenix, it would be easy to talk about at much greater length and in much greater detail. The pace of the Phoenix team’s progress is impressive and exciting.

    As it continues to grow it is easy to imagine a future where we only need to write HEEx code inside of Elixir to get full control of generated HTML, CSS, and JavaScript for the browser. It’s exciting to imagine and will be sure to further grow the use and adoption of Elixir as a full-stack technology.

    Ready to adopt Elixir? Or need help with your implementation? Contact us, or ask about our training options.

    The post Everything you need to know about Phoenix Framework 1.7 appeared first on Erlang Solutions .


      www.erlang-solutions.com /blog/what-you-need-to-know-phoenix-framework-1-7/


      Prosodical Thoughts: Mutation Testing in Prosody

      news.movim.eu / PlanetJabber • 13 October, 2022 • 7 minutes

    This is a post about a new automated testing technique we have recently adopted to help us during our daily development work on Prosody. It’s probably most interesting to developers, but anyone technically-inclined should be able to follow along!

    If you’re unfamiliar with our project, it’s an open-source real-time messaging server, built around the XMPP protocol. It’s used by many organizations and self-hosting hobbyists, and also powers applications such as Snikket , JMP.chat and Jitsi Meet .

    Like most software projects, we routinely use automated testing tools to ensure Prosody is behaving correctly, even as we continue to work daily on fixes and improvements throughout the project.

    We use unit tests, which test the individual modules that Prosody is built from, via the busted testing tool for Lua. We also developed scansion , an automated XMPP client, for our integration tests that ensure Prosody as a whole is functioning as expected at the XMPP level.

    Recently we’ve been experimenting with a new testing technique.

    Introducing ‘mutation testing’

    Mutation testing is a way to test the tests. It is an automated process that introduces intentional errors (known as “mutations”) into the source code, and then runs the tests after each possible mutation, to make sure they identify the error and fail.

    Example mutations are things like changing true to false , or + to - . If the program was originally correct, then these changes should make it incorrect and the tests should fail. However, if the tests were not extensive enough, they might not notice the change and continue to report that the code is working correctly. That’s when there is work to do!

    Mutation testing is similar and related to other testing methods such as fault injection , which intentionally introduce errors into an application at runtime to ensure it handles them correctly. Mutation testing is specifically about errors introduced by modifying the application source code in certain ways. For this reason it is applicable to any code written in a given language, and does not need to be aware of any application-specific APIs or the runtime environment.

    One end result of a full mutation testing analysis is a “mutation score”, which is simply the percentage of mutated versions of the program (“mutants”) that the test suite successfully identified as faulty. Along with coverage (which counts the percentage of lines successfully executed during a test run), the mutation score provides a way to measure the quality of a test suite.

    Code coverage is not enough

    Measuring coverage alone does not suffice to assess the quality of a test suite. Take this example function:

    function max(a, b, c)
    	if a > b or a > c then
    		return a
    	elseif b > a or b > c then
    		return b
    	elseif c > a or c > b then
    		return c
    	end
    end
    

    This (not necessarily correct) function returns the largest of three input values. The lazy (fictional!) developer who wrote it was asked to ensure 100% test coverage for this function; here is the set of tests they produced:

    assert(max(10, 0, 0) == 10) -- test case 1, a is greater
    assert(max(0, 10, 0) == 10) -- test case 2, b is greater
    assert(max(0, 0, 10) == 10) -- test case 3, c is greater
    

    Like most tests, it executes the function with various input values and ensures it returns the expected result. In this case, the developer moves the maximum value ‘10’ between the three input parameters and successfully exercises every line of the function, achieving 100% code coverage. Mission accomplished!

    But wait… is this really a comprehensive test suite? How can we judge how extensively the behaviour of this function is actually being tested?

    Mutation testing

    Running this function through a mutation testing tool will highlight behaviour that the developer forgot to test. So that’s exactly what I did.

    The tool generated 5 mutants, and the tests failed to catch 4 of them. This means the test suite only has a mutation score of 20%. This is a very low score, and despite the 100% line and branch coverage of the tests, we now have a strong indication that they are inadequate.

    To fix this, we next have to analyze the mutants that our tests considered acceptable. Here is mutant number one:

    function max(a, b, c)
    	if false and a > b or a > c then
    		return a
    	elseif b > a or b > c then
    		return b
    	elseif c > a or c > b then
    		return c
    	end
    end
    

    See what it did? It changed the first if a > b to if false and a > b , effectively ensuring the condition a > b will never be checked. A condition was entirely disabled, yet the tests continued to pass?! There are two possible reasons for this: either this condition is not really needed for the program to work correctly, or we just don’t have any tests verifying that this condition is doing its job.

    Which test case should have tested this path? Obviously ‘test case 1’:

    assert(max(10, 0, 0) == 10)
    

    a is the greatest input here, and indeed the test confirms that the function returns it correctly. But according to our mutation testing, this is happening even without the a > b check, and that seems wrong - we would only want to return a if it is also greater than b . So let’s add a test for the case where a is greater than c but not greater than b :

    assert(max(10, 15, 0) == 15)
    

    What a surprise, our new test fails:

    Failure → spec/max_spec.lua @ 4
    max produces the expected results
    spec/max_spec.lua:1: Expected objects to be equal.
    Passed in:
    (number) 10
    Expected:
    (number) 15
    

    With this new test case added, the mutant we looked at will no longer be passed, and we’ve successfully improved our mutation score.

    Mutation testing helped us discover that our tests were not complete, despite having 100% coverage, and helped us identify which test cases we had forgotten to write. We can now go and fix our code to make the new test case pass, resulting in better tests and more confidence in the correctness of our code.
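
    One possible corrected version (my sketch, not taken from the post) that satisfies both the old and the new test cases:

    function max(a, b, c)
    	if a >= b and a >= c then
    		return a
    	elseif b >= c then
    		return b
    	else
    		return c
    	end
    end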

    Mutation testing limitations

    As a new tool in our toolbox, mutation testing has already helped us improve lots of our unit tests in ways we didn’t previously know they were lacking, and we’re focusing especially on improving our tests that currently have a low mutation score. But before you get too excited, you should be aware that although it is an amazing tool to have, it is not entirely perfect.

    Probably the biggest problem with mutation testing, as anyone who tries it will soon discover, is what are called ‘equivalent mutants’. These are mutated versions of the source code that still behave correctly. Unfortunately, identifying whether mutants are equivalent to the original code often requires manual inspection by a developer.

    Equivalent mutants are common where there are performance optimizations in the code but the code still works correctly without them. There are other cases too, such as when code only deals with whether a number is positive or negative (the mutation tool might change -1 to -2 and expect the tests to fail). There are also APIs where modifying parameters will not change the result. A common example of this in Prosody’s code is Lua’s string.sub() , where indices outside the boundaries of the input string do not affect the result ( string.sub("test", 1, 4) and string.sub("test", 1, 5) are equivalent because the string is only 4 characters long).

    The implementation

    Although mutation testing is something I first read about many years ago and it immediately interested me, there were no mutation testing tools available for Lua source code at the time. As this is the language I spend most of my time in while working on Prosody, I’ve never been able to properly use the technique.

    However, for our new authorization API in Prosody, I’m currently adding more new code and tests than usual and the new code is security-related. I want to be sure that everything I add is covered well by the accompanying tests, and that sparked again my interest in mutation testing to support this effort.

    Still no tool was available for Lua, so I set aside a couple of hours to determine whether producing such a thing would be feasible. Luckily I didn’t need to start from scratch - there is already a mature project for parsing and modifying Lua source code called ltokenp written by Luiz Henrique de Figueiredo. On top of this I needed to write a small filter script to actually define the mutations, and a helper script for the testing tool we use ( busted ) to actually inject the mutated source code during test runs.

    Combining this all together, I wrote a simple shell script to wrap the process of generating the mutants, running the tests, and keeping score. The result is a single-file script that I’ve committed to the Prosody repository, and we will probably link it up to our CI in the future.

    It’s still very young, and there are many improvements that could be made, but it is already proving very useful to us. If there is sufficient interest, maybe it will graduate into its own project some day!

    If you’re interested in learning more about mutation testing, check out these resources:


      blog.prosody.im /mutation-testing-in-prosody/


      ProcessOne: Matrix protocol added to ejabberd

      news.movim.eu / PlanetJabber • 13 October, 2022 • 2 minutes

    ejabberd is already the most versatile and scalable messaging server. In this post, we are giving a sneak peek at what is coming next.

    ejabberd just got a new ace up its sleeve – you can now use ejabberd to talk with other Matrix servers. Matrix is a protocol sometimes used for small corporate server messaging.

    Of course, you all know ejabberd supports the XMPP instant messaging protocol with hundreds of XMPP extensions; this is what it is famous for.

    The second major protocol in ejabberd is MQTT. ejabberd supports MQTT 5 with clustering, and is massively scalable. ejabberd can be used to implement Internet of Things projects using either XMPP or MQTT, and it also supports hybrid workflows, where you can mix humans and machines exchanging messages on the same platform.

    It also supports SIP: you can connect to ejabberd with a SIP client, so you can use a softphone directly with ejabberd for internal calls.

    So far, so good: ejabberd is leading both in terms of performance and the number of messaging protocols it supports.

    We always keep an eye on new messaging protocols. Recently, the Matrix protocol emerged as a new way to implement messaging for small corporate servers.

    ejabberd adds support for Matrix protocol

    Of course, by design, the Matrix protocol cannot scale as well as the XMPP or MQTT protocols. At the heart of the Matrix protocol, you have a kind of merging algorithm that is a bit reminiscent of Google Wave. It means that a conversation is conceptually represented as a sort of document that you constantly merge on the server. This is a resource-consuming process that happens on the server for each message received in all conversations. That’s why Matrix has the reputation of being so difficult to scale.

    Even if it is not as scalable as XMPP, we believe that we can make Matrix much more scalable than what it is now. That’s what we are doing right now.

    As a first step, we have been working on implementing a large subset of the Matrix protocol as a bridge in ejabberd.

    It means that an ejabberd server will be able to act as a Matrix server in the Matrix ecosystem. XMPP users will be able to exchange messages with Matrix users, transparently.

    To do that, we implemented the Matrix protocol for conversations and the server-to-server protocol to allow interop between XMPP and Matrix protocol.

    This feature is coming first for our customers, in the coming weeks, whether they are using ejabberd Business Edition internally or the Fluux ejabberd SaaS platform. It will come to ejabberd Community Edition later.

    Interested? Let’s talk! Contact us .

    The post Matrix protocol added to ejabberd first appeared on ProcessOne .

      www.process-one.net /blog/matrix-protocol-added-to-ejabberd/


      Profanity: Profanity 0.13.1

      news.movim.eu / PlanetJabber • 12 October, 2022

    One month ago we released Profanity 0.13.0 and yesterday the minor release 0.13.1.

    18 people contributed code to this release: @binex-dsk, @cockroach, @DebXWoody, @MarcoPolo-PasTonMolo, @mdosch, @nandesu-utils, @netboy3, @paulfertser, @sjaeckel, @Zash, @omar-polo, @wahjava, @vinegret, @sgn, Max Wuttke, @tran-h-trung, @techmetx11 and @jubalh. Also a big thanks to our sponsors: @mdosch, @wstrm, @LeSpocky, @jamesponddotco and one anonymous person.

    We would also like to thank our testers, packagers and users.

    The release has already landed in several major distributions.

    For a list of changes please see the 0.13.0 and 0.13.1 release notes.


      JMP: SMS Account Verification

      news.movim.eu / PlanetJabber • 11 October, 2022 • 4 minutes

    Some apps and services (but not JMP!) require an SMS verification code in order to create a new account.  (Note that this is different from using SMS for authentication, which is a bad idea since SMS can be easily intercepted, are not encrypted in transit, and are vulnerable to simple swap scams, etc., but which has different incentives and issues.)  Why do they do this, and how can it affect you as a user?

    Tarpit

    In the fight against service abuse and SPAM, there are no sure-fire one-size-fits-all solutions.  Often preventing abusive accounts and spammers entirely is not possible, so targets turn to other strategies, such as tarpits .  This is anything that slows down the abusive activity, thus resulting in less of it.  This is the best way to think about most account-creation verification measures.  Receiving an SMS to a unique phone number is something that is not hard for most customers creating an account.  Even a customer who does not wish to give out their phone number or does not have a phone number can (in many countries, with enough money) get a new cell phone and cell phone number fairly quickly and use that to create the account.

    If a customer is expected to be able to pass this check easily, and an abuser is indistinguishable from a customer, then how can any SMS verification possibly help prevent abuse?  Well, if the abuser needs to create only one account, it cannot.  However, in many cases an abuser is trying to create tens of thousands of accounts.  Now imagine trying to buy ten thousand new cell phones at your local store every day.  It is not going to be easy.

    “VoIP Numbers”

    Now, JMP can easily get ten thousand new SMS-enabled numbers in a day.  So can almost any other carrier or reseller.  If there is no physical device that needs to be handed over (such as with VoIP , eSIM , and similar services), the natural tarpit is gone and all that is left is the prices and policies of the provider.  JMP has many times received requests to help with getting “10,000 numbers, only need them for one day”.  Of course, we do not serve such customers.  JMP is not here to facilitate abuse, but to help create a gateway to the phone network for human beings whose contacts are still only found there.  That doesn’t mean there are no resellers who will work with such a customer, however.

    So now the targets are in a pickle if they want to keep using this strategy.  If the abuser can get ten thousand SMS-enabled numbers a day, and if it doesn’t cost too much, then it won’t work as a tarpit at all!  So many of them have chosen a sort of scorched-earth policy.  They buy and create heuristics to guess if a phone number was “too easy” to get, blocking entire resellers, entire carriers, entire countries.  These rules change daily, are different for every target, and can be quite unpredictable.  This may help when it comes to foiling the abusers, but is bad if you are a customer who just wants to create an account.  Some targets, especially “big” ones, have made the decision to lose some customers (or make their lives much more difficult) in order to slow the abusers down.

    De-anonymization

    Many apps and services also make money by selling your viewing time to advertisers (e.g. ads interspersed in a social media feed, as pre-/mid-roll in a video, etc.) based on your demographics and behaviour.  To do this, they need to know who you are and what your habits are so they can target the ads you see for the advertisers’ benefit.  As a result, they have an incentive to associate your activity with just one identity, and to make it difficult for you to separate your behaviour in ways that reduce their ability to get a complete picture of who you are.  Some companies might choose to use SMS verification as one of the ways they try to ensure a given person can’t get more than one account, or for associating the account (via the provided phone number) with information they can acquire from other sources, such as where you are at any given time .

    Can I make a new account with JMP numbers?

    The honest answer is, we cannot say.  While JMP would never work with abusers, and has pricing and incentives set up to cater to long-term users rather than those looking for something “disposable”, communicating that to every app and service out there is a big job.  Many of our customers try to help us with this job by contacting the services they are also customers of; after all, a company is more likely to listen to their own customers than a cold-call from some other company. The Soprani.ca project has a wiki page where users keep track of what has worked for them, and what hasn’t, so everyone can remain informed of the current state (since a service may work today, but not tomorrow, then work again next week, it is important to track success over time).

    Part of why we can’t say whether you can make a new account with JMP numbers is because the reasons some companies choose to use SMS verification are opaque, so we may not know all of their criteria for sure.

    Many customers use JMP as their only phone number, often ported in from their previous carrier and already associated with many online accounts.  This often works very well, but everyone’s needs are different.  Especially those creating new personas which start with a JMP number find that creating new accounts at some services for the persona can be frustrating to impossible.  It is an active area of work for us and all other small, easy-access phone network resellers.


      blog.jmp.chat /b/2022-sms-account-verification


      Gajim: Gajim 1.5.2

      news.movim.eu / PlanetJabber • 8 October, 2022 • 1 minute

    Gajim 1.5.2 brings another performance boost, better emojis, improvements for group chat moderators, and many bug fixes. Thank you for all your contributions!

    What’s New

    Generating performance profiles for Gajim revealed some bottlenecks in Gajim’s code. After fixing these, switching chats should now feel snappier than before.

    Did you know that you can use shortcodes for typing emojis? Typing :+1 for example will open a popup with emojis to choose from. Now there are even more shortcodes waiting for you. Also, emojis combined by ZWJ sequences are now rendered correctly if they are used for workspace icons.

    This release comes with an improvement for admins and moderators of group chats as well: You can now right-click the profile picture of any participant to reach moderation tools.


    There has been a bug where messages sent by you would not show up under certain circumstances. Various users chimed in to help fix this bug by reporting under which circumstances it would appear. Thanks for your help!

    More Changes

    New

    • Support for HEIF image previews
    • Server info dialog now shows TLS version and cipher used for your connection
    • For developers: Gajim now offers a content viewer for PEP nodes

    Changes

    • Hotkey for re-opening closed chats: Ctrl+Shift+W
    • Gajim now highlights areas where you can drop chats or workspaces

    Fixes

    • Nickname suggestions in group chats should yield better results now
    • Text in right-to-left languages is now aligned correctly
    • Notifications for group chat messages have been fixed

    Over 20 issues have been fixed in this release.

    Have a look at the changelog for the complete list.


    As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab .


      gajim.org /post/2022-10-08-gajim-1.5.2-released/


      Erlang Solutions: Pair Programming

      news.movim.eu / PlanetJabber • 6 October, 2022 • 5 minutes

    As a junior software developer, finding the right tools and techniques to help you learn a new language or technology can make a huge difference. While spending the last few months learning Erlang and Elixir, one of the techniques that I have found really helpful is pair programming.

    I will be breaking down the concept of pair programming and my experiences with it so far, including the benefits and different ways of utilizing this programming style.

    What is pair programming?

    Pair programming refers to the technique of writing code alongside other developers. Most typically this involves two developers working together on the same computer.

    Occasionally, there are more, which is known as “mob” or “ensemble” programming.

    Traditionally with pair programming, only one developer can have control of the keyboard at a time, and is therefore responsible for writing the code. This person is sometimes referred to as the ‘driver’. The other developer reviews the code as it’s written and advises the driver on the overall direction. This is often referred to as the ‘navigator’.

    It’s worth noting that pair programming can also be performed using online collaboration tools, rather than in person. There is still a driver and a navigator, but once the developers have agreed upon a direction, the driver and navigator can work concurrently in a remote setting on different portions of the code.

    There are various tools that can be leveraged for remote pair programming, and later on I’ll be discussing my personal preference, an extension for Visual Studio Code called Live Share.

    Benefits of pair programming

    There are a variety of benefits to pair programming as a junior developer.

    One of the key uses so far has been for teaching. For me, the best approach to this has been taking the driver role, while a senior developer supervises my coding in the navigator role. This ensures that I have full understanding of what I’m doing, and allows the senior to chip in to suggest alternative approaches, or demonstrate best practices. It also utilises the senior developer’s experience most effectively, with them guiding the overall direction and structure of the code, rather than focusing on the details.

    The opposite arrangement, where the senior developer codes while a junior watches, can be even more efficient and result in quicker code generation.

    However, the disadvantage is that it can hamper participation from the junior developer. A less experienced observer is more likely to watch passively rather than contribute to the development, and will often defer to the more experienced developer’s judgement. They can also be afraid to ask simple questions, resulting in less effective learning. If you are going to use this system, it is important that the senior developer slows down from their normal pace significantly, and takes the time to check the junior’s understanding.

    In general, I still prefer the former arrangement, as being in the driver role has been better for my understanding of the code.

    The other style of pair programming that I have found beneficial is working alongside a fellow junior developer. This allows us to learn from one another – whether it’s useful shortcuts in the editor, helpful tips for the language, or just something you wouldn’t have thought of by yourself. In this style, we tend to swap roles regularly, allowing both developers to take on both the driver and navigator roles. This is really beneficial as it allows both of our perspectives to be utilised in the software structure, as well as allowing us to play to our individual strengths when coding.

    When working alone, you have to fulfil both roles, whereas pair programming allows you to focus solely on one aspect of the code, resulting in solutions that you may not have thought of otherwise. The ability to discuss your code with someone else who understands it also allows for quick and easy debugging (likely in part due to the ‘rubber ducking’ effect). And of course, pair programming can increase confidence in your code overall, because there’s another set of eyes to spot mistakes.

    Remote pair programming with VS Code and Live Share

    My editor of choice, Visual Studio Code, has the option to install the Live Share extension which I highly recommend for pair programming remotely. It is a very intuitive tool that allows you to see what your programming partner is doing in real-time.

    When using Live Share, another developer is able to create files and edit the current project’s code from a different machine. You are able to see other users’ cursors in the code. You can also choose to ‘follow’ a particular user, which will track their cursor across all files in the project. This allows you to observe them wherever they are making changes. It is particularly helpful for more learning-based pair programming.

    You also have the option to grant read/write permissions for the terminal, allowing any participant to run the code or tests or perform any other commands in the current directory.

    To use Live Share, here’s what you’ll need to do:

    • You will need to find it in the VS Code extension marketplace, install the extension, and ensure you are signed in to VS Code with either your GitHub or Microsoft account.
    • Once the extension is installed, you can initiate a Collaboration Session, which will copy the sharing link to your clipboard automatically. Here you have the option to restrict the session to read-only, if you want to prevent edits from other users.
    • To grant access to another developer, simply share the link with them, which will allow access to the current project via either the browser or desktop versions of VS Code, for as long as the Collaboration Session is active.
    • Once the Collaboration Session is ended by the host, all access is revoked, and another session will need to be initiated to allow further access.

    Important tips for using VS Code Live Share:

    • Remember to terminate the Collaboration Session when you are finished pair programming. As long as the session is active, anyone with the link will be able to view and/or make changes to the code.
    • It’s important to be able to communicate quickly and effectively, so you will need to make use of a web-conferencing tool, such as Google Meet or Zoom, for verbal communication.
    • Alternatively, you can download the Live Share Audio extension which can facilitate audio calls within VS Code.

    Final thoughts

    Overall, I have found pair programming to be a valuable strategy and something I plan to utilise wherever possible in the future. Whether it’s learning from a senior developer or collaborating with a fellow junior, pair programming can be really beneficial and keep your mind open to different perspectives and ways of working.

    If you haven’t tried pair programming before, or perhaps it’s been a while, I highly recommend giving it a go. After all, programming can sometimes be a solitary activity, so working this closely with others could be a welcome change of pace and will surely foster some new ideas!

    The post Pair Programming appeared first on Erlang Solutions .