

      Erlang Solutions: Type-checking Erlang and Elixir

      news.movim.eu / PlanetJabber · Thursday, 5 October, 2023 - 11:30 · 18 minutes

    The BEAM community couldn’t be more varied when it comes to opinions about static type systems. For some, they’re the feature of other functional languages we miss the most. Others shun them and choose our ecosystem precisely because it doesn’t impose the perceived overhead of types, not despite that fact. Still others worry whether static types can be successfully applied on the Erlang virtual machine at all.

    Over the years, there’s been some academic research into type checking Erlang. Even WhatsApp worked for a few years on inventing a new statically typed Erlang dialect. There were also numerous community attempts at building ML-inspired statically typed languages. However, for quite a while the only project that reached wider adoption was Dialyzer.

    Until now! The lineup of Code BEAM 2023 sports five different talks about static type systems. Is the community fed up with its long-time “let it crash” adage? Or is this many talks too much of a good thing? Let’s have a closer look at the current state of type checking on the BEAM.

    Erlang, Elixir, Gleam, PureScript

    Before we begin, though, let’s get one thing out of the way. There are a few relatively new programming languages on the BEAM platform that offer static type checking from the get-go. For the sake of completeness, I’ll list them here, but as of writing they still seem to be niche – though they certainly don’t lack potential! Here comes the list:

    • Gleam – probably the best known and most actively developed one, Gleam targets both JS (browsers) and Erlang backends. The compiler is written in Rust, the language has a lively community, and, in general, it seems to have the biggest potential!
    • PureScript – a hidden gem, known either as a Haskell dialect targeting JS and the browser, or as a more complex sibling of Elm, PureScript has a complete Erlang backend! It allows writing code with the expressiveness of Haskell, yet leveraging the legendary robustness of the BEAM.
    • Hamler – PureScript’s younger brother with some syntax modifications more closely resembling Erlang. Based on the GitHub repository, though, the development seems to have stopped some 2-3 years ago.
    • Caramel – an OCaml-inspired language compiling to Erlang. It aimed to bring OCaml’s strictness to the BEAM. Sadly, the project is discontinued with Leandro Ostera, the author, pointing at Gleam as the spiritual successor.
    • Alpaca – another attempt at bridging the gap between ML and BEAM. The project, though, was discontinued in 2019.

    With that out of the way, let’s focus on tools we can use in Erlang or Elixir. But how can we even compare programming languages and their type systems?

    Type system design 101

    I don’t want to quote too much academic material here, but I especially like the short introduction to type systems presented in “Bidirectional Typing” by Jana Dunfield and Neel Krishnaswami. If you’re already familiar with the theory, feel free to skip this section.

    > Type systems serve many purposes. They allow programming languages to reject nonsensical programs. They allow programmers to express their intent, and to use a type checker to verify that their programs are consistent with that intent. Type systems can […] even [be used] to guide program synthesis.

    On top of that, I’d add that type systems can make it significantly easier to build IDEs that provide good language support, code completion, and focused and detailed error messages, increasing development velocity.

    Dynamic or static? It doesn’t matter!

    Every programming language has some system of preventing errors. However, not all these systems are the same. A crucial property a programming language should help enforce is “program safety” (it’s certainly not the only one). It’s about whether I as a programmer can do something completely stupid, like divide by zero. Can I use a variable that was never instantiated? Can I store a number, yet later try to use it as text? Ideally programming languages should not allow any of that.

    One major distinction between such error-prevention systems is when they kick in to prevent the nonsensical operation. This might happen when the program is already running, just before the operation happens at runtime. We call programming languages using such systems “dynamically checked”, or just “dynamic”. They might also do it as soon as we run the compiler, way before a program is even deployed, not to mention being started. These are called “statically checked”, “statically typed”, or sometimes just “static”. Such an approach is called static analysis. Strictly speaking, only those latter – statically checked – programming languages use formal type systems.
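    To make the distinction concrete, here is a small Erlang sketch of our own (the module and function names are made up for illustration). The mistake below compiles without complaint and only surfaces when the offending line actually runs, which is exactly the kind of program a statically checked language would reject at compile time:

    -module(dynamic_demo).
    -export([add_label/1]).

    %% Erlang is dynamically checked: this clause compiles, and the error only
    %% shows up at runtime as a badarith exception when add_label/1 is called.
    add_label(Count) ->
        Count + "items".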

    Statically typed languages have historically been more verbose and less expressive than dynamically typed ones. Dynamically typed languages have usually been easier to learn and quicker to deliver working code when starting from scratch, but also easier to shoot oneself in the foot with. They have been on the rise since the 2000s, when programming for the web skyrocketed. However, with the size of computer systems growing, it’s becoming evident that building buggy systems fast is not the way to go. The question is: how can we make programming languages more expressive and more correct at the same time? It’s not dynamic vs static, it’s expressiveness vs correctness.

    Solving the equation

    In basic algebra classes we learn how to solve linear equations like:

    x + 3 = 5

    We know that with a single equation it’s possible to solve for only one variable. If x was also given, there would be nothing to solve – we could still check if the equation holds, though.

    A type system’s task is similar, but the equation it solves looks different:

    Γ ⊢ e : T

    • Γ is a typing context, that is the set of all our variables together with their types
    • e is our expression
    • T is its type

    If we know all three, we’re talking about “type checking”. It’s relatively easy, similar to checking if an equation holds.

    If we only know Γ and e, we’re talking about “type inference”. We’re trying to come up with a type for our expression e. Type inference is one of the techniques used to improve expressiveness and reduce the verbosity of statically typed languages. The type system has more work to do in this case, as it has to fill in the types that we, as programmers, didn’t have to provide. It saves us work at the cost of extra computation. However, that computation is hard, way harder than just type checking, and in some cases it’s simply impossible. In practice, type systems with inference are often undecidable unless the programmer provides at least some type annotations.

    If we know Γ and T and want to solve for e, we’re talking about program synthesis, also known as code generation. This can be useful, for example, for implementing data serialization, or in some cases just to save us some typing. Code generated from types usually is not complete, so it requires details to be filled in by a programmer.

    In practice, one of the common approaches to constructing type checkers is “bidirectional typing”, which means that when a checker traverses our program it alternates between “checking” and “inference” modes depending on the available type information. This approach makes inference possible in some situations where it otherwise wouldn’t be, yet still allows programmers to omit the vast majority of tedious-to-write type annotations.
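    To give a feel for what that alternation looks like, below is a deliberately tiny, made-up sketch (not how any of the checkers discussed later is actually implemented) of a bidirectional checker for a toy expression language, with the two modes calling into each other:

    -module(bidir_sketch).
    -export([check/3, infer/2]).

    %% Checking mode: the context, the expression and the expected type are all known.
    check(Env, {ann, Expr, Type}, Expected) ->
        case Type of
            Expected -> check(Env, Expr, Type);   %% explicit annotation: keep checking
            _        -> {error, {annotation_mismatch, Type, Expected}}
        end;
    check(Env, Expr, Expected) ->
        case infer(Env, Expr) of                  %% no annotation: switch to inference
            {ok, Expected} -> ok;
            {ok, Other}    -> {error, {expected, Expected, got, Other}};
            Error          -> Error
        end.

    %% Inference mode: only the context and the expression are known.
    infer(_Env, {int, _N}) ->
        {ok, integer};
    infer(Env, {var, Name}) ->
        case maps:find(Name, Env) of              %% look the variable up in Γ
            {ok, Type} -> {ok, Type};
            error      -> {error, {unbound_variable, Name}}
        end;
    infer(Env, {plus, A, B}) ->
        case {check(Env, A, integer), check(Env, B, integer)} of
            {ok, ok} -> {ok, integer};
            _        -> {error, {plus_expects_integers, A, B}}
        end.

    The toy language itself doesn’t matter; the point is the mode switch: wherever enough information is available the checker only verifies, and it falls back to inference only where a type is missing.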

    Tell me the truth

    We already know that type systems aren’t all the same. They aren’t perfect either. Firstly, “they can only guarantee that well-typed programs are free from certain kinds of misbehavior.” Secondly, static analysis tools can be divided into two categories: underapproximating and overapproximating. This generally means that an underapproximating checker will not detect some bugs that exist in a given code base. An ideal checker (which cannot exist) would detect all bugs and raise no false alarms. An overapproximating checker might raise warnings even for perfectly valid code. There’s no escaping that, it’s just a matter of which side of the fence we’re on. Type systems, in general, fall in the overapproximating category.

    Type checker survey

    At the 2022 edition of Lambda Days I presented a side by side comparison of a few type checkers for Erlang. The landscape has radically changed since then! How much? Let’s take a look.

    Dialyzer

    Dialyzer is quite likely the most widely used tool in this domain. Strictly speaking it’s not a type system, but a DIscrepancy AnaLYZer: a program consistency checker based on flow analysis. The theory underpinning it is described in detail in “Practical type inference based on success typings”. It should be highlighted that it’s meant for use on existing codebases, i.e. no source code modification is necessary to start using it. It’s one of the two tools I presented at Lambda Days 2022 that are also covered in this survey.

    It’s known for the “Dialyzer is never wrong” slogan, which sounds like snake oil, but is actually true. But what does it really mean? It boils down to the distinction between under- and overapproximating checkers – Dialyzer is of the former kind. This means it never returns false positives, i.e. never reports errors that actually aren’t there in the source code.

    The other side of the coin, though, is that it might return a series of somewhat confusing non-local errors, only one of which is the root cause of the entire report – however, that one is then a real problem that needs to be fixed. Due to its underapproximating nature, it might also fail to warn about potential pitfalls that do not occur 100% of the time.
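    A small, hand-written example of what this trade-off looks like in practice (the module is made up; the comments describe the general success-typing behaviour rather than verbatim tool output):

    -module(dialyzer_demo).
    -export([always_wrong/0, sometimes_wrong/1]).

    %% This call can never succeed, so Dialyzer reports it, and the report is
    %% guaranteed to point at a real problem (no false positives).
    always_wrong() ->
        atom_to_list(42).

    %% This clause only crashes for some inputs (anything that is not a list),
    %% so an underapproximating checker typically stays silent about it, even
    %% though a caller passing an integer would hit a runtime error.
    sometimes_wrong(Input) ->
        length(Input) + 1.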

    Dialyzer requires a clever approach when checking libraries meant for inclusion in other code. To deliver the best results, it requires client code to be analyzed together with the main code. When writing a library, this means we need to put library API callers, not just the library implementation, under analysis. Usually, including our library tests in the analysis does the trick.

    Traditionally, Dialyzer required generating a persistent lookup table (aka PLT) before running an analysis. It was quite a time-consuming step, with the analyses not being very fast either. However, since Erlang/OTP 26, the situation has greatly improved thanks to the “incremental” mode implemented by Tom Davies from WhatsApp.
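    For reference, the classic command-line workflow looks roughly like this; `myapp` is a placeholder, and the `--incremental` flag on the last line is the OTP 26 addition as I understand it, so double-check `dialyzer --help` on your installation:

    % dialyzer --build_plt --apps erts kernel stdlib
    % dialyzer -r _build/default/lib/myapp/ebin
    % rebar3 dialyzer
    % dialyzer --incremental --apps myapp

    The first command builds the PLT for the core applications (the traditionally slow step), the second analyses a project’s compiled BEAM files against it, the third does both through the Rebar3 integration, and the last one uses the new incremental mode.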

    A boatload of information on how to use Dialyzer effectively can be learnt from Jesper Eskilson’s “Slaying the Type Hydra” talk from Code BEAM 2022.

    Thomas Davies spoke that same year about “Incremental Dialyzer: How we made Dialyzer 7x Faster”. At this year’s Code BEAM edition Marc Sugiyama will present “How I grew to love Erlang Type Specs”.

    Because of the historically slow analyses, confusing errors, and limited guarantees, even quite prominent community figures have voiced skepticism about Dialyzer’s efficacy. On the other hand, it’s a battle tested tool with integrations built into Rebar3, Mix, Erlang and Elixir language servers, and therefore practically any editor or IDE. You cannot go wrong with Dialyzer when looking for the extra bit of confidence in your codebase.

    Gradualizer

    For quite a while Dialyzer was leaving part of the BEAM community unsatisfied. One of the attempts to address this was Josef Svenningsson’s Code BEAM 2018 talk “ A gradual type system ”, where he unveiled Gradualizer. A gradual type system is one which generally behaves like a static one, but leaves some checks to be done at runtime – the fewer checks left for runtime, the more guarantees about code correctness can be offered before running it. Probably the best known example of a language with a gradual type system is TypeScript.

    Gradualizer is the other one of the two type checkers I covered in the Lambda Days 2022 talk. It’s also the one yours truly has invested the most time into, so keep in mind I might be a bit biased! I’m trying to stay level-headed, though.

    Gradualizer’s theoretical underpinnings are based on type system literature such as B. Pierce’s classic “Types and Programming Languages” or J. Siek’s and W. Taha’s “Gradual Typing for Functional Languages”, as well as some inspiration from G. Castagna’s work on set-theoretic types (which we’ll discuss separately in a while). Sadly, there’s no accompanying whitepaper describing it in detail. The project also doesn’t have dedicated funding, which means it’s developed as a community effort with – for better or worse – all its implications.

    The pros, however, are that it’s relatively fast, especially in comparison with pre-incremental-mode Dialyzer. Since Josef’s initial announcement, thanks to the community effort, it’s gained some features like partial type inference, polymorphism support and extended its syntax coverage to almost all legal Erlang constructs, including records and maps. It’s got a Rebar3 plugin as well as a Mix task. Thanks to its Elixir spinoff, Gradient , it can be used to check both Erlang and Elixir codebases. I presented a very early version of Gradient at ElixirConf EU 2021 in Warsaw – it’s still experimental, though.

    One milestone Gradualizer reached in early 2023 was cleanly passing a self-check. This means the project, while experimental in nature, can be used in practice to test a non-trivial codebase of a significant size. A caveat to keep in mind, though, is that it requires writing in a certain style, e.g. by sometimes adding inline type assertions, so it’s slightly opinionated. Another takeaway from its ability to self typecheck is that it’s a reference on how using it impacts the coding style. Gradualizer generally follows the “no spec, no check” rule, meaning that only code annotated with function specs is checked. All in all, it means it might be a bit of an effort to use in existing codebases as is, but should be easy enough to apply when starting a new project from scratch.
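    A tiny, made-up example of what the “no spec, no check” rule means day to day (the comments describe the general idea rather than exact tool output):

    -module(gradual_demo).
    -export([checked/1, unchecked/1]).

    %% This function has a spec, so a "no spec, no check" tool verifies the body
    %% against it and flags the mismatch: the body returns a string, not an integer().
    -spec checked(integer()) -> integer().
    checked(N) ->
        integer_to_list(N).

    %% No spec here, so under the "no spec, no check" rule the same kind of
    %% mistake (adding a string to a number) goes unreported by such a tool.
    unchecked(N) ->
        integer_to_list(N) + 1.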

    Elixir Set Theoretic Types

    We’ve already mentioned “set-theoretic types” in one of the previous paragraphs. It’s a theory that’s been developed and extended by G. Castagna, a professor at CNRS – Université de Paris, and his team, for over 20 years. It’s used in CDuce, an expressive functional programming language purpose built for manipulating XML . Incidentally, its semantics match Erlang’s semantics, and therefore Elixir’s, exceptionally well.

    That’s the reason why José Valim, the creator of Elixir, has been cooperating with Giuseppe Castagna and Guillaume Duboc, a PhD student focusing on set-theoretic types for Elixir, at least since 2022, as outlined by José’s blog post, My Future with Elixir: set-theoretic types .

    As explained in Castagna’s “Programming with union, intersection, and negation types” , set-theoretic types give the programmer unparalleled freedom of expressing intent in type annotations. This is nicely captured in the Scala 3 Book chapter on Union Types – see the lengths to which one has to go if a language doesn’t have union types! Non-discriminated union types usually are not available in traditional statically typed languages like OCaml or Haskell. Scala 3 does have them, as does Erlang and Elixir, but set-theoretic types also offer the intersection and negation connectives, which together provide even more expressive power. Guillaume Duboc’s Elixir Prototype Showcase allows us to play with set-theoretic types in Elixir with an ad-hoc, prototype syntax. Please note the prototype is powered by the CDuce type checker, not the final Elixir one – that’s still a work in progress.
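    For readers less familiar with the terminology: union types are something Erlang and Elixir programmers already write in specs every day, and overloaded specs come close to what an intersection connective expresses; negation has no counterpart in today’s syntax at all. A rough sketch of our own in current Erlang type syntax:

    -module(settheory_sketch).
    -export([parse/1]).

    %% A union type: a result() is either an ok tuple or an error tuple.
    -type result() :: {ok, binary()} | {error, atom()}.

    %% Overloaded specs hint at intersection: parse/1 behaves as *both*
    %% (binary() -> result()) and (undefined -> {error, atom()}).
    -spec parse(binary())  -> result();
               (undefined) -> {error, atom()}.
    parse(undefined) -> {error, empty};
    parse(Bin)       -> {ok, Bin}.

    %% Negation cannot be written in today's spec syntax; a set-theoretic
    %% system could hypothetically express "any term except undefined",
    %% something in the spirit of: term() and not undefined.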

    However, this comes at a cost. Set-theoretic types are extremely expressive, but global type inference with set-theoretic connectives is undecidable (original result by Coppo and Dezani-Ciancaglini, 1980, here via Tommaso Petrucciani’s PhD thesis “Polymorphic set-theoretic types for functional languages”). This means that in the edge cases, a programming language with set-theoretic types requires the programmer to put in at least some type annotations. Local type inference should still allow for omitting most of them.

    This project, with José, the Elixir creator, Giuseppe, a researcher with over 20 years of experience in the field, and Guillaume, working on it full-time and writing a PhD thesis, has a huge potential! Listen to Guillaume and Giuseppe talking about “The Design Principles of the Elixir Type System” at this year’s Code BEAM in Berlin!

    However, it’s likely only going to target Elixir, with changes being gradually rolled out with new Elixir versions. Erlang is likely not going to benefit from it directly, though the research results will certainly be applicable. Another point worth mentioning is that the plan for Elixir is to abandon the traditional Erlang-inspired type and spec syntax to enable the full expressiveness of set-theoretic types. If this happens, it might mean some extra difficulties for projects trying to target or leverage both Elixir and Erlang at the same time (for example, using dependencies written in one language from a project in another). Nonetheless, the benefits seem to outweigh the costs, so let’s keep our fingers crossed for Elixir with set-theoretic types to copy (or surpass!) the success of TypeScript!

    Etylizer – Erlang Set Theoretic Types

    Since we’re on the topic of set-theoretic types and we’ve already mentioned that they fit Erlang just as well as Elixir, it makes sense to ask whether there’s any work targeting the former. Apparently, the answer is yes! “Set-theoretic Types for Erlang” by Albert Schimpf, Stefan Wehr, and Anette Bieniusa is a paper announcing the development of etylizer, a new set-theoretic type checker for Erlang. The paper itself is a summary of the work by Castagna and others that best applies to Erlang and Elixir, with extensions for Erlang-specific pattern matching that doesn’t exist in CDuce.

    Etylizer is a very new project with not much user documentation and still little coverage of the Erlang language syntax. For example, features like records or maps are not implemented yet, which makes it practically impossible to use on real-world code. This makes it look paler in comparison to Gradualizer or eqWAlizer. On the other hand, it’s based on a very strong theoretical underpinning, so there’s a great opportunity ahead if the enthusiasm doesn’t fade. It can be set up with ease, though currently it only has a command line interface. It’s rather fast. It accepts contributions, and being written completely in Erlang, it should be quite easy for the BEAM community to contribute to. Come listen to Anette and Albert talk about “etylizer: Set-theoretic Types for Erlang” at Code BEAM 2023!

    eqWAlizer

    Talking about ease of contribution for the BEAM community, eqWAlizer is an outlier – it’s the only tool in this comparison implemented in non-BEAM languages, namely Scala with some Rust here and there.


    With a dedicated team at WhatsApp backing it up, it’s currently the only comparably well-polished contender to the BEAM ecosystem’s baseline, that is, Dialyzer. And there’s no doubt that a strong contender it is!

    It’s fast. It’s easy to set up – it has a Rebar3 plugin, but it can be used without it on non-Rebar projects. It was announced at the ICFP Erlang Workshop keynote in late 2022 by Ilya Klyuchnikov, its main author. It seems to work just as well as Gradualizer on the latter’s test suite (more on that to come in another blog post). It’s slower when run on a single file, but still significantly faster than Dialyzer when run on an entire project. It’s a gradual type system supporting a dynamic() type. The treatment of that type depends on the mode – gradual or strict – with the gradual mode making it compatible with any type.

    On the cons side, it seems to be blind to some nuances that a set-theoretic type checker would catch. It’s also quite opinionated, not distinguishing between integers and floating point numbers, introducing concepts such as “shapes” and “dictionaries” to deal with maps, or not caring about the difference between proper and improper lists. Dialyzer and Gradualizer, in comparison, pay more attention to these aspects of the language. This might make eqWAlizer a bit hard to adopt in established codebases. Fortunately, eqWAlizer documents these design decisions well and thanks to the simplifications they bring, it seems to be a well polished and very handy tool in an Erlang programmer’s belt. Yes, just an Erlang programmer’s – there are no official plans to support Elixir.
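    To illustrate the kind of simplification meant here, consider the following specs in ordinary Erlang type syntax (a made-up example; consult the eqWAlizer documentation for its exact treatment of maps and numbers):

    -module(shape_sketch).
    -export([half/1]).
    -export_type([user/0, counters/0]).

    %% A map type with fixed, known keys is the kind of thing treated as a
    %% "shape", close to a record with named fields.
    -type user() :: #{name := binary(), age := integer()}.

    %% A map with uniform key and value types is closer to a "dictionary".
    -type counters() :: #{atom() => non_neg_integer()}.

    %% If integer() and float() are collapsed into a single number type, a
    %% checker working at that granularity can accept this body, whereas
    %% Dialyzer or Gradualizer can point out that N * 0.5 is a float, not an
    %% integer().
    -spec half(integer()) -> integer().
    half(N) ->
        N * 0.5.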

    Roberto Aloi, Michał Muskała , and Robin Morisset from WhatsApp, as well as Alan Zimmerman from Meta, will be speaking at Code BEAM this year about various tools they develop for the Erlang ecosystem. While none of the talks is going to be about eqWAlizer directly, there’s a chance they could share some of their impressions in the “hallway track”.

    That’s the end of our survey of Erlang and Elixir type checkers available on the BEAM platform. As you can see, some of the mentioned projects change almost as we speak and we can watch their development very closely – these are truly interesting times! It’s even better that we can speak to the authors thanks to events like Code BEAM or even contribute directly since all the mentioned projects are open-source.

    Does it mean the BEAM community has grown tired of “let it crash”? Probably not, as that’s a valid mitigation strategy for errors which cannot be avoided like network links dropping, storage failing, or hardware malfunctioning.


    However, there are errors like programming mistakes, logical flaws, design shortcomings, or just simple API misuse that can be avoided or mitigated ahead of time. Moreover, such errors usually don’t go away just because of a process restart and have to be fixed in code. The sooner we realise they’re there, the sooner we can react. It would be short-sighted not to leverage approaches like static analysis or type checking if they can bring improved developer productivity, foster type-driven development, encourage better design, enable easier and more aggressive refactoring, or just make the developer experience better.

    The post Type-checking Erlang and Elixir appeared first on Erlang Solutions .


      Erlang Solutions: Introducing Wardley Mapping to Your Business Strategy

      news.movim.eu / PlanetJabber · Thursday, 28 September, 2023 - 10:52 · 5 minutes

    Since its creation in 2005, Wardley Mapping has been embraced by UK government institutions and companies worldwide. This is thanks to its unique ability to factor both value and change into the strategising process. It’s a powerful, fascinating tool that far more organisations across the world should be implementing today to make key choices for their future growth.
    Ahead of my wider Wardley Mapping strategy discussion at GOTO Copenhagen this year, I have put this article together as an overview of Wardley Mapping for business strategy, including the fundamentals of creating a map, how to interpret it to form patterns, and why it works so effectively for corporate decision-making today.

    Wardley Mapping: An Abbreviated Overview

    In this article, I will go through the core aspects of how a Wardley Map is created so that you can begin to consider its use in your organisation.

    These maps can look complicated at first, particularly maps with more connections, but to get started you need to understand:

    • Purpose
    • Anchors (Users)
    • Needs and Capabilities
    • The Value Chain (Vertical Axis)
    • Evolution (Horizontal Axis)

    Purpose

    To create a successful Wardley Map, you must first have a clearly defined purpose. This may be to solve a particular problem in your company, like a lack of sales, or to evaluate all of your solution features as a SaaS.

    Your purpose will define the overall scope of your map, and what you’re trying to achieve by creating it.

    The Anchor (User)

    Wardley Mapping is not a mindmap or a graph; it’s a map because it has an anchor. Simon Wardley, the inventor of Wardley Mapping, explains the difference between graphs and maps succinctly in this video .

    An anchor is what your strategy revolves around. If you’re Wardley Mapping to assess your market, for example, your main anchor would be your customers. Another simple example below shows a Wardley Map for an HR recruitment strategy, with a job candidate as the anchor.

    [Figure: a Wardley Map for an HR recruitment strategy]

    Source: HR Value

    You can find more information on this map and all the others I reference throughout this article in my curated collection of Wardley Mapping examples from a diverse set of business domains.

    Needs and Capabilities

    The needs on a Wardley Map are simply what each of your anchors needs to have or achieve within what you’re mapping. In the example above, the candidate needs a job.

    Then, you have to map your current capabilities, which are all the things your anchor has to have to support their needs. Once you’ve identified needs and capabilities, you can start to connect them.

    This example for the Neeva search engine below shows a single, clear need for their search engine customers – find “something” – followed by a large number of capabilities, as well as offered features and what their competition is doing.

    [Figure: a Wardley Map of the Neeva search engine]

    Value Chain (Vertical Axis)

    As you start to create your map, you’ll naturally be creating a value chain based on how visible, and invisible, needs and capabilities are to your anchor.

    A good example of this is this review of the video games industry below, where streaming, social media and store locations are more visible to the Player and Influencer anchors than publishers or development platforms.

    [Figure: a Wardley Map of the video games industry]

    Evolution (Horizontal Axis)

    The ability to track evolution and change is what sets Wardley Mapping apart from other business strategising tools. When mapping needs and capabilities, you must map along the horizontal axis across four stages:

    1. Genesis – rare, poorly understood, uncertain.
    2. Custom-built – people are starting to understand and consume this capability.
    3. Product (+ Rental) – consumption is rapidly increasing.
    4. Commodity/Utility/Basic Service – this capability is normalised and widespread.

    You can better understand each of these four stages using this table, also found on this website and in Simon Wardley’s book on Wardley Mapping.

    The below map goes into great detail on the tech stack and capabilities of Airbnb and is a good example of how mapping across the value chain and evolution provides a unique visual only offered by Wardley Mapping.

    [Figure: a Wardley Map of Airbnb’s tech stack and capabilities]

    Strategising Through Wardley Mapping

    The above explains the fundamentals of designing a Wardley Map. There are also some helpful explainers on creating your first map on the Lean Wardley Mapping website .

    Once you know the basics of how to map, and you’ve practised enough to be confident in your results, you can implement more complicated strategies to use your map for key decision-making. Some approaches to this include:

    Understanding Patterns – how to spot the patterns between your capabilities.

    Climactic Patterns – these are the basic rules of the Wardley Map and how it evolves. You can use the table tracking these patterns to review potential changes to your strategy.

    Anticipating Changes – including more advanced connection-making like subgroups, inertia and the Red Queen effect.

    Doctrinal Principles – this is a table of universally useful patterns that you can apply to your Wardley Map to consider your current situation.

    Gameplay Patterns – this is a table of traditional business strategies you can use to influence your map and, by extension, your purpose or market.

    Fool’s Mate – a near-universal strategy for using your map to understand where to make changes through evolution.

    Why Wardley Mapping is Perfect for Business Strategy

    Wardley Mapping can be an invaluable tool in business planning and key decision-making thanks to the following factors.

    Topological Visualisation

    Wardley Mapping allows your business to visualise markets or scenarios topologically, meaning it takes into account things that change over time.

    In business, the one constant is that everything changes. Wardley Mapping can help you prepare for those changes, react ahead of time, and project key decisions for the future even before they’re necessary.

    Pattern Recognition

    The ability to view concepts in a changing environment also makes Wardley Mapping great for tracking patterns between capabilities or anchors.

    This allows you to consider how crucial decisions might impact other aspects of your business model, or identify which areas demand particular consideration.

    Uniting Senior Leaders

    Wardley Mapping allows all senior decision-makers to view the same data visualisation of a market or situation objectively.

    By then applying doctrines and patterns, management teams can discuss potential changes within a unifying environment, with a clear way to visualise and consider the impacts of these decisions through modifying the map.

    Wardley Mapping Strategy At BigCorp – GOTO Copenhagen 2023

    I’ve written this article as an introduction to my upcoming talk on October 3rd at GOTO Copenhagen 2023.

    From 09:40 – 10:30, I’ll be discussing how Wardley Mapping can be utilised in senior decision-making scenarios, as well as how I think it should be applied today.

    You can find out more about the talk, as well as the other speakers joining the conference this year, on GOTO Copenhagen’s website . I hope to see you there.

    The post Introducing Wardley Mapping to Your Business Strategy appeared first on Erlang Solutions .

      www.erlang-solutions.com/blog/introducing-wardley-mapping-to-your-business-strategy/


      Erlang Solutions: Our experts at Code BEAM Europe 2023

      news.movim.eu / PlanetJabber · Monday, 25 September, 2023 - 16:29 · 2 minutes

    The biggest Erlang and Elixir Conference is coming to Berlin in October!

    Are you ready for a deep dive into the world of Erlang and Elixir? Mark your calendars, because Code BEAM Europe 2023 is just around the corner.

    With a lineup of industry pioneers and thought leaders, Code BEAM Europe 2023 promises to be a hub of knowledge sharing, innovation, and networking.

    Erlang Solutions’ experts are working hard to create presentations and training that explore the latest trends and practical applications.

    Here’s a sneak peek into what our speakers have prepared for this event:

    Natalia Chechina - speaker at Code BEAM Europe 2023

    Natalia Chechina:
    Observability at Scale

    In her talk, Natalia will share her experience and the rules of thumb for working with metrics and logs at scale. She will also cover the theory behind these concepts.

    Read more >>


    Nelson Vides and Pawel Chrzaszcz (Code BEAM Europe 2023)


    Nelson Vides & Paweł Chrząszcz:
    Reimplementing technical debt with state machines

    A set of ideas on how to implement a core protocol that is meant to be extensible, and to keep it stable: this talk is a must-attend for all legacy-code firefighters.

    Read more >>


    Brian Underwood
    Do You Really Need Processes?

    A demo of a ride-sharing application which he created to understand what is possible with a standard Phoenix + PostgreSQL application.

    Read more >>

    Visit Our Stand: Meet experts, win prizes

    Drop by our exhibition stand at the conference venue and meet the speakers! This is a fantastic opportunity to interact one-on-one with our experts, ask questions, and gain deeper insights into their presentations.

    Win a free Erlang Security Audit for your business

    We’re showcasing our brand new Erlang Security Audit , for companies whose code is rooted in Erlang and who want to safeguard their systems from vulnerabilities and security threats. Pop by to find out more!

    Looking for a messaging solution?

    Our super-experienced team will be on hand to understand your messaging needs and demo our expert MongooseIM capabilities.  If you’re an existing MongooseIM user, you could also win a specialist Health Check to ensure the optimum and smooth operation of your MongooseIM cluster. So come by and say hello.


    Book your ticket and see you in Berlin 19-20 Oct! If you can’t make it in person, you can always join us online.

    Upskill your team

    Make sure to also check out the training offer. The day before Code BEAM Europe, you have the opportunity to join tutorials with experts, including those from Erlang Solutions: Robert Virding, Francesco Cesarini, Natalia Chechina and Łukasz Pauszek.

    Check the training programme here >>

    The post Our experts at Code BEAM Europe 2023 appeared first on Erlang Solutions .


      Erlang Solutions: Smart Sensors with Erlang and AtomVM: Smart cities, smart houses and manufacturing monitoring

      news.movim.eu / PlanetJabber · Wednesday, 20 September, 2023 - 14:56 · 8 minutes

    For our first article on IoT developments at Erlang Solutions, our goal is to delve into the use of Erlang on microcontrollers, highlighting its capability to run efficiently on smaller devices. We have chosen to address a pressing issue faced by numerous sectors, including the healthcare, real estate management, travel, entertainment, and hospitality industries: air quality monitoring. The range of measurements that can be collected is vast and varies from context to context, so we decided to use just one example of the information that can be collected as a conversation starter.

    We will guide you through the challenges and demonstrate how Erlang/Elixir can be utilised to measure, analyse, make smart decisions, respond accordingly and evaluate the results.

    Air quality is assessed by reading a range of different metrics. Carbon dioxide (CO₂) concentration, particulate matter (PM), nitrogen dioxide (NO₂), ozone (O₃), carbon monoxide (CO) and sulfur dioxide (SO₂) are usually taken into account. This collection of metrics is currently referred to as VOC (volatile organic compounds). Some, but not all VOCs are human-made and are produced in different processes, whether by urbanization, manufacturing or during the production of other goods or services.

    We are measuring CO₂ in this prototype as an example for gathering environmental readings. CO₂ is a greenhouse gas naturally present in the atmosphere and its levels are influenced by many factors, including human activities such as burning fossil fuels.

    The specific technical challenge for this prototype was to run our application in very small and power-constrained scenarios. We chose to address this by trying out AtomVM as our alternative to the BEAM.

    AtomVM is a new, lightweight implementation of the BEAM virtual machine that is designed to run as a standalone Unix binary or can be embedded in microcontrollers such as STM32 , ESP32 and RP2040 .

    Unlike a single-board computer designed to run an operating system, a microcontroller is not meant to run general-purpose software. Instead, it runs application-specific firmware, often with very low power consumption and at lower cost, making it ideal for operating IoT devices.

    Our device is composed of an ESP32 microcontroller, a BME280 sensor to measure pressure, temperature and relative humidity, and an SCD40 sensor to measure CO₂ concentration in PPM (parts per million).

    The ESP32 that we are going to use in this article is an ESP32-C3, which is a low-cost single-core RISC-V microcontroller, obtainable from authorized distributors worldwide. The SCD40 sensor is made by Sensirion and the BME280 sensor by Bosch Sensortec. There might be cheaper manufacturers for those sensors, so feel free to choose according to your needs.

    Let’s get going!

    Getting dependencies ready

    For starters, we will need to have AtomVM installed, just follow the instructions on their website.

    It is important to follow the instructions as it guarantees that you will have a working Espressif ESP-IDF installation and that you are able to flash the ESP32 microcontroller via the USB port using the esptool provided by the ESP-IDF tool suite.
    You will also need to have rebar3 installed as we are going to use it to manage the development cycle of the project.

    Bootstrapping our application

    First, we will need to create our application in order to start wiring things up on the software side. Use rebar3 for creating the application layout:

    % rebar3 new app name=co2
    ===> Writing co2/src/co2_app.erl
    ===> Writing co2/src/co2_sup.erl
    ===> Writing co2/src/co2.app.src
    ===> Writing co2/rebar.config
    ===> Writing co2/.gitignore
    ===> Writing co2/LICENSE.md
    ===> Writing co2/README.md
    
    

    Make sure to include the rebar3 plugins and dependencies before compiling the scaffold project by adding the following to your rebar.config file:

    {deps, [
        {atomvm_lib, {git, "https://github.com/atomvm/atomvm_lib.git", {branch, "master"}}}
    ]}.
    
    {plugins, [
        atomvm_rebar3_plugin
    ]}.
    

    Recent atomvm_lib development updates have not yet been published to hex.pm , so we use the master branch which has some fixes we need. Lastly, this dependency includes the BME280 driver that we are going to use.

    While we can boot the application now in our machine, we also need to implement an extra function that AtomVM will use as an entrypoint. The OTP entrypoint is defined in the co2.app.src file as {mod, {co2_app, []}}, which is the default module to use for starting an application. However, in AtomVM, we need to instruct the runtime to use a start/0 function defined within a module. That is, AtomVM does not start the same way standard OTP applications do. Therefore, some glue must be used:

    -module(co2).
    
    -export([start/0]).
    
    start() ->
        {ok, I2CBus} = i2c_bus:start(#{sda => 6, scl => 7}), %% I2C pins for the xiao esp32c3
        {ok, SCD} = scd40:start_link(I2CBus, [{is_active, true}]),
        {ok, BME} = bme280:start(I2CBus, [{address, 16#77}]),
        loop(#{scd => SCD, bme => BME}).
    
    loop(#{scd := SCD, bme := BME} = State) ->
        timer:sleep(5_000),
        {ok, {CO2, Temp, Hum}} = scd40:take_reading(SCD),
        {ok, {Temp1, Press, Hum1}} = bme280:take_reading(BME),
        io:format(
           "[SCD] CO2: ~p PPM, Temperature: ~p C, Humidity: ~p%RH~n",
           [CO2, Temp, Hum]
          ),
        io:format(
          "[BME] Pressure: ~p hPa, Temperature: ~p C, Humidity: ~p%RH~n",
           [Press, Temp1, Hum1] 
          ),
        loop(State).
    

    This module will start the main loop that reads from the sensors and displays the readings over the serial connection.

    We are using the stock BME280 driver that comes bundled with the atomvm_lib dependency, meaning that we only needed to change the address in which the BME280 sensor answers in the I2C bus.
    For the SCD40 sensor, we need to write some code. According to the SCD40 datasheet , in order to submit commands to the sensor, we need to wrap our commands with a START and STOP condition, signaling the transmission sequence. The sensor provides a range of features and functionality, but we are only concerned with starting periodic measurements and reading those values from the sensor’s memory buffer.

    %% 3.5.1 start_periodic_measurement
    do_start_periodic_measurement(#state{i2c_bus = I2CBus, address = Address}) ->
        batch_writes(I2CBus, Address, ?SCD4x_CMD_START_PERIODIC_MEASUREMENT),
        timer:sleep(500),
        ok.
    
    …
    
    batch_writes(I2CBus, Address, Register) ->
        Writes =
    	[
    	 fun(I2C, _Addr) -> i2c:write_byte(I2C, Register bsr 8) end,     %% MSB
    	 fun(I2C, _Addr) -> i2c:write_byte(I2C, Register band 16#FF) end %% LSB
    	],
        i2c_bus:enqueue(I2CBus, Address, Writes).
    

    Once the SCD40 starts measuring the environment periodically we can read from the sensor every time a new reading is stored in memory:

    %% 3.5.2 read_measurement
    read_measurement(#state{i2c_bus = I2CBus, address = Address}) ->
        write_byte(I2CBus, Address, ?SCD4x_CMD_READ_MEASUREMENT bsr 8),
        write_byte(I2CBus, Address, ?SCD4x_CMD_READ_MEASUREMENT band 16#FF),
        timer:sleep(1_000),
        case read_bytes(I2CBus, Address, 9) of
    	{ok,
    	 <<C:2/bytes-little, _CCRC:1/bytes-little,
    	   T:2/bytes-little, _TCRC:1/bytes-little,
    	   H:2/bytes-little, _HCRC:1/bytes-little>>} ->
    	    %% 2 bytes in little endian for co2
    	    %% 2 bytes in little endian for temp
    	    %% 2 bytes in little endian for humidity
    	    <<C1, C2>> = C,
    	    <<T1, T2>> = T,
    	    <<H1, H2>> = H,
    	    {ok, {(C1 bsl 8) bor C2,
    		  -45 + 175 * (((T1 bsl 8) bor T2) / math:pow(2, 16)),
    		  100 * (((H1 bsl 8) bor H2) / math:pow(2, 16))}};
    	{error, _Reason} = Err ->
    	    Err
        end.
    
    
    

    According to the datasheet, the response we get from memory is 9 bytes that we need to unpack and convert. The response includes an 8-bit CRC checksum after each data word; we don’t take it into account here, but it would be useful to validate the sensor’s response. All the conversions above follow the official datasheet’s basic command specifications.
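    If you do want to validate the response, the checksum in question is the Sensirion CRC-8 described in the datasheet: polynomial 0x31, initial value 0xFF, computed over each 2-byte word and transmitted as the byte that follows it. Below is a minimal sketch of our own (these helpers are not part of the driver code above), using only plain recursion and binary pattern matching so it doesn’t rely on any stdlib modules that might be missing from an AtomVM build:

    %% Returns true when the CRC byte following a 2-byte word matches.
    valid_word(<<_, _>> = Word, CRC) ->
        crc8(Word, 16#FF) =:= CRC.

    %% Fold the CRC-8 (poly 0x31, init 0xFF, no final XOR) over the bytes.
    crc8(<<>>, Acc) ->
        Acc;
    crc8(<<Byte, Rest/binary>>, Acc) ->
        crc8(Rest, crc8_bits(Acc bxor Byte, 8)).

    crc8_bits(Acc, 0) ->
        Acc;
    crc8_bits(Acc, N) when Acc band 16#80 =/= 0 ->
        crc8_bits(((Acc bsl 1) bxor 16#31) band 16#FF, N - 1);
    crc8_bits(Acc, N) ->
        crc8_bits((Acc bsl 1) band 16#FF, N - 1).

    With the bindings from read_measurement/1 above, unpacking the CRC byte (for example <<CO2CRC>> = CCRC, after renaming the ignored variable) and calling valid_word(C, CO2CRC) would tell us whether the CO₂ word arrived intact.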

    Flashing our application

    In order to get our application packed in AVM format and flashed to the ESP32, we will need to add a rebar3 plugin that handles all those steps for us. It is possible to perform these steps manually, but it can become tedious and error-prone. By using rebar3 again, we gain access to a more streamlined development process.

    Add the following to your rebar.config:

    {plugins, [
        {atomvm_rebar3_plugin, {git, "https://github.com/atomvm/atomvm_rebar3_plugin.git", {branch, "master"}}}
    ]}.

    This will provide a few commands that we will use, mainly `esp32_flash` and `packbeam`. Due to the way the plugin is implemented, calling `esp32_flash` will fetch the project dependencies, compile the application and pack its beam files in an AVM format designed to be flashed on our device:

    % rebar3 esp32_flash --port /dev/tty.usbmodem2101
    
    
    
    
    
    

    Note: You must use a port that matches your own.

    Obtaining readings

    If everything goes according to plan we should be able to connect to our device and see the output of the readings over the serial port. But first, we need to issue the following command within the AtomVM/src/platforms/esp32 directory:

    % ESPPORT=/dev/tty.usbmodem2101 idf.py -b 115200 monitor
    
    
    

    Note: You must use a port that matches your own.

    The output should match something along these lines:

    [SCD] CO2: 848 PPM, Temperature: 2.91405487060546875000e+01 C, Humidity: 3.74832153320312500000e+01%RH
    [BME] Pressure: 7.46429999999999949978e+02 hPa, Temperature: 2.89406427826446881000e+01 C, Humidity: 3.84759374625173900000e+01%RH
    
    
    
    

    Closing remarks

    We have explored writing an Erlang application that can run on an ESP32 microcontroller using AtomVM, an alternate implementation of the BEAM. We also managed to read environmental metrics of our interest, such as temperature, humidity and CO2 for further processing.

    Our highlights include the ease of manipulating binary data using pattern matching, and overall developer happiness.

    The ability to run Erlang applications on microcontrollers opens up a wide range of possibilities for IoT development. Erlang is a well-known language for building reliable and scalable applications, and its use on microcontrollers can help to ensure that these applications are able to handle the demands that IoT requires.

    External links

    The post Smart Sensors with Erlang and AtomVM: Smart cities, smart houses and manufacturing monitoring appeared first on Erlang Solutions .

      www.erlang-solutions.com/blog/smart-sensors-with-erlang-and-atomvm-smart-cities-smart-houses-and-manufacturing-monitoring/


      Snikket: State of Snikket 2023: Funding

      news.movim.eu / PlanetJabber · Monday, 18 September, 2023 - 13:20 · 7 minutes

    As promised in our ‘State of Snikket 2023’ overview post, and teased at the end of our first update post about app development, this post in the series is about that thing most of us open-source folk love to hate… money.

    We are an open-source project, and not-for-profit. Making money is not our primary goal, but like any business we have upstream expenses to pay - to compensate for the time and specialist work we need to implement the Snikket vision. To do that, we need income.

    This post will cover where our funding has come from over the last couple of years and where we’ve been spending it. We’ll also talk a bit about where we anticipate finding funding over the next year or so, and what some of that is budgeted for.

    Our last post on this topic was two years ago, when we announced the Open Technology Fund grant that allowed SuperBloom (then known as Simply Secure) to work on the UI/UX of the Snikket apps. Since then, other pieces of Snikket-related work have been supported by two more grants - both from projects managing funds dedicated to open source and open standards by the EU’s Next Generation Internet (NGI) initiative.

    The first one was a project called DAPSI (Data Portability and Services Incubator), focused on enabling people to move their data more easily between different online services. DAPSI funded Snikket directly to support Matthew’s work on account portability standards , which can be used not only in the software projects underlying Snikket itself, but any and all XMPP software. This one helped keep Matthew fed for much of 2021, and as we described on our blog after the funding was confirmed , it kept him busy with:

    • Standardizing the necessary protocols and formats for account data import and export
    • Developing open-source, easy-to-use tools that allow people to export, import and migrate their account between XMPP services
    • Building this functionality into Snikket

    The other grant was from the NGI Assure Fund administered by NLnet. It was one Matthew applied for on behalf of the Prosody project, and it helped keep him busy and fed through the second half of 2022 and into 2023. Prosody is the XMPP server project that the Snikket server software is built on, so any improvements there flow fairly directly to people using Snikket.

    NGI Assure is focused on improving the security of people’s online accounts, and their grant to Prosody was for work on bringing new security features like multi-factor authentication to XMPP accounts. The work included in the scope of the grant is now complete, and some of it is already available to be used. The rest will be boxed up over the coming months and released, to start finding its way into XMPP software.

    Both of these successful grant applications are practical examples of the Snikket company serving as a way to fund important work on the software and standards that the Snikket software and services depend on. Work that can be hard to fund any other way. However, grants like these usually cover a medium-to-long-term piece of work with a very specific scope, which can divert time away from other parts of the project. It is hard to find grants with a focus on general improvements, bug fixing and maintenance. This is the main reason why there hasn’t been as much work on the app side of things, nor updates on this blog.

    We very much appreciate the grants we’ve received from all these funders, and the important features they have enabled us to implement. But ultimately we see “side income” like grants as a short-term way to plug the holes in our financial bucket while we’re still getting up and running. Our long term goal, as a social enterprise ( specifically a UK-based Community Interest Company ), has always been to earn the income we need through donations and by providing commercial services to the community using Snikket software.

    When Snikket began, the main plan for this was to set up a hosting service, where people can pay a regular subscription to have us look after their Snikket server (more on this below). But over the last year or so we’ve discovered that there’s a lot to be gained from partnering with other social enterprises with shared values and related goals.

    One such company is JMP.chat, an innovative telephony company who provide phone numbers that can be used with XMPP apps, for both text messages and calls. They celebrated JMP’s official public launch a few months ago.

    We’re very grateful to JMP for funding the other half of Matthew’s work hours while he was beavering away on the NGI Assure grant work. Why were they willing to do that? To answer that, we need to tell you a bit more about what they do.

    During the six years their service has been in beta testing, JMP’s first priority has been developing software gateways to allow XMPP apps to communicate with mobile phone networks, and vice-versa. However, many of their customers are newcomers to the world of XMPP. They would often struggle to find suitable apps with the required features for their platform, and struggle to find good servers on which they can register their XMPP accounts.

    What could be a better solution to this problem than a project that aims to produce a set of easy-to-use XMPP-compliant apps with a consistent set of features across multiple platforms? Yes - Snikket complements their service wonderfully!

    So we have been collaborating a lot with JMP (or more generally, Soprani.ca - their umbrella project for all their open-source projects, including JMP). On the app development side, we share code between Snikket Android and their Cheogram Android app (both are based on, and contribute back to, Conversations). We have also worked to ensure that iOS is not left behind, integrating features such as an in-call dial pad to Snikket iOS as well.

    If JMP customers don’t already have access to a hosted XMPP server, and have neither the time nor the skills to run their own, they need one of those too. So JMP have been suggesting Snikket’s hosting service to customers who don’t have an XMPP account yet. With all the necessary features for a smooth experience, easy setup and hosting available, Snikket ticks all the boxes. In fact, the latest version of Cheogram allows you to launch your own Snikket instance directly within the app!

    A lot of work has been put into ensuring the hosting service is easy, scalable and reliable - to be ready for JMP’s launch traffic and also well into the future.

    But while JMP is an excellent partner, Snikket isn’t only about JMP. We’re preparing for our own service to also exit beta before the end of this year. Once we do, revenue from the service will help us cover the costs of continuing to grow and advance all of our goals. Pricing has not been set yet, but we’re aiming for a balance between sustainable and affordable.

    JMP will continue to sponsor half of Matthew’s time on the project. The other half is covered by our other supporters. You know who you are and we’re very grateful for your support.

    The income sources we’ve talked about so far pay for Matthew’s time to work on Snikket and related projects. We also appreciate the donations a number of people have made to the project via LiberaPay and GitHub sponsorships. These help us pay for incidental expenses like:

    • Project infrastructure, including this website, domain names, and push notification services and monitoring.

    • Development costs, like paying for an Apple developer account.

    • Travel costs of getting to conferences for presentations.

    One other important thing these donations help to pay for is test devices.

    We buy, or are donated, second-hand devices for developing and testing the Snikket apps. Used devices are much cheaper, so we can get more test devices for the same budget. Also, most people don’t get a brand new device every year, so these slightly older devices are more likely to match what the average person is using.

    Finally, we consider the environmental benefit. Using older but functional devices gives them a second life, preventing them from being needlessly scrapped, and keeping them out of the growing e-waste piles our societies now produce.

    So that’s everything there is to share on the topic of Snikket’s finances for now. But we’re not done with our ‘State of Snikket 2023’ updates, oh no.

    As we mentioned at the end of the last piece in this series, there’s at least one more coming, about new regulations for digital technology and online services. A number of governments around the world are passing or proposing laws that could affect Snikket - some of them a bit concerning - and we have a few things to say about them.

    We’re also going to sneak in a review of the inaugural FOSSY conference Matthew presented at recently.

    Watch this space!

      snikket.org/blog/snikket-2023-funding/


      JMP: Newsletter: Summer in Review

      news.movim.eu / PlanetJabber · Wednesday, 13 September, 2023 - 20:19 · 2 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    Since our launch at the beginning of the summer, we’ve kept busy.  We saw some of you at the first FOSSY , which took place in July.  For those of you who missed it, the videos are out now .

    Automatic refill for users of the data plan is in testing now.  That should be fully automated a bit later this month and will pave the way for the end of the waiting list, at least for existing JMP customers.

    This summer also saw the addition of two new team members: welcome to Gnafu the Great who will be helping out with support, and Amolith , who will be helping out on the technical side.

    There have also been several releases of the Cheogram Android app ( latest is 2.12.8-2 ) with new features including:

    • Support for animated avatars
    • Show “hats” in the list of channel participants
    • An option to show related channels from the channel details area
    • Emoji and sticker autocomplete by typing ‘:’ (allows sending custom emoji)
    • Tweaks to thread UI, including no more auto-follow by default in channels
    • Optionally allow notifications for replies to your messages in channels
    • Allow selecting text and quoting the selection
    • Allow requesting voice when you are muted in a channel
    • Send link previews
    • Support for SVG images, avatars, etc.
    • Long press send button for media options
    • WebXDC importFiles and sendToChat support, allowing, for example, import and export of calendars from the calendar app
    • Fix Command UI in tablet mode
    • Manage permissions for channel participants with a dialog instead of a submenu
    • Ask if you want to moderate all recent messages by a user when banning them from a channel
    • Show a long streak of moderated messages as just one indicator

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!

    blog.jmp.chat/b/septermber-newsletter-2023

    • chevron_right

      Erlang Solutions: Diversity & Inclusion at CodeBEAM Europe

      news.movim.eu / PlanetJabber · Wednesday, 13 September, 2023 - 13:31 · 1 minute

    Our Pledge to Diversity

    As technology becomes increasingly integrated into our lives, it’s crucial that the minds behind it come from diverse backgrounds. Different viewpoints lead to more comprehensive solutions, ensuring that the tech we create addresses the needs of a global audience.

    At Erlang Solutions, we believe that a diverse workforce is a catalyst for creativity and progress. By sponsoring the Diversity & Inclusion Programme for Code BEAM Europe 2023, we’re reinforcing our commitment to creating a tech landscape that is reflective of the world we live in.

    This initiative is not just about breaking down barriers; it’s about opening doors to new perspectives, ideas, and endless possibilities.

    At Erlang Solutions, we believe that diversity isn’t just a buzzword – it’s a fundamental pillar of progress. Our sponsorship of the Diversity & Inclusion Programme at Code BEAM aligns perfectly with our values. We’re excited to be part of an event that encourages open dialogue, showcases diverse talent, and paves the way for a more inclusive tech industry. – Jo Galt, Talent Manager.

    The goal of the programme is to increase the diversity of attendees and offer support to groups underrepresented in the tech community who would not otherwise be able to attend the conference.

    The Diversity & Inclusion Programme focuses primarily on empowering women, ethnic minorities or people with disabilities, among others, but everybody is welcome to apply.

    The post Diversity & Inclusion at CodeBEAM Europe appeared first on Erlang Solutions.

    • chevron_right

      Erlang Solutions: Protected: Navigating the Unconventional: Introducing Erlang and Elixir

      news.movim.eu / PlanetJabber · Thursday, 7 September, 2023 - 16:08

    This content is password protected.

    The post Protected: Navigating the Unconventional: Introducing Erlang and Elixir appeared first on Erlang Solutions.

    www.erlang-solutions.com/blog/navigating-the-unconventional-introducing-erlang-and-elixir/

    • chevron_right

      Erlang Solutions: Pay down technical debt to modernise your technology estate

      news.movim.eu / PlanetJabber · Thursday, 7 September, 2023 - 16:07 · 5 minutes

    Imagine this scenario. Your CEO tells you the organisation needs a complete tech overhaul, then gives you a blank cheque and free rein. He tells you to sweep away the old and usher in the new. “No shortcuts, no compromise!” he cries. “Start from scratch and make it perfect!”

    And then you wake up. As we all know, this scenario is pure fantasy. Instead, IT leaders are faced with a constant struggle to keep up with the needs of the business, using limited resources, time-saving shortcuts and legacy systems.

    That’s the normal way of things and it can be a recipe for serious technical debt.

    What is technical debt?

    You’ve probably heard the term “technical debt”, even if there’s no agreement on what exactly it means.

    At its simplest, technical debt is the price you pay for an unplanned, non-optimised IT stack. There are many reasons for technical debt but in every definition, the debt tends to build over time and become more complex to solve.

    The phrase is intentionally analogous to financial debt. When you buy a house, you take out debt in the form of a mortgage. You get instant access to the thing you need – a home – but there are consequences down the line in the form of interest payments.

    As the metaphor suggests, technical debt is not always bad, just as financial debt is not always bad. It can be useful to do things quickly, as long as you’re prepared to tackle the consequences when they inevitably emerge. Unfortunately, lots of organisations take on the debt without thinking about the challenges to come.

    The challenges of technical debt

    How does technical debt come about? The simple answer is, in the normal cut and thrust of running a busy organisation.

    Source: McKinsey & Company

    • You create a temporary fix to a software problem and then don’t have time to design a better one. Over time, the temporary fix becomes a permanent part of your solution. The sticking plaster becomes the cure.
    • Your development teams are put under pressure to get something to market in super-quick time, to grasp a time-sensitive opportunity. They get it done – brilliantly – by making something that works well for the task in hand, but time-saving shortcuts slow down other systems in the longer term.
    • Finance refuses to replace legacy systems that just about work, even if they can’t offer the flexibility or speed that a modern digital-first organisation needs.

    In each case, complexity builds. One quick fix after another undermines the efficiency of your wider technology stack. Solutions work in isolation when they need to work together. Systems creak under the pressure of outdated or overly complex code.

    What level of debt does this ad hoc activity accumulate? According to research by consultancy McKinsey, tech debt amounts to between 20% and 40% of the worth of an organisation’s entire technology stack. The study also found that those with the lowest technical debt performed better.

    Source: Tech Target

    How to manage technical debt

    So what can you do about it? In short, the way to manage technical debt is to pay it off. Not necessarily all of it, because some debt is OK. But it should be nearer 5% than 40%.

    The first thing is to understand what your technical debt is, and what’s causing it. Some of this you may instinctively know, such as the slowdown in productivity caused by legacy infrastructure.

    Elsewhere, it can be relatively straightforward to identify the symptoms of an overly complex or outdated system:

    • When an engineer is assigned a support ticket, how long does it take to complete the task? Are average completion times increasing? If so, you’re accumulating technical debt.
    • Are you having to fix your fixes? Maybe an application requires reworking one week and then again a couple of weeks later. In this case, debt is building up.
    • If you often have to develop applications and solutions quickly, or patch legacy software to keep it running, you’re also likely to be accumulating technical debt.

    In business-critical applications, it’s worth analysing the quality of the underlying code. It might have been fine five years ago, but half a decade can be a long time in technology. Rewriting it in a modern, more efficient language is likely to deliver significant gains.
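
    For an Elixir codebase specifically, one concrete way to start measuring code-level debt is to add a static analysis tool such as Credo to the project. This is purely an illustration of the idea, not a tool the article prescribes, but it turns “code quality” into something you can track over time. In mix.exs:

        # mix.exs: add Credo as a dev/test dependency; it flags
        # refactoring opportunities, inconsistencies and warnings.
        defp deps do
          [
            {:credo, "~> 1.7", only: [:dev, :test], runtime: false}
          ]
        end

    Running mix deps.get and then mix credo --strict produces a report whose issue count can serve as a rough, automated proxy for some of the debt described above.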

    Don’t try and do everything

    Identifying and reducing technical debt is a resource-intensive task. But it can be made manageable if you focus on evolution rather than revolution.

    McKinsey cites the example of a company that identified technical debt across 50 legacy applications but found that most of that debt was driven by fewer than 20. Every business is likely to have core assets that create most of its technical debt. They’re the ones to focus on.

    When you have identified the most debt-laden solutions, put the support, funding and governance in place to pay down the debt. Create meaningful KPIs and keep to them. Think about how to avoid debt accumulating again after modernisation projects are completed.

    Elixir and technical debt: commit to modern coding

    At the core of that future-proofing effort is code. Legacy coding techniques and languages create clunky, inefficient applications that will soon load your organisation with technical debt.

    One way to avoid that is through the use of Elixir, a simple, lightweight programming language that is built on top of the Erlang virtual machine.

    Elixir helps you avoid technical debt by making it easy to write uncomplicated, effective code. The simpler the code, the less likely it is to go wrong, and the easier it is to identify faults if it does.
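
    As a small, purely illustrative sketch (the module, function and field names are hypothetical, not taken from any real codebase), logic that often grows into nested conditionals in legacy code can be expressed in Elixir as a short pipeline with pattern matching:

        defmodule Orders do
          # Hypothetical example: total the value of completed orders.
          # Each step in the pipeline is small and self-describing,
          # which keeps faults easy to localise.
          def completed_total(orders) do
            orders
            |> Enum.filter(&match?(%{status: :completed}, &1))
            |> Enum.map(& &1.amount)
            |> Enum.sum()
          end
        end

    Calling Orders.completed_total([%{status: :completed, amount: 10}, %{status: :pending, amount: 99}]) returns 10.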

    In addition, Elixir runs on the Erlang virtual machine, whose schedulers spread lightweight processes across every available CPU core, so applications tend to make good use of their hardware without the kind of ad hoc tuning that adds technical debt.
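
    A minimal sketch of what that looks like in practice (the squaring function here is just a stand-in for real CPU-bound work):

        defmodule Work do
          # Hypothetical sketch: fan work out across cores.
          # Task.async_stream defaults to one worker per scheduler
          # thread, so the VM uses the available hardware for you.
          def run(items) do
            items
            |> Task.async_stream(fn n -> n * n end)
            |> Enum.map(fn {:ok, result} -> result end)
          end
        end

    Work.run(1..100_000) processes the whole range concurrently, with the number of workers matching the number of schedulers (normally one per core) by default.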

    In short, Elixir is a modern language that is designed for modern technology estates. It reduces technical debt through simplicity, efficiency and easy optimisation.

    Want to know more about efficient, effective development with Elixir, and how it can reduce your technical debt? Why not drop us a line?

    The post Pay down technical debt to modernise your technology estate appeared first on Erlang Solutions.

    www.erlang-solutions.com/blog/managing-technical-debt-organization-challenges-and-solutions/