MongooseIM, our scalable, flexible and cost-efficient instant messaging server, is now easier to use than ever before. The latest release, 6.2, introduces a completely new CETS in-memory storage backend, letting you deploy it easily on modern cloud infrastructure such as Kubernetes. We have also updated the XMPP extensions, adding support for new features of the XMPP protocol.
The new version of MongooseIM is very easy to try out, as there are two new options:
- Firstly, you can check out trymongoose.im – a live demo installation of the latest version, which lets you create your own XMPP domain and experiment with it. It also showcases how a Phoenix web application can be integrated with MongooseIM using its GraphQL API.
- Secondly, if you want to run your own MongooseIM installation, you can now easily set it up in Kubernetes with Helm. Our new Helm chart automatically templates the configuration files, making it possible to quickly set up a running cluster of several nodes connected to a database.
One of the biggest new features is the support for CETS, which makes management of MongooseIM much easier than before. To fully appreciate this improvement, we need to start with an overview of the clustered storage options in MongooseIM. We will follow with a brief guide, helping you quickly set up a running server with the latest features enabled.
From Mnesia to CETS
MongooseIM is implemented in Erlang, making it possible to handle millions of connected clients exchanging messages. However, a typical user should not need any Erlang knowledge to deploy and maintain a messaging server. Up to version 6.1, one component breaks this assumption, making management and maintenance much harder: the built-in Erlang database, Mnesia. It is convenient when you are starting your journey with MongooseIM, because it resides on the local disk and does not need to be started as a separate service. All MongooseIM nodes are clustered together, and they replicate Mnesia tables between them.
Issues with Mnesia
When you go beyond small experiments on your local machine, it is essential not to store any persistent data in Mnesia, because it is not designed for storing large volumes of data. Also, network connectivity issues or incorrect restarts might make your database inconsistent, leading to unexpected errors and cluster nodes refusing to start. It is also difficult to migrate your data to another database. That is why it is strongly recommended to use a relational database management system (RDBMS) such as PostgreSQL or MySQL, which you can host yourself or use via cloud-based solutions such as Amazon RDS. However, when you configure MongooseIM 6.1 and its extension modules to use RDBMS, you will find out that the server still needs Mnesia for its operation. This is because Mnesia is also used to store in-memory data shared between the cluster nodes. For example, by sharing user sessions, MongooseIM can route messages between users connected to different nodes of the cluster.
When Mnesia was first created, a server node used to be a long-running physical unit that was very rarely restarted – indeed, one of the main advantages of Erlang was the ability to significantly reduce downtime. With the introduction of virtualisation and containers, a server node is no longer tied to the underlying hardware, and new nodes can be dynamically added or removed. This means that the cluster is much more dynamic, and nodes are started more often. This brings us to another issue with Mnesia – the need to store the database schema on disk, which contains information about all nodes in the cluster and their tables. This is mostly a problem on platforms like Kubernetes, where adding disk storage requires the use of persistent volumes, which are costly and need to be manually deleted when a node is removed from the cluster. As a result, the whole management process becomes more error-prone.
Another problem is the additional cluster management required for each node. When a new node starts up, it is not a member of any cluster, and a join_cluster command needs to be executed. The same happens with node removal, when leave_cluster needs to be called. For the convenience of the user, our Helm charts automatically call these commands for the started nodes, but the nodes still need to be started in a particular order, which has to be respected when doing restarts and upgrades as well. If for some reason you change that order, the nodes might be locked until all of them are online (see the documentation) – which is inconvenient, might result in overload, and can even cause the whole cluster to be down if the final node does not start up for some reason. Finally, network connectivity issues might result in an inconsistent database or other errors (even without persistent tables), which can be difficult to understand for anyone but Erlang developers and may require manual intervention on the affected nodes. The solution is usually to stop the affected node, clean up the Mnesia volume, and start it again – which adds unwanted downtime for the server and workload for the operator.
It is important to note that we have these issues not because Mnesia is inherently bad, but because our use case has drifted away from its intended purpose: we do not need persistence or transactions, but we would benefit from features like automatic conflict resolution and dynamic cluster discovery. This situation led us to develop a new library which precisely meets our requirements.
Introducing Cluster ETS
CETS is a lightweight replication layer for ETS (Erlang Term Storage) tables. The main principle of this library is to replicate ETS data to other nodes of the cluster with simple and automatic conflict resolution. In most cases conflicts are not even possible, because the key of each stored key-value tuple uniquely identifies the creating node. In MongooseIM, we are using the RDBMS cluster node discovery mechanism. This means that each cluster node periodically updates the database, storing its name and IP address in the discovery_nodes table. Other nodes periodically check this table to determine the cluster members and connect to them. Nodes that are down for a long time (by default 1 hour) are removed from the table to avoid repeated attempts to connect to them. The database used for CETS is the same one used to store other persistent data, so in a typical case no extra databases are required.
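On the MongooseIM side, this setup boils down to a few lines of TOML configuration. The fragment below is an illustrative sketch – the section names follow the MongooseIM configuration format, but check the configuration reference for your version, and note that the pool parameters and credentials are example values:

```toml
# Use CETS for in-memory tables, with node discovery backed by RDBMS
[internal_databases.cets]
  backend = "rdbms"

# The same PostgreSQL database stores persistent data and the discovery_nodes table
[outgoing_pools.rdbms.default]
  scope = "global"
  workers = 5
  connection.driver = "pgsql"
  connection.host = "localhost"
  connection.database = "mongooseim"
  connection.username = "mongooseim"
  connection.password = "mongooseim_secret"
```

With a single RDBMS pool configured like this, both persistent storage and CETS discovery share one database connection setup.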
The first benefit visible to the user is that nodes no longer need to be explicitly added to the cluster. You don’t need commands like join_cluster or leave_cluster – in fact, you cannot use them anymore. Another immediate benefit is that MongooseIM no longer requires persistent volumes, which means that any node can be immediately replaced by a fresh instance. Consistency errors are also no longer possible, because there is no persistent schema, and any (unlikely) conflicts are resolved automatically.
Using CETS
Let’s see how quickly the new MongooseIM with CETS can be set up. This simple example assumes that you have Docker and Kubernetes installed locally. These tools simplify the setup process a lot, but if you cannot use them, you can also configure MongooseIM to use CETS manually – see the tutorial. In this example we will use PostgreSQL for all persistent storage in MongooseIM, including CETS node discovery. You only need to download the database schema file pg.sql to your current directory and execute the following command:
$ docker run -d --name mongooseim-postgres -e POSTGRES_PASSWORD=mongooseim_secret \
-e POSTGRES_USER=mongooseim -v `pwd`/pg.sql:/docker-entrypoint-initdb.d/pgsql.sql:ro \
-p 5432:5432 postgres
The database should be up and running – let’s check it with psql:
$ PGPASSWORD=mongooseim_secret psql -U mongooseim -h localhost
(...)
mongooseim=#
Next, let’s install MongooseIM in Kubernetes with Helm. The volatileDatabase and persistentDatabase options are used to populate the generated MongooseIM configuration file with the required database options. Since we have set up the database with the default MongooseIM credentials, we don’t need to provide them here. If you want to use a different user name, password or other parameters, see the chart documentation for a complete list of options.
$ helm repo add mongoose https://esl.github.io/MongooseHelm/
$ helm install mim mongoose/mongooseim --set replicaCount=3 --set volatileDatabase=cets \
--set persistentDatabase=rdbms
NAME: mim
LAST DEPLOYED: Tue Nov 28 08:56:16 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing MongooseIM 6.2.0
(...)
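Instead of passing --set flags on the command line, the same chart options can be kept in a values file – standard Helm usage, with the option names taken from the install command above:

```yaml
# values.yaml – equivalent to the --set flags used with helm install
replicaCount: 3
volatileDatabase: cets
persistentDatabase: rdbms
```

Then install with: helm install mim mongoose/mongooseim -f values.yaml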
Your three-node cluster using CETS and RDBMS should start up quickly. You can monitor its progress with Kubernetes:
$ watch kubectl get sts,pod,svc
NAME READY AGE
statefulset.apps/mongooseim 3/3 2m
NAME READY STATUS RESTARTS AGE
pod/mongooseim-0 1/1 Running 0 2m
pod/mongooseim-1 1/1 Running 0 2m
pod/mongooseim-2 1/1 Running 0 1m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 91d
service/mongooseim ClusterIP None <none> 4369/TCP,5222/TCP, (...) 2m
service/mongooseim-lb LoadBalancer 10.102.205.139 localhost 5222:32178/TCP, (...) 2m
When the XMPP port 5222 is open on localhost by the load balancer, the whole service is ready to use. You can check the CETS cluster status on each node with the CLI (or the GraphQL API). The following command checks the status on mongooseim-0 (the first node in the cluster):
$ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl cets systemInfo
{
"data" : {
"cets" : {
"systemInfo" : {
"unavailableNodes" : [],
"remoteUnknownTables" : [],
"remoteNodesWithoutDisco" : [],
"remoteNodesWithUnknownTables" : [],
"remoteNodesWithMissingTables" : [],
"remoteMissingTables" : [],
"joinedNodes" : [
"mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
],
"discoveryWorks" : true,
"discoveredNodes" : [
"mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
],
"conflictTables" : [],
"conflictNodes" : [],
"availableNodes" : [
"mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
]
}
}
}
}
You should see all nodes listed in joinedNodes, discoveredNodes and availableNodes, while the other lists should be empty. There is also a tableInfo command, which shows information about each table:
$ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl cets tableInfo
{
"data" : {
"cets" : {
"tableInfo" : [
{
"tableName" : "cets_bosh",
"size" : 0,
"nodes" : [
"mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
],
"memory" : 141
},
{
"tableName" : "cets_cluster_id",
"size" : 1,
"nodes" : [
"mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
],
"memory" : 156
},
{
"tableName" : "cets_external_component",
"size" : 0,
"nodes" : [
"mongooseim@mongooseim-0.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-1.mongooseim.default.svc.cluster.local",
"mongooseim@mongooseim-2.mongooseim.default.svc.cluster.local"
],
"memory" : 307
},
(...)
]
}
}
}
You can find more information about these commands in our GraphQL docs, because the CLI actually uses the GraphQL commands underneath. To complete our example, let’s create our first XMPP user account:
$ kubectl exec -it mongooseim-0 -- /usr/lib/mongooseim/bin/mongooseimctl account registerUser \
--username alice --domain localhost --password secret
{
"data" : {
"account" : {
"registerUser" : {
"message" : "User alice@localhost successfully registered",
"jid" : "alice@localhost"
}
}
}
}
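Since the CLI is a thin wrapper over the GraphQL schema, the same operation can be expressed as a GraphQL mutation against the admin API. The shape below is a sketch inferred from the CLI command and its output above – consult the GraphQL docs for the authoritative schema:

```graphql
# Hypothetical admin mutation mirroring the CLI call above
mutation {
  account {
    registerUser(domain: "localhost", username: "alice", password: "secret") {
      jid
      message
    }
  }
}
```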
Now you can connect to the server with an XMPP client as alice@localhost – see https://trymongoose.im/client-apps or https://xmpp.org/software/?platform=all-platforms for client software.
New extensions
MongooseIM 6.2 satisfies the XMPP Compliance Suites 2023, as reported at xmpp.org. Thanks to the new extensible architecture of mongoose_c2s, we are implementing new extensions faster than before. For example, we have recently added support for XEP-0386: Bind 2 and XEP-0388: Extensible SASL Profile, allowing the client to authenticate, bind the resource and enable extensions like message carbons, stream management and client state indication. All of this can be done in a single step, without redundant round trips (see the example). This way your clients can establish their sessions faster than before, putting less load on both the client and the server. We have also updated multiple extensions to their latest versions, and we will continue the effort to keep them up to date while adding new ones. Do you think we should support a new XMPP extension? Feel free to request a feature, so we can put it on our roadmap – and if you really need it now, we can discuss possible sponsoring options.
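To illustrate what this looks like on the wire, here is a simplified sketch of a client request combining SASL2 authentication with an inline Bind 2 request (namespaces per XEP-0388 and XEP-0386; the exchange is abbreviated and the server’s response is omitted):

```xml
<!-- Simplified sketch: authenticate and bind the resource in one round trip.
     The base64 payload is the SASL PLAIN response for alice/secret. -->
<authenticate xmlns='urn:xmpp:sasl:2' mechanism='PLAIN'>
  <initial-response>AGFsaWNlAHNlY3JldA==</initial-response>
  <bind xmlns='urn:xmpp:bind:0'>
    <tag>my-client</tag>
  </bind>
</authenticate>
```

The server can answer with a success element that already contains the bound resource, so features like stream management can be enabled without extra round trips.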
Summary
With the latest release 6.2, we have brought MongooseIM closer to you. Now you can try it out online as well as easily install it in Kubernetes, without caring about persistent state and volumes. Your next step is to try our live demo, install MongooseIM with Helm and experiment with it. You can do it all for free and without Erlang knowledge, so go ahead and use it as the foundation of your new messaging solution. You are also not left alone – should you have any questions, please feel free to contact us, and we will be happy to deploy, load-test, health-check, optimise and customise MongooseIM to fit your needs.
The post MongooseIM 6.2: Easy to set up, use and manage appeared first on Erlang Solutions.