Thursday, 23 July, 2020
Getting started with the XTDB builder
Spinning up a new XTDB node is generally a case of including the relevant XTDB dependencies within your project, defining a 'topology' configuration, and then making a call to start-node with that configuration. This is not a particularly difficult process, but it still requires you to dip into Clojure or Java before you can get something running.
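For reference, a minimal sketch of that manual route in Clojure might look like the following (assuming com.xtdb/xtdb-core is already on the classpath):

;; start a node directly from Clojure
(require '[xtdb.api :as xt])

;; an empty configuration map gives a purely in-memory node; richer
;; topologies pass module configuration here instead (see the xtdb.edn
;; examples later in this post)
(with-open [node (xt/start-node {})]
  (xt/status node))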
We created the xtdb-builder tool to bypass this process and make getting started much simpler for non-Clojure users.
xtdb-builder was initially released with XTDB in 20.02-1.7.0-alpha. It offers a small range of 'batteries included' JARs and Docker containers to help you get a basic XTDB implementation running, and an easy mechanism to generate your own custom XTDB deployment artifacts.
Batteries included - xtdb-in-memory
Available on both:
- Docker Hub, as a ready-to-run image
- GitHub releases, see the xtdb-in-memory.jar attached to each release.
This starts up a bare-bones XTDB implementation, utilising three XTDB modules:
- xtdb-core - required by all nodes.
- xtdb-http-server - each configuration starts an HTTP server (on port 3000).
- xtdb-sql - starts a SQL server (on port 1501).
On the Docker Hub page there is a basic tutorial for getting started with the XTDB REST API, and you can find out more in the HTTP module docs. For communicating with the node using SQL, see the xtdb-sql docs.
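If you prefer to stay in Clojure, a sketch of talking to the running node over HTTP might look like this (assuming the com.xtdb/xtdb-http-client dependency is on the classpath and the node is listening on the default port 3000):

(require '[xtdb.api :as xt])

;; connect to the node's HTTP server as a remote client
(with-open [client (xt/new-api-client "http://localhost:3000")]
  ;; write a document, wait for it to be indexed, then read it back
  (xt/await-tx client (xt/submit-tx client [[::xt/put {:xt/id :hello, :greeting "Hello world"}]]))
  (xt/entity (xt/db client) :hello))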
xtdb-in-memory
is intended for users to get started with XTDB, without requiring any prior Java/Clojure knowledge. It contains an in-memory node that does not persist any data across restarts.
We plan to provide further 'batteries included' implementations in the future (suggestions are welcome!), but let’s now take a look at how to generate custom XTDB deployment artifacts.
Roll your own XTDB deployment artifacts
Alongside the JARs deployed on the GitHub releases is xtdb-builder.tar.gz
- using the scripts in this archive, we can customise and build a JAR or a Docker container.
Building a JAR (Clojure CLI tooling):
The clj-uberjar
folder contains a number of files:
- a deps.edn file to configure the Maven dependencies required
- an xtdb.edn file to configure the node itself (a minimal example is sketched below)
- a resources/logback.xml file to configure logging output
- a build-uberjar.sh script to build the JAR.
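Out of the box, the sample xtdb.edn enables just an in-memory node with the HTTP server; a minimal sketch of such a configuration (the file shipped in the archive may differ slightly) is:

{:xtdb.http-server/server {:port 3000}}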
In this example, let's add RocksDB as the KV store, keeping the indexes, documents and tx-log in directories under /tmp/xtdb.
First, we add com.xtdb/xtdb-rocksdb as a dependency in deps.edn.
...
com.xtdb/xtdb-rocksdb {:mvn/version "21.01-1.14.0-beta"}
...
Then, in xtdb.edn, use this new topology:
{:xtdb/index-store {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store, :db-dir "/tmp/xtdb/indexes"}}
:xtdb/document-store {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store, :db-dir "/tmp/xtdb/documents"}}
:xtdb/tx-log {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store, :db-dir "/tmp/xtdb/tx-log"}}
:xtdb.http-server/server {}}
To build the JAR, run the build-uberjar.sh
script.
You can optionally pass the environment variable UBERJAR_NAME to the script (for example, UBERJAR_NAME=xtdb-rocks.jar ./build-uberjar.sh), otherwise the built uberjar will be called xtdb.jar.
To run the Clojure uberjar, use the following command: java -jar xtdb.jar. This starts up a node with both an HTTP server and persistent storage.
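To check the node is up, you could hit the HTTP server's status endpoint, for example (assuming the /_xtdb/status path used by recent XTDB releases):

curl http://localhost:3000/_xtdb/status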
Building a JAR (Maven tooling):
Similarly to building a JAR using the Clojure CLI tooling, we can also build an uberjar using Maven.
In the mvn-uberjar directory, we can add dependencies to the pom.xml file, update the xtdb.edn file as before, and then run build-uberjar.sh to create the uberjar. To run the Maven-generated uberjar, use the same command: java -jar xtdb.jar.
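For illustration, the extra dependency block added to pom.xml might look like the following sketch, mirroring the deps.edn example above:

<!-- RocksDB module, same coordinates as in the deps.edn example -->
<dependency>
  <groupId>com.xtdb</groupId>
  <artifactId>xtdb-rocksdb</artifactId>
  <version>21.01-1.14.0-beta</version>
</dependency>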
Building a Docker Container:
In the docker directory, there is a similar set of files to the uberjar examples above, as well as a Dockerfile and a build-docker.sh script.
As with building a JAR, we can add rocksdb as the KV store - first, by adding a dependency on com.xtdb/xtdb-rocksdb to deps.edn.
Default index, document and tx-log directories are already configured within the sample xtdb.edn (allowing you to map the Docker directories and give the image persistent storage); we then need to add xtdb.rocksdb/->kv-store as the KV store for each of the three components:
{:xtdb/index-store {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store, :db-dir "/var/lib/xtdb/indexes"}}
:xtdb/document-store {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store, :db-dir "/var/lib/xtdb/documents"}}
:xtdb/tx-log {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store, :db-dir "/var/lib/xtdb/tx-log"}}
:xtdb.http-server/server {}}
To build your Docker container, run the build-docker.sh
script.
You can optionally pass an IMAGE_NAME and IMAGE_VERSION to tag the container (by default, the custom Docker container is called xtdb-custom:latest).
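Once built, a sketch of running the container with the HTTP port exposed and a (hypothetical) host directory mounted over /var/lib/xtdb for persistent storage might be:

# map port 3000 and mount a host directory over the node's data directories
docker run -p 3000:3000 -v /path/on/host/xtdb:/var/lib/xtdb xtdb-custom:latest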
In Summary
This is only scratching the surface of different setups that XTDB’s unbundled nature allows. For more information, check out the configuration section of the docs.
Get busy building!
As always, feel free to reach out to us on the 'Discuss XTDB' forum, the #xtdb channel on the Clojurians' Slack or via hello@xtdb.com.