# Understanding Kaabah
Prerequisites
Kaabah relies on various technologies such as Terraform, Docker Swarm, Traefik... and we assume that you are sufficiently familiar with them. If not, please take some time to discover them.
# Key concepts
Kaabah lets you manipulate the following entities:
- Workspace: a collection of everything Kaabah needs to create and manage an infrastructure.
- Configuration: a set of Terraform variables used to define your infrastructure.
- Cluster: a Docker Swarm infrastructure.
The following image illustrates how these entities interact:
In addition, Kaabah provides a set of commands that help you operate the cluster. For instance, you can easily prune all the images on the different nodes, execute a command on a given node... See the reference page for the complete list.
# Workspace
Kaabah is designed to take advantage of Terraform Workspaces. Indeed, Kaabah relies on the Terraform recommended practices and assumes a workspace is used to store the data needed to build and manage an infrastructure for a specific environment (staging, production...):
- the configuration of the infrastructure,
- the SSH private key used to connect to the infrastructure,
- the user scripts you want to be executed when creating the infrastructure,
- the Terraform states of the infrastructure.
Starting from this premise, Kaabah lets you manage as many clusters as your projects require. If we decide to name our workspaces with both the project name and its environment (e.g. dev, test...), we can sketch the following diagram to illustrate the overall functioning of Kaabah:
In this diagram, the states of the different workspaces are stored within a dedicated bucket on Amazon S3, but you are free to use any other Terraform backend.
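As an illustration, a backend declaration for such a setup could look like the following sketch (the bucket name, key and region are placeholders, not values required by Kaabah); Terraform then stores each workspace's state under its own prefix in the bucket:

```hcl
terraform {
  backend "s3" {
    # Hypothetical bucket gathering the states of all Kaabah workspaces
    bucket = "my-kaabah-states"
    key    = "kaabah.tfstate"
    region = "eu-west-1"
  }
}
```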
# Configuration
The Kaabah configuration file is a Terraform variable file describing the characteristics of the desired infrastructure.
Here is an example of a configuration file:
```hcl
cloud_provider = "AWS"
manager_instance_type = "t2.small"
manager_ips = ["3.115.176.41"]
worker_instance_type = "t3.large"
worker_instance_count = 3
worker_additional_volume_size = 500
worker_additional_volume_type = "st1"
worker_additional_volume_mount_point = "/mnt/data"
```
Assuming the current workspace is `app-dev`, then when applying such a configuration, Kaabah will generate a Docker Swarm infrastructure on AWS (`cloud_provider` variable) composed of:
- 1 manager node, `app-dev-manager`, of type `t2.small`, with the public IP address `3.115.176.41`.
- 3 worker nodes, `app-dev-worker-0`, `app-dev-worker-1` and `app-dev-worker-2`, of type `t3.large`. To each worker are attached 2 throughput-optimized hard disks (`st1`) of 500 GB, and these volumes are accessible through the mount points `/mnt/data0` and `/mnt/data1`.
Kaabah exposes many more variables allowing you to customize your infrastructure in detail. Have a look at the complete list of variables and the tests as examples.
# Cluster
The cluster consists of multiple Docker hosts which run in swarm mode and act either as managers, which handle membership and delegation, or as workers, which run the services.
Kaabah lets you build any kind of cluster topology: 1 to many managers, 0 to many workers.
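For instance, sticking to the variables introduced above, a minimal topology with a single manager and no workers could be declared as follows (values are purely illustrative):

```hcl
# Single-manager cluster: no workers, the manager hosts the services
manager_instance_type = "t2.small"
worker_instance_count = 0
```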
# Instances
# Instance types
Kaabah lets you define one instance type shared by all the managers and another instance type shared by all the workers.
WARNING
Kaabah supports only the x86 architecture.
# Naming convention
Each created instance is named according to the following convention:
`<WORKSPACE>-manager-<INDEX>`
`<WORKSPACE>-worker-<INDEX>`
where `<WORKSPACE>` is the name of the Terraform workspace.
# Operating system
All the instances are based on the Debian Buster image provided by AWS, OVH and Scaleway.
# Docker
Kaabah installs Docker version `19.03.12`.
TIP
You can override this version using the `docker_version` variable.
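For example, in your configuration file (the version shown is only an illustrative override):

```hcl
# Pin the Docker release installed on every node (example value)
docker_version = "19.03.13"
```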
# Volumes
# Gluster Shared volume
When creating the cluster, Kaabah creates a shared volume among the nodes using Gluster.
If your cluster is composed of only 1 manager, the shared volume is created in `distributed` mode; otherwise it is created in `replicated` mode to enhance resilience. Check the documentation to learn more about these modes.
By default, the Gluster volume is mounted on the mount point `/mnt/share`. This can be overridden by setting the variable `gluster_share_volume_mount_point`.
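For example (the path is purely illustrative):

```hcl
# Mount the Gluster shared volume under a custom path (example value)
gluster_share_volume_mount_point = "/mnt/gluster"
```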
# Additional block volumes
When you need extra disk space, you can attach an additional volume to the managers, to the workers, or to both.
These volumes are automatically attached, formatted to EXT4 and mounted on the nodes. By default, the volumes are accessible through the mount point `/mnt/data`.
TIP
You can override this default mount point using the `manager_additional_volume_mount_point` and `worker_additional_volume_mount_point` variables.
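As an example (the paths are purely illustrative):

```hcl
# Custom mount points for the additional volumes (example values)
manager_additional_volume_mount_point = "/mnt/backup"
worker_additional_volume_mount_point  = "/mnt/data"
```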
# Network
# IP addresses
Kaabah lets you define the IP addresses of the manager nodes. The IP addresses you can assign to the managers are allocated (usually purchased) from your provider:
- on AWS it must be an Elastic IP
- on Scaleway it must be a Flexible IP
- on OVH it must be a Floating IP
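These pre-allocated addresses are then listed in the `manager_ips` variable of your configuration, typically one per manager (the addresses below are placeholders):

```hcl
# Pre-allocated public IP addresses assigned to the manager nodes (placeholders)
manager_ips = ["203.0.113.10", "203.0.113.11"]
```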
WARNING
On OVH, although Kaabah automatically adds a network interface to allow the Floating IP to be bound to the instance, you must perform this binding manually using the OVH interface.
# Security Groups
By default, Kaabah creates 2 security groups:
- a security group assigned to the managers with the following rules:
  - external HTTP traffic (port 80)
  - external HTTPS traffic (port 443)
  - internal SSH traffic (port 22)
  - internal Docker Swarm traffic
  - internal Gluster traffic
- a security group assigned to the workers with the following rules:
  - internal SSH traffic (port 22)
  - internal Docker Swarm traffic
  - internal Gluster traffic
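To give an idea of what such a rule amounts to, here is a generic Terraform sketch of the external HTTP rule using the AWS provider; it is not Kaabah's actual source and assumes an `aws_security_group.manager` resource exists:

```hcl
# Allow external HTTP traffic (port 80) to reach the managers
resource "aws_security_group_rule" "manager_http" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.manager.id
}
```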
TIP
Kaabah allows you to define additional inbound rules. Read more here
# Security
# SSH
Kaabah requires the use of a Bastion to connect to your instances. The implemented solution relies on the following architecture:
Your bastion instance must be instantiated in the same network as your cluster. The security group rules allow SSH traffic from the bastion.
WARNING
It is a best practice to harden your bastion host because it is a critical point of network security. Hardening might include disabling unnecessary applications or services and restricting inbound traffic to well-known hosts.
# Docker Engine
The Docker daemon only allows connections from clients authenticated by a certificate signed by a Certificate Authority (CA).
When creating the cluster, Kaabah handles the creation of the server and client keys but it requires you to provide this CA. Check out the Getting started section to learn how to generate this CA.
Note
Kaabah relies on OpenSSL to generate the server and client keys.