<h1> Terraform Caprover </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [What is CapRover?](#what-is-caprover)
|
||||
- [Features of Caprover](#features-of-caprover)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [How to Run CapRover on ThreeFold Grid 3](#how-to-run-caprover-on-threefold-grid-3)
|
||||
- [Clone the Project Repo](#clone-the-project-repo)
|
||||
- [A) leader node deployment/setup:](#a-leader-node-deploymentsetup)
|
||||
- [Step 1: Deploy a Leader Node](#step-1-deploy-a-leader-node)
|
||||
- [Step 2: Connect Root Domain](#step-2-connect-root-domain)
|
||||
- [Note](#note)
|
||||
- [Step 3: CapRover Root Domain Configurations](#step-3-caprover-root-domain-configurations)
|
||||
- [Step 4: Access the Captain Dashboard](#step-4-access-the-captain-dashboard)
|
||||
- [To allow cluster mode](#to-allow-cluster-mode)
|
||||
- [B) Worker Node Deployment/setup:](#b-worker-node-deploymentsetup)
|
||||
- [Implementation Details](#implementation-details)
|
||||
|
||||
***
|
||||
|
||||
## What is CapRover?
|
||||
|
||||
[CapRover](https://caprover.com/) is an easy-to-use app/database deployment and web server manager that works for a variety of applications such as Node.js, Ruby, PHP, Postgres, and MongoDB. It runs fast and is very robust, as it uses Docker, Nginx, LetsEncrypt, and NetData under the hood behind its user-friendly interface.
|
||||
Here’s a link to CapRover's open source repository on [GitHub](https://github.com/caprover/caprover).
|
||||
|
||||
## Features of Caprover
|
||||
|
||||
- CLI for automation and scripting
- Web GUI for ease of access and convenience
- No lock-in: remove CapRover and your apps keep working!
- Docker Swarm under the hood for containerization and clustering
- Nginx (fully customizable template) under the hood for load balancing
- Let’s Encrypt under the hood for free SSL (HTTPS)
- **One-Click Apps**: Deploying one-click apps is a matter of seconds! MongoDB, Parse, MySQL, WordPress, Postgres and many more.
- **Fully Customizable**: Optionally fully customizable nginx config, allowing you to enable HTTP/2, specific caching logic, custom SSL certs, and more.
- **Cluster Ready**: Attach more nodes and create a cluster in seconds! CapRover automatically configures nginx to load balance.
- **Increase Productivity**: Focus on your apps, not the bells and whistles needed just to run them.
- **Easy Deploy**: Many ways to deploy: upload your source from the dashboard, use the `caprover deploy` command line, use webhooks, or build upon `git push`.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Domain Name:
  after installation, you will need to point a wildcard DNS entry to your CapRover IP address.
  Note that you can use CapRover without a domain too, but you won't be able to set up HTTPS or add a `Self hosted Docker Registry`.
- Terraform installed to provision, adjust and tear down infrastructure using the configuration files provided here.
- Yggdrasil installed and enabled for end-to-end encrypted IPv6 networking.
- An account created on [Polkadot](https://polkadot.js.org/apps/?rpc=wss://tfchain.dev.threefold.io/ws#/accounts), with a twin ID obtained and your mnemonics saved.
- TFTs in your account balance (on the development network, transfer some test TFTs from the ALICE account).
|
||||
|
||||
## How to Run CapRover on ThreeFold Grid 3
|
||||
|
||||
In this guide, we will use CapRover to set up your own private Platform as a Service (PaaS) on TFGrid 3 infrastructure.
|
||||
|
||||
### Clone the Project Repo
|
||||
|
||||
```sh
|
||||
git clone https://github.com/freeflowuniverse/freeflow_caprover.git
|
||||
```
|
||||
|
||||
### A) leader node deployment/setup:
|
||||
|
||||
#### Step 1: Deploy a Leader Node
|
||||
|
||||
Create a leader CapRover node using Terraform. Here's an example:
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = "<your-mnemonics>"
|
||||
network = "dev" # or test to use testnet
|
||||
}
|
||||
|
||||
resource "grid_network" "net0" {
|
||||
nodes = [4]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "newer network"
|
||||
add_wg_access = true
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d0" {
|
||||
node = 4
|
||||
network_name = grid_network.net0.name
|
||||
ip_range = lookup(grid_network.net0.nodes_ip_range, 4, "")
|
||||
disks {
|
||||
name = "data0"
|
||||
# will hold images, volumes etc. modify the size according to your needs
|
||||
size = 20
|
||||
description = "volume holding docker data"
|
||||
}
|
||||
disks {
|
||||
name = "data1"
|
||||
# will hold data related to caprover config, nginx, and let's encrypt files.
|
||||
size = 5
|
||||
description = "volume holding captain data"
|
||||
}
|
||||
|
||||
vms {
|
||||
name = "caprover"
|
||||
flist = "https://hub.grid.tf/samehabouelsaad.3bot/abouelsaad-caprover-tf_10.0.1_v1.0.flist"
|
||||
# modify the cores according to your needs
|
||||
cpu = 4
|
||||
publicip = true
|
||||
# modify the memory according to your needs
|
||||
memory = 8192
|
||||
entrypoint = "/sbin/zinit init"
|
||||
mounts {
|
||||
disk_name = "data0"
|
||||
mount_point = "/var/lib/docker"
|
||||
}
|
||||
mounts {
|
||||
disk_name = "data1"
|
||||
mount_point = "/captain"
|
||||
}
|
||||
env_vars = {
|
||||
"PUBLIC_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576"
|
||||
# SWM_NODE_MODE env var is required, should be "leader" or "worker"
|
||||
# leader: runs sshd, containerd, and dockerd as zinit services, plus the caprover service in leader mode, which starts the caprover, let's encrypt, and nginx containers.
|
||||
# worker: runs sshd, containerd, and dockerd as zinit services, plus the caprover service in worker mode, which only joins the swarm cluster. Check the worker terraform file example.
|
||||
"SWM_NODE_MODE" = "leader"
|
||||
# CAPROVER_ROOT_DOMAIN is an optional env var; by providing it you can access the captain dashboard after VM initialization by visiting http://captain.your-root-domain
|
||||
# otherwise you will have to add the root domain manually from the captain dashboard by visiting http://{publicip}:3000 to access the dashboard
|
||||
"CAPROVER_ROOT_DOMAIN" = "roverapps.grid.tf"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net0.access_wg_config
|
||||
}
|
||||
output "ygg_ip" {
|
||||
value = grid_deployment.d0.vms[0].ygg_ip
|
||||
}
|
||||
output "vm_ip" {
|
||||
value = grid_deployment.d0.vms[0].ip
|
||||
}
|
||||
output "vm_public_ip" {
|
||||
value = grid_deployment.d0.vms[0].computedip
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
cd freeflow_caprover/terraform/leader/
|
||||
vim main.tf
|
||||
```
|
||||
|
||||
- In the `provider` block, add your `mnemonics` and specify the grid network to deploy on.
- In the `resource` block, update the disk sizes, memory size, and number of cores to fit your needs, or leave them as they are for testing.
- In the `PUBLIC_KEY` env var value, put your SSH public key.
- In the `CAPROVER_ROOT_DOMAIN` env var value, put your root domain. This is optional and you can add it later from the dashboard, but it will save you the extra step and allow you to access your dashboard using your domain name directly after the deployment.

- Save the file, and execute the following commands:
|
||||
|
||||
```bash
|
||||
terraform init
|
||||
terraform apply
|
||||
```
|
||||
|
||||
- Wait till you see `apply complete`, and note the VM public IP in the final output.
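
If you need these values again later, you can re-read the Terraform outputs at any time; the output names below are the ones defined in the example `main.tf`:

```bash
# Print the outputs defined in main.tf (public IP and WireGuard config)
terraform output vm_public_ip
terraform output wg_config
```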
|
||||
|
||||
- Verify the status of the VM:
|
||||
|
||||
```bash
|
||||
ssh root@{public_ip_address}
|
||||
zinit list
|
||||
zinit log caprover
|
||||
```
|
||||
|
||||
You will see output like this:
|
||||
|
||||
```bash
|
||||
root@caprover:~ # zinit list
|
||||
sshd: Running
|
||||
containerd: Running
|
||||
dockerd: Running
|
||||
sshd-init: Success
|
||||
caprover: Running
|
||||
root@caprover:~ # zinit log caprover
|
||||
[+] caprover: CapRover Root Domain: newapps.grid.tf
|
||||
[+] caprover: {
|
||||
[+] caprover: "namespace": "captain",
|
||||
[+] caprover: "customDomain": "newapps.grid.tf"
|
||||
[+] caprover: }
|
||||
[+] caprover: CapRover will be available at http://captain.newapps.grid.tf after installation
|
||||
[-] caprover: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
|
||||
[-] caprover: See 'docker run --help'.
|
||||
[-] caprover: Unable to find image 'caprover/caprover:latest' locally
|
||||
[-] caprover: latest: Pulling from caprover/caprover
|
||||
[-] caprover: af4c2580c6c3: Pulling fs layer
|
||||
[-] caprover: 4ea40d27a2cf: Pulling fs layer
|
||||
[-] caprover: 523d612e9cd2: Pulling fs layer
|
||||
[-] caprover: 8fee6a1847b0: Pulling fs layer
|
||||
[-] caprover: 60cce3519052: Pulling fs layer
|
||||
[-] caprover: 4bae1011637c: Pulling fs layer
|
||||
[-] caprover: ecf48b6c1f43: Pulling fs layer
|
||||
[-] caprover: 856f69196742: Pulling fs layer
|
||||
[-] caprover: e86a512b6f8c: Pulling fs layer
|
||||
[-] caprover: cecbd06d956f: Pulling fs layer
|
||||
[-] caprover: cdd679ff24b0: Pulling fs layer
|
||||
[-] caprover: d60abbe06609: Pulling fs layer
|
||||
[-] caprover: 0ac0240c1a59: Pulling fs layer
|
||||
[-] caprover: 52d300ad83da: Pulling fs layer
|
||||
[-] caprover: 8fee6a1847b0: Waiting
|
||||
[-] caprover: e86a512b6f8c: Waiting
|
||||
[-] caprover: 60cce3519052: Waiting
|
||||
[-] caprover: cecbd06d956f: Waiting
|
||||
[-] caprover: cdd679ff24b0: Waiting
|
||||
[-] caprover: 4bae1011637c: Waiting
|
||||
[-] caprover: d60abbe06609: Waiting
|
||||
[-] caprover: 0ac0240c1a59: Waiting
|
||||
[-] caprover: 52d300ad83da: Waiting
|
||||
[-] caprover: 856f69196742: Waiting
|
||||
[-] caprover: ecf48b6c1f43: Waiting
|
||||
[-] caprover: 523d612e9cd2: Verifying Checksum
|
||||
[-] caprover: 523d612e9cd2: Download complete
|
||||
[-] caprover: 4ea40d27a2cf: Verifying Checksum
|
||||
[-] caprover: 4ea40d27a2cf: Download complete
|
||||
[-] caprover: af4c2580c6c3: Verifying Checksum
|
||||
[-] caprover: af4c2580c6c3: Download complete
|
||||
[-] caprover: 4bae1011637c: Verifying Checksum
|
||||
[-] caprover: 4bae1011637c: Download complete
|
||||
[-] caprover: 8fee6a1847b0: Verifying Checksum
|
||||
[-] caprover: 8fee6a1847b0: Download complete
|
||||
[-] caprover: 856f69196742: Verifying Checksum
|
||||
[-] caprover: 856f69196742: Download complete
|
||||
[-] caprover: ecf48b6c1f43: Verifying Checksum
|
||||
[-] caprover: ecf48b6c1f43: Download complete
|
||||
[-] caprover: e86a512b6f8c: Verifying Checksum
|
||||
[-] caprover: e86a512b6f8c: Download complete
|
||||
[-] caprover: cdd679ff24b0: Verifying Checksum
|
||||
[-] caprover: cdd679ff24b0: Download complete
|
||||
[-] caprover: d60abbe06609: Verifying Checksum
|
||||
[-] caprover: d60abbe06609: Download complete
|
||||
[-] caprover: cecbd06d956f: Download complete
|
||||
[-] caprover: 0ac0240c1a59: Verifying Checksum
|
||||
[-] caprover: 0ac0240c1a59: Download complete
|
||||
[-] caprover: 60cce3519052: Verifying Checksum
|
||||
[-] caprover: 60cce3519052: Download complete
|
||||
[-] caprover: af4c2580c6c3: Pull complete
|
||||
[-] caprover: 52d300ad83da: Download complete
|
||||
[-] caprover: 4ea40d27a2cf: Pull complete
|
||||
[-] caprover: 523d612e9cd2: Pull complete
|
||||
[-] caprover: 8fee6a1847b0: Pull complete
|
||||
[-] caprover: 60cce3519052: Pull complete
|
||||
[-] caprover: 4bae1011637c: Pull complete
|
||||
[-] caprover: ecf48b6c1f43: Pull complete
|
||||
[-] caprover: 856f69196742: Pull complete
|
||||
[-] caprover: e86a512b6f8c: Pull complete
|
||||
[-] caprover: cecbd06d956f: Pull complete
|
||||
[-] caprover: cdd679ff24b0: Pull complete
|
||||
[-] caprover: d60abbe06609: Pull complete
|
||||
[-] caprover: 0ac0240c1a59: Pull complete
|
||||
[-] caprover: 52d300ad83da: Pull complete
|
||||
[-] caprover: Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
|
||||
[-] caprover: Status: Downloaded newer image for caprover/caprover:latest
|
||||
[+] caprover: Captain Starting ...
|
||||
[+] caprover: Overriding skipVerifyingDomains from /captain/data/config-override.json
|
||||
[+] caprover: Installing Captain Service ...
|
||||
[+] caprover:
|
||||
[+] caprover: Installation of CapRover is starting...
|
||||
[+] caprover: For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
|
||||
[+] caprover:
|
||||
[+] caprover:
|
||||
[+] caprover:
|
||||
[+] caprover:
|
||||
[+] caprover:
|
||||
[+] caprover: >>> Checking System Compatibility <<<
|
||||
[+] caprover: Docker Version passed.
|
||||
[+] caprover: Ubuntu detected.
|
||||
[+] caprover: X86 CPU detected.
|
||||
[+] caprover: Total RAM 8339 MB
|
||||
[+] caprover: Pulling: nginx:1
|
||||
[+] caprover: Pulling: caprover/caprover-placeholder-app:latest
|
||||
[+] caprover: Pulling: caprover/certbot-sleeping:v1.6.0
|
||||
[+] caprover: October 12th 2021, 12:49:26.301 pm Fresh installation!
|
||||
[+] caprover: October 12th 2021, 12:49:26.309 pm Starting swarm at 185.206.122.32:2377
|
||||
[+] caprover: Swarm started: z06ymksbcoren9cl7g2xzw9so
|
||||
[+] caprover: *** CapRover is initializing ***
|
||||
[+] caprover: Please wait at least 60 seconds before trying to access CapRover.
|
||||
[+] caprover: ===================================
|
||||
[+] caprover: **** Installation is done! *****
|
||||
[+] caprover: CapRover is available at http://captain.newapps.grid.tf
|
||||
[+] caprover: Default password is: captain42
|
||||
[+] caprover: ===================================
|
||||
```
|
||||
|
||||
Wait until you see `**** Installation is done! *****` in the caprover service log.
|
||||
|
||||
#### Step 2: Connect Root Domain
|
||||
|
||||
After the container runs, you will now need to connect your CapRover instance to a Root Domain.
|
||||
|
||||
Let’s say you own example.com. You can set `*.something.example.com` as an A record in your DNS settings to point to the IP address of the server where you installed CapRover. To do this, go to the DNS settings on your domain provider's website, and set a wildcard A record entry.
|
||||
|
||||
For example: Type: A, Name (or host): `*.something.example.com`, IP (or Points to): `110.122.131.141`, which is the IP address of your CapRover machine.
|
||||
|
||||
```yaml
|
||||
TYPE: A record
|
||||
HOST: *.something.example.com
|
||||
POINTS TO: (IP Address of your server)
|
||||
TTL: (doesn’t really matter)
|
||||
```
|
||||
|
||||
To confirm, go to https://mxtoolbox.com/DNSLookup.aspx, enter `somethingrandom.something.example.com`, and check that the IP address resolves to the IP you set in your DNS.
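
You can also check from the command line. Here is a minimal sketch using `dig`; the subdomain is just a random placeholder under your wildcard entry:

```bash
# Should print the IP address you pointed *.something.example.com to
dig +short somethingrandom.something.example.com A
```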
|
||||
|
||||
##### Note
|
||||
|
||||
`somethingrandom` is needed because you set a wildcard entry in your DNS by setting `*.something.example.com` as your host, not `something.example.com`.
|
||||
|
||||
#### Step 3: CapRover Root Domain Configurations
|
||||
|
||||
Skip this step if you provided your root domain in the Terraform configuration file.
|
||||
|
||||
Once CapRover is initialized, you can visit `http://[IP_OF_YOUR_SERVER]:3000` in your browser and log in to CapRover using the default password `captain42`. You can change your password later.
|
||||
|
||||
In the UI, enter your root domain and press the `Update Domain` button.
|
||||
|
||||
#### Step 4: Access the Captain Dashboard
|
||||
|
||||
Once you set your root domain as caprover.example.com, you will be redirected to captain.caprover.example.com.
|
||||
|
||||
Now CapRover is ready and running in a single node.
|
||||
|
||||
##### To allow cluster mode
|
||||
|
||||
- Enable HTTPS
|
||||
|
||||
- Go to the CapRover `Dashboard` tab, then in `CapRover Root Domain Configurations` press `Enable HTTPS`; you will then be asked to enter your email address.
|
||||
|
||||
- Docker Registry Configuration
|
||||
|
||||
- Go to the CapRover `Cluster` tab, then in the `Docker Registry Configuration` section, press `Self hosted Docker Registry` or add your `Remote Docker Registry`.
|
||||
|
||||
- Run the following command in the SSH session:
|
||||
|
||||
```bash
|
||||
docker swarm join-token worker
|
||||
```
|
||||
|
||||
It will output something like this:
|
||||
|
||||
```bash
|
||||
docker swarm join --token SWMTKN-1-0892ds1ney7pa0hymi3qwph7why1d9r3z6bvwtin51r14hcz3t-cjsephnu4f2ezfpdd6svnnbq7 185.206.122.33:2377
|
||||
```
|
||||
|
||||
- To add a worker node to this swarm, you need:
  - the generated token `SWMTKN-1-0892ds1ney7pa0hymi3qwph7why1d9r3z6bvwtin51r14hcz3t-cjsephnu4f2ezfpdd6svnnbq7`
  - the leader node public IP `185.206.122.33`
|
||||
|
||||
This information is required in the next section to run CapRover in cluster mode.
|
||||
|
||||
### B) Worker Node Deployment/setup:
|
||||
|
||||
We show how to deploy a worker node by providing an example worker Terraform file.
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = "<your-mnemonics>"
|
||||
network = "dev" # or test to use testnet
|
||||
}
|
||||
|
||||
resource "grid_network" "net2" {
|
||||
nodes = [4]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "newer network"
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d2" {
|
||||
node = 4
|
||||
network_name = grid_network.net2.name
|
||||
ip_range = lookup(grid_network.net2.nodes_ip_range, 4, "")
|
||||
disks {
|
||||
name = "data2"
|
||||
# will hold images, volumes etc. modify the size according to your needs
|
||||
size = 20
|
||||
description = "volume holding docker data"
|
||||
}
|
||||
|
||||
vms {
|
||||
name = "caprover"
|
||||
flist = "https://hub.grid.tf/samehabouelsaad.3bot/abouelsaad-caprover-tf_10.0.1_v1.0.flist"
|
||||
# modify the cores according to your needs
|
||||
cpu = 2
|
||||
publicip = true
|
||||
# modify the memory according to your needs
|
||||
memory = 2048
|
||||
entrypoint = "/sbin/zinit init"
|
||||
mounts {
|
||||
disk_name = "data2"
|
||||
mount_point = "/var/lib/docker"
|
||||
}
|
||||
env_vars = {
|
||||
"PUBLIC_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576"
|
||||
}
|
||||
# SWM_NODE_MODE env var is required, should be "leader" or "worker"
# leader: check the leader terraform file example.
# worker: runs sshd, containerd, and dockerd as zinit services, plus the caprover service in worker mode, which only joins the swarm cluster.
|
||||
|
||||
"SWM_NODE_MODE" = "worker"
|
||||
# from the leader node (the one running caprover) run `docker swarm join-token worker`
|
||||
# you must add the generated token to SWMTKN env var and the leader public ip to LEADER_PUBLIC_IP env var
|
||||
|
||||
"SWMTKN"="SWMTKN-1-522cdsyhknmavpdok4wi86r1nihsnipioc9hzfw9dnsvaj5bed-8clrf4f2002f9wziabyxzz32d"
|
||||
"LEADER_PUBLIC_IP" = "185.206.122.38"
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net2.access_wg_config
|
||||
}
|
||||
output "ygg_ip" {
|
||||
value = grid_deployment.d2.vms[0].ygg_ip
|
||||
}
|
||||
output "vm_ip" {
|
||||
value = grid_deployment.d2.vms[0].ip
|
||||
}
|
||||
output "vm_public_ip" {
|
||||
value = grid_deployment.d2.vms[0].computedip
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
cd freeflow_caprover/terraform/worker/
|
||||
vim main.tf
|
||||
```
|
||||
|
||||
- In `provider` Block, add your `mnemonics` and specify the grid network to deploy on.
|
||||
- In `resource` Block, update the disks size, memory size, and cores number to fit your needs or leave as it is for testing.
|
||||
- In the `PUBLIC_KEY` env var value put your ssh public key.
|
||||
- In the `SWMTKN` env var value put the previously generated token.
|
||||
- In the `LEADER_PUBLIC_IP` env var value put the leader node public ip.
|
||||
|
||||
- Save the file, and execute the following commands:
|
||||
|
||||
```bash
|
||||
terraform init
|
||||
terraform apply
|
||||
```
|
||||
|
||||
- Wait till you see `apply complete`, and note the VM public IP in the final output.
|
||||
|
||||
- Verify the status of the VM.
|
||||
|
||||
```bash
|
||||
ssh root@{public_ip_address}
|
||||
zinit list
|
||||
zinit log caprover
|
||||
```
|
||||
|
||||
You will see output like this:
|
||||
|
||||
```bash
|
||||
root@caprover:~# zinit list
|
||||
caprover: Success
|
||||
dockerd: Running
|
||||
containerd: Running
|
||||
sshd: Running
|
||||
sshd-init: Success
|
||||
root@caprover:~# zinit log caprover
|
||||
[-] caprover: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
|
||||
[+] caprover: This node joined a swarm as a worker.
|
||||
```
|
||||
|
||||
This means that your worker node is now ready and has joined the cluster successfully.
|
||||
|
||||
You can also verify this from the CapRover dashboard in the `Cluster` tab: in the `Nodes` section, you should see the new worker node listed there.
|
||||
|
||||
Now CapRover is ready in cluster mode (more than one server).
|
||||
|
||||
To run One-Click Apps, please follow this [tutorial](https://caprover.com/docs/one-click-apps.html).
|
||||
|
||||
## Implementation Details
|
||||
|
||||
- We use Ubuntu 18.04 to minimize production issues, as CapRover is tested on Ubuntu 18.04 and Docker 19.03.
- In a standard installation, CapRover has to be installed on a machine with a public IP address.
- Services are managed by the `zinit` service manager, which brings these processes up and keeps them running in case of any failure:

  - sshd-init: a one-shot service that adds the user's public key to the VM's SSH authorized keys.
  - containerd: maintains the container runtime needed by Docker.
  - caprover: a one-shot service that runs the CapRover container.
  - dockerd: runs the Docker daemon.
  - sshd: maintains the SSH server daemon.
|
||||
|
||||
- We adjust the OOM priority of the Docker daemon so that it is less likely to be killed than other processes on the system:
|
||||
```bash
|
||||
echo -500 >/proc/self/oom_score_adj
|
||||
```
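
As an optional sanity check on a running VM, you can list the zinit-managed services and inspect the resulting OOM score of the Docker daemon; the `pidof` lookup below is just one illustrative way to find the process:

```bash
# List the services managed by zinit and their current state
zinit list

# Inspect the OOM score adjustment applied to the Docker daemon
cat /proc/$(pidof dockerd)/oom_score_adj
```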
|
|
||||
<h1> Kubernetes Cluster </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Example](#example)
|
||||
- [Grid Kubernetes Resource](#grid-kubernetes-resource)
|
||||
- [Kubernetes Outputs](#kubernetes-outputs)
|
||||
- [More Info](#more-info)
|
||||
- [Demo Video](#demo-video)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
While Kubernetes deployments can be quite difficult and can require lots of experience, we provide here a very simple way to provision a Kubernetes cluster on the TFGrid.
|
||||
|
||||
## Example
|
||||
|
||||
An example for deploying a Kubernetes cluster can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/k8s/main.tf).
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
resource "grid_scheduler" "sched" {
|
||||
requests {
|
||||
name = "master_node"
|
||||
cru = 2
|
||||
sru = 512
|
||||
mru = 2048
|
||||
distinct = true
|
||||
public_ips_count = 1
|
||||
}
|
||||
requests {
|
||||
name = "worker1_node"
|
||||
cru = 2
|
||||
sru = 512
|
||||
mru = 2048
|
||||
distinct = true
|
||||
}
|
||||
requests {
|
||||
name = "worker2_node"
|
||||
cru = 2
|
||||
sru = 512
|
||||
mru = 2048
|
||||
distinct = true
|
||||
}
|
||||
requests {
|
||||
name = "worker3_node"
|
||||
cru = 2
|
||||
sru = 512
|
||||
mru = 2048
|
||||
distinct = true
|
||||
}
|
||||
}
|
||||
|
||||
locals {
|
||||
solution_type = "Kubernetes"
|
||||
name = "myk8s"
|
||||
}
|
||||
resource "grid_network" "net1" {
|
||||
solution_type = local.solution_type
|
||||
name = local.name
|
||||
nodes = distinct(values(grid_scheduler.sched.nodes))
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "newer network"
|
||||
add_wg_access = true
|
||||
}
|
||||
|
||||
resource "grid_kubernetes" "k8s1" {
|
||||
solution_type = local.solution_type
|
||||
name = local.name
|
||||
network_name = grid_network.net1.name
|
||||
token = "12345678910122"
|
||||
ssh_key = "PUT YOUR SSH KEY HERE"
|
||||
|
||||
master {
|
||||
disk_size = 2
|
||||
node = grid_scheduler.sched.nodes["master_node"]
|
||||
name = "mr"
|
||||
cpu = 2
|
||||
publicip = true
|
||||
memory = 2048
|
||||
}
|
||||
workers {
|
||||
disk_size = 2
|
||||
node = grid_scheduler.sched.nodes["worker1_node"]
|
||||
name = "w0"
|
||||
cpu = 2
|
||||
memory = 2048
|
||||
}
|
||||
workers {
|
||||
disk_size = 2
|
||||
node = grid_scheduler.sched.nodes["worker2_node"]
|
||||
name = "w2"
|
||||
cpu = 2
|
||||
memory = 2048
|
||||
}
|
||||
workers {
|
||||
disk_size = 2
|
||||
node = grid_scheduler.sched.nodes["worker3_node"]
|
||||
name = "w3"
|
||||
cpu = 2
|
||||
memory = 2048
|
||||
}
|
||||
}
|
||||
|
||||
output "computed_master_public_ip" {
|
||||
value = grid_kubernetes.k8s1.master[0].computedip
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net1.access_wg_config
|
||||
}
|
||||
```
|
||||
|
||||
Everything looks similar to our first example: the global terraform section, the provider section and the network section.
|
||||
|
||||
## Grid Kubernetes Resource
|
||||
|
||||
```terraform
|
||||
resource "grid_kubernetes" "k8s1" {
|
||||
solution_type = local.solution_type
|
||||
name = local.name
|
||||
network_name = grid_network.net1.name
|
||||
token = "12345678910122"
|
||||
ssh_key = "PUT YOUR SSH KEY HERE"
|
||||
|
||||
master {
|
||||
disk_size = 2
|
||||
node = grid_scheduler.sched.nodes["master_node"]
|
||||
name = "mr"
|
||||
cpu = 2
|
||||
publicip = true
|
||||
memory = 2048
|
||||
}
|
||||
workers {
|
||||
disk_size = 2
|
||||
node = grid_scheduler.sched.nodes["worker1_node"]
|
||||
name = "w0"
|
||||
cpu = 2
|
||||
memory = 2048
|
||||
}
|
||||
workers {
|
||||
disk_size = 2
|
||||
node = grid_scheduler.sched.nodes["worker2_node"]
|
||||
name = "w2"
|
||||
cpu = 2
|
||||
memory = 2048
|
||||
}
|
||||
workers {
|
||||
disk_size = 2
|
||||
node = grid_scheduler.sched.nodes["worker3_node"]
|
||||
name = "w3"
|
||||
cpu = 2
|
||||
memory = 2048
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
It requires:

- The network name that will contain the cluster
- A cluster token that works as a key for other nodes to join the cluster
- An SSH key to access the cluster VMs

Then, we describe the master and worker nodes in terms of:
|
||||
|
||||
- name within the deployment
|
||||
- disk size
|
||||
- node to deploy it on
|
||||
- cpu
|
||||
- memory
|
||||
- whether or not this node needs a public ip
|
||||
|
||||
### Kubernetes Outputs
|
||||
|
||||
```terraform
|
||||
output "master_public_ip" {
|
||||
value = grid_kubernetes.k8s1.master[0].computedip
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net1.access_wg_config
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
We are mainly interested in the master node's public IP (the computed IP) and the WireGuard configuration.
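
For example, after `terraform apply` you can read those outputs and, assuming the deployed image provides `kubectl` on the master node, verify the cluster over SSH. This is only a sketch; the output names are the ones defined above, and the computed IP may include a CIDR suffix, which is stripped here:

```bash
# Read the outputs defined above
terraform output master_public_ip
terraform output -raw wg_config > k8s_wg.conf

# SSH into the master (strip any /24 suffix from the computed IP) and list the nodes
MASTER_IP=$(terraform output -raw master_public_ip | cut -d/ -f1)
ssh root@"$MASTER_IP" kubectl get nodes
```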
|
||||
|
||||
## More Info
|
||||
|
||||
A complete list of k8s resource parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/kubernetes.md).
|
||||
|
||||
## Demo Video
|
||||
|
||||
Here is a video showing how to deploy k8s with Terraform.
|
||||
|
||||
<div class="aspect-w-16 aspect-h-9">
|
||||
<iframe src="https://player.vimeo.com/video/654552300?h=c61feb579b" width="640" height="564" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe>
|
||||
</div>
|
|
||||
<h1> Demo Video Showing Deploying k8s with Terraform </h1>
|
||||
|
||||
<div class="aspect-w-16 aspect-h-9">
|
||||
<iframe src="https://player.vimeo.com/video/654552300?h=c61feb579b" width="640" height="564" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe>
|
||||
</div>
|
||||
|
||||
|
|
||||
<h1> Quantum Safe Filesystem (QSFS) </h1>
|
||||
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [QSFS on Micro VM](./terraform_qsfs_on_microvm.md)
|
||||
- [QSFS on Full VM](./terraform_qsfs_on_full_vm.md)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
Quantum Storage is a FUSE filesystem that uses forward error correction (Reed-Solomon codes) to make sure data (files and metadata) is stored in multiple remote places, in a way that allows losing a given number of locations without losing the data.
|
||||
|
||||
The aim is to support unlimited local storage with remote backends for offload and backup which cannot be broken, even by a quantum computer.
|
||||
|
||||
## QSFS Workload Parameters and Documentation
|
||||
|
||||
A complete list of QSFS workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-qsfs).
|
||||
|
||||
The [quantum-storage](https://github.com/threefoldtech/quantum-storage) repo contains a more thorough description of QSFS operation.
|
|
||||
<h1> QSFS on Full VM </h1>
|
||||
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Create the Terraform Files](#create-the-terraform-files)
|
||||
- [Full Example](#full-example)
|
||||
- [Mounting the QSFS Disk](#mounting-the-qsfs-disk)
|
||||
- [Debugging](#debugging)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
This short ThreeFold Guide will teach you how to deploy a Full VM with a QSFS disk on the TFGrid using Terraform. For this guide, we will be deploying an Ubuntu 22.04-based cloud-init image.
|
||||
|
||||
The steps are very simple. You first need to create the Terraform files, and then deploy the full VM and the QSFS workloads. After the deployment is done, you will need to SSH into the full VM and manually mount the QSFS disk.
|
||||
|
||||
The main goal of this guide is to show you all the necessary steps to deploy a Full VM with a QSFS disk on the TFGrid using Terraform.
|
||||
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [Install Terraform](../terraform_install.md)
|
||||
|
||||
You need to download and properly install Terraform. Simply follow the documentation for your operating system (Linux, macOS or Windows).
|
||||
|
||||
|
||||
|
||||
## Create the Terraform Files
|
||||
|
||||
Deploying a Full VM is a bit different from deploying a Micro VM. Let's first take a look at these differences:

- Full VMs use `cloud-init` images and, unlike Micro VMs, need at least one disk attached to the VM to copy the image to; it serves as the root filesystem for the VM.
- The QSFS disk is based on `virtiofs`, and you can't use a QSFS disk as the first mount in a Full VM; instead, you need a regular disk.
- Any extra disks/mounts will be available on the VM but, unlike mounts on Micro VMs, extra disks won't be mounted automatically. You will need to mount them manually after the deployment.

Let's modify the QSFS-on-MicroVM [example](./terraform_qsfs_on_microvm.md) to deploy QSFS on a Full VM this time:
|
||||
|
||||
- Inside the `grid_deployment` resource, we will need to add a disk for the VM root filesystem.
|
||||
|
||||
```terraform
|
||||
disks {
|
||||
name = "rootfs"
|
||||
size = 10
|
||||
description = "root fs"
|
||||
}
|
||||
```
|
||||
|
||||
- We also need to add an extra mount inside the `grid_deployment` resource in the `vms` block. It must be the first `mounts` block in the VM:
|
||||
|
||||
```terraform
|
||||
mounts {
|
||||
disk_name = "rootfs"
|
||||
mount_point = "/"
|
||||
}
|
||||
```
|
||||
|
||||
- We also need to specify the flist for our Full VM. Inside the `grid_deployment` resource, in the `vms` block, change the `flist` field to use this image:
|
||||
- https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist
|
||||
|
||||
|
||||
|
||||
## Full Example
|
||||
The full example would be like this:
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
locals {
|
||||
metas = ["meta1", "meta2", "meta3", "meta4"]
|
||||
datas = ["data1", "data2", "data3", "data4"]
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
nodes = [11]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "newer network"
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d1" {
|
||||
node = 11
|
||||
dynamic "zdbs" {
|
||||
for_each = local.metas
|
||||
content {
|
||||
name = zdbs.value
|
||||
description = "description"
|
||||
password = "password"
|
||||
size = 10
|
||||
mode = "user"
|
||||
}
|
||||
}
|
||||
dynamic "zdbs" {
|
||||
for_each = local.datas
|
||||
content {
|
||||
name = zdbs.value
|
||||
description = "description"
|
||||
password = "password"
|
||||
size = 10
|
||||
mode = "seq"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_deployment" "qsfs" {
|
||||
node = 11
|
||||
network_name = grid_network.net1.name
|
||||
disks {
|
||||
name = "rootfs"
|
||||
size = 10
|
||||
description = "rootfs"
|
||||
}
|
||||
qsfs {
|
||||
name = "qsfs"
|
||||
description = "description6"
|
||||
cache = 10240 # 10 GB
|
||||
minimal_shards = 2
|
||||
expected_shards = 4
|
||||
redundant_groups = 0
|
||||
redundant_nodes = 0
|
||||
max_zdb_data_dir_size = 512 # 512 MB
|
||||
encryption_algorithm = "AES"
|
||||
encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af"
|
||||
compression_algorithm = "snappy"
|
||||
metadata {
|
||||
type = "zdb"
|
||||
prefix = "hamada"
|
||||
encryption_algorithm = "AES"
|
||||
encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af"
|
||||
dynamic "backends" {
|
||||
for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode != "seq"]
|
||||
content {
|
||||
address = format("[%s]:%d", backends.value.ips[1], backends.value.port)
|
||||
namespace = backends.value.namespace
|
||||
password = backends.value.password
|
||||
}
|
||||
}
|
||||
}
|
||||
groups {
|
||||
dynamic "backends" {
|
||||
for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode == "seq"]
|
||||
content {
|
||||
address = format("[%s]:%d", backends.value.ips[1], backends.value.port)
|
||||
namespace = backends.value.namespace
|
||||
password = backends.value.password
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
vms {
|
||||
name = "vm"
|
||||
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
|
||||
cpu = 2
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
planetary = true
|
||||
env_vars = {
|
||||
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576"
|
||||
}
|
||||
mounts {
|
||||
disk_name = "rootfs"
|
||||
mount_point = "/"
|
||||
}
|
||||
mounts {
|
||||
disk_name = "qsfs"
|
||||
mount_point = "/qsfs"
|
||||
}
|
||||
}
|
||||
}
|
||||
output "metrics" {
|
||||
value = grid_deployment.qsfs.qsfs[0].metrics_endpoint
|
||||
}
|
||||
output "ygg_ip" {
|
||||
value = grid_deployment.qsfs.vms[0].ygg_ip
|
||||
}
|
||||
```
|
||||
|
||||
**Note**: the `grid_deployment.qsfs.name` should be the same as the QSFS disk name in `grid_deployment.vms.mounts.disk_name`.
|
||||
|
||||
|
||||
|
||||
## Mounting the QSFS Disk
|
||||
After applying this terraform file, you will need to manually mount the disk.
|
||||
SSH into the VM and type `mount -t virtiofs <QSFS DISK NAME> /qsfs`:
|
||||
|
||||
```bash
|
||||
mkdir /qsfs
|
||||
mount -t virtiofs qsfs /qsfs
|
||||
```
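
To confirm the filesystem is mounted and usable, you can run a quick optional check:

```bash
# The mount should show up with type virtiofs and the directory should be writable
df -hT /qsfs
touch /qsfs/testfile && ls -l /qsfs
```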
|
||||
|
||||
|
||||
|
||||
## Debugging
|
||||
|
||||
During deployment, you might encounter the following error when using the mount command:
|
||||
|
||||
`mount: /qsfs: wrong fs type, bad option, bad superblock on qsfs3, missing codepage or helper program, or other error.`
|
||||
|
||||
- **Explanation**: Most likely you used a QSFS deployment/disk name that does not match the one from the QSFS deployment.
- **Solution**: Double-check your Terraform file, and make sure the name you are using as the QSFS deployment/disk name matches the one you are trying to mount on your VM.
|
|
||||
<h1> QSFS on Micro VM with Terraform</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Find a 3Node](#find-a-3node)
|
||||
- [Create the Terraform Files](#create-the-terraform-files)
|
||||
- [Create the Files with the Provider](#create-the-files-with-the-provider)
|
||||
- [Create the Files Manually](#create-the-files-manually)
|
||||
- [Deploy the Micro VM with Terraform](#deploy-the-micro-vm-with-terraform)
|
||||
- [SSH into the 3Node](#ssh-into-the-3node)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
In this ThreeFold Guide, we will learn how to deploy a Quantum Safe File System (QSFS) deployment with Terraform. The main template for this example can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/qsfs/main.tf).
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
In this guide, we will be using Terraform to deploy a QSFS workload on a micro VM that runs on the TFGrid. Make sure to have the latest Terraform version.
|
||||
|
||||
- [Install Terraform](../terraform_install.md)
|
||||
|
||||
|
||||
|
||||
|
||||
## Find a 3Node
|
||||
|
||||
We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address.
|
||||
|
||||
We show here how to find a suitable 3Node using the ThreeFold Explorer.
|
||||
|
||||
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
|
||||
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
|
||||
* For proper understanding, we give further information on some relevant columns:
|
||||
* `ID` refers to the node ID
|
||||
* `Free Public IPs` refers to available IPv4 public IP addresses
|
||||
* `HRU` refers to HDD storage
|
||||
* `SRU` refers to SSD storage
|
||||
* `MRU` refers to RAM (memory)
|
||||
* `CRU` refers to virtual cores (vcores)
|
||||
* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
|
||||
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
|
||||
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
|
||||
* `Free SRU (GB)`: 15
|
||||
* `Free MRU (GB)`: 1
|
||||
* `Total CRU (Cores)`: 1
|
||||
* `Free Public IP`: 2
|
||||
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
|
||||
|
||||
Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
|
||||
|
||||
|
||||
|
||||
## Create the Terraform Files
|
||||
|
||||
We present two different methods to create the Terraform files. In the first method, we will create the Terraform files using the [TFGrid Terraform Provider](https://github.com/threefoldtech/terraform-provider-grid). In the second method, we will create the Terraform files manually. Feel free to choose the method that suits you best.
|
||||
|
||||
### Create the Files with the Provider
|
||||
|
||||
Creating the Terraform files is very straightforward. We want to clone the repository `terraform-provider-grid` locally and run some simple commands to properly set and start the deployment.
|
||||
|
||||
* Clone the repository `terraform-provider-grid`
|
||||
* ```
|
||||
git clone https://github.com/threefoldtech/terraform-provider-grid
|
||||
```
|
||||
* Go to the subdirectory containing the examples
|
||||
* ```
|
||||
cd terraform-provider-grid/examples/resources/qsfs
|
||||
```
|
||||
* Set your own mnemonics (replace `mnemonics words` with your own mnemonics)
|
||||
* ```
|
||||
export MNEMONICS="mnemonics words"
|
||||
```
|
||||
* Set the network (replace `network` by the desired network, e.g. `dev`, `qa`, `test` or `main`)
|
||||
* ```
|
||||
export NETWORK="network"
|
||||
```
|
||||
* Initialize the Terraform deployment
|
||||
* ```
|
||||
terraform init
|
||||
```
|
||||
* Apply the Terraform deployment
|
||||
* ```
|
||||
terraform apply
|
||||
```
|
||||
* At any moment, you can destroy the deployment with the following line
|
||||
* ```
|
||||
terraform destroy
|
||||
```
|
||||
|
||||
When using this method, you might need to change some parameters within the `main.tf` depending on your specific deployment.
|
||||
|
||||
### Create the Files Manually
|
||||
|
||||
For this method, we use two files to deploy with Terraform. The first file contains the environment variables (**credentials.auto.tfvars**) and the second file contains the parameters to deploy our workloads (**main.tf**). To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file, but only the file **credentials.auto.tfvars**.
|
||||
|
||||
* Open the terminal and go to the home directory (optional)
|
||||
* ```
|
||||
cd ~
|
||||
```
|
||||
|
||||
* Create the folder `terraform` and the subfolder `deployment-qsfs-microvm`:
|
||||
* ```
|
||||
mkdir -p terraform && cd $_
|
||||
```
|
||||
* ```
|
||||
mkdir deployment-qsfs-microvm && cd $_
|
||||
```
|
||||
* Create the `main.tf` file:
|
||||
* ```
|
||||
nano main.tf
|
||||
```
|
||||
|
||||
* Copy the `main.tf` content and save the file.
|
||||
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
# Variables
|
||||
|
||||
variable "mnemonics" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "SSH_KEY" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "network" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "tfnodeid1" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "size" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "cpu" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "memory" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "minimal_shards" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "expected_shards" {
|
||||
type = string
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = var.mnemonics
|
||||
network = var.network
|
||||
}
|
||||
|
||||
locals {
|
||||
metas = ["meta1", "meta2", "meta3", "meta4"]
|
||||
datas = ["data1", "data2", "data3", "data4"]
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
nodes = [var.tfnodeid1]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "newer network"
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d1" {
|
||||
node = var.tfnodeid1
|
||||
dynamic "zdbs" {
|
||||
for_each = local.metas
|
||||
content {
|
||||
name = zdbs.value
|
||||
description = "description"
|
||||
password = "password"
|
||||
size = var.size
|
||||
mode = "user"
|
||||
}
|
||||
}
|
||||
dynamic "zdbs" {
|
||||
for_each = local.datas
|
||||
content {
|
||||
name = zdbs.value
|
||||
description = "description"
|
||||
password = "password"
|
||||
size = var.size
|
||||
mode = "seq"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_deployment" "qsfs" {
|
||||
node = var.tfnodeid1
|
||||
network_name = grid_network.net1.name
|
||||
qsfs {
|
||||
name = "qsfs"
|
||||
description = "description6"
|
||||
cache = 10240 # 10 GB
|
||||
minimal_shards = var.minimal_shards
|
||||
expected_shards = var.expected_shards
|
||||
redundant_groups = 0
|
||||
redundant_nodes = 0
|
||||
max_zdb_data_dir_size = 512 # 512 MB
|
||||
encryption_algorithm = "AES"
|
||||
encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af"
|
||||
compression_algorithm = "snappy"
|
||||
metadata {
|
||||
type = "zdb"
|
||||
prefix = "hamada"
|
||||
encryption_algorithm = "AES"
|
||||
encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af"
|
||||
dynamic "backends" {
|
||||
for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode != "seq"]
|
||||
content {
|
||||
address = format("[%s]:%d", backends.value.ips[1], backends.value.port)
|
||||
namespace = backends.value.namespace
|
||||
password = backends.value.password
|
||||
}
|
||||
}
|
||||
}
|
||||
groups {
|
||||
dynamic "backends" {
|
||||
for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode == "seq"]
|
||||
content {
|
||||
address = format("[%s]:%d", backends.value.ips[1], backends.value.port)
|
||||
namespace = backends.value.namespace
|
||||
password = backends.value.password
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = var.cpu
|
||||
memory = var.memory
|
||||
entrypoint = "/sbin/zinit init"
|
||||
planetary = true
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
mounts {
|
||||
disk_name = "qsfs"
|
||||
mount_point = "/qsfs"
|
||||
}
|
||||
}
|
||||
}
|
||||
output "metrics" {
|
||||
value = grid_deployment.qsfs.qsfs[0].metrics_endpoint
|
||||
}
|
||||
output "ygg_ip" {
|
||||
value = grid_deployment.qsfs.vms[0].ygg_ip
|
||||
}
|
||||
```
|
||||
|
||||
Note that we named the VM as **vm1**.
|
||||
|
||||
* Create the `credentials.auto.tfvars` file:
|
||||
* ```
|
||||
nano credentials.auto.tfvars
|
||||
```
|
||||
|
||||
* Copy the `credentials.auto.tfvars` content and save the file.
|
||||
* ```terraform
|
||||
# Network
|
||||
network = "main"
|
||||
|
||||
# Credentials
|
||||
mnemonics = "..."
|
||||
SSH_KEY = "..."
|
||||
|
||||
# Node Parameters
|
||||
tfnodeid1 = "..."
|
||||
size = "15"
|
||||
cpu = "1"
|
||||
memory = "512"
|
||||
|
||||
# QSFS Parameters
|
||||
minimal_shards = "2"
|
||||
expected_shards = "4"
|
||||
```
|
||||
|
||||
Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node you want to deploy on. Simply replace the three dots by the content. If you want to deploy on the Test net, you can replace **main** by **test**.
|
||||
|
||||
Set the parameters for your VMs as you wish. For this example, we use the minimum parameters.
|
||||
|
||||
In the QSFS Parameters section, you can decide over how many 3Nodes your data will be sharded, and the minimum number needed to recover the whole of your data. For example, a 16 minimum, 20 expected configuration will disperse your data over 20 3Nodes, and the deployment will need only 16 of them at any given time to recover the whole of your data. This gives resilience and redundancy to your storage. A 2 minimum, 4 expected configuration is given here, as in the main template.
|
||||
|
||||
|
||||
|
||||
## Deploy the Micro VM with Terraform
|
||||
|
||||
We now deploy the QSFS deployment with Terraform. Make sure that you are in the correct folder `terraform/deployment-qsfs-microvm` containing the main and variables files.
|
||||
|
||||
* Initialize Terraform by writing the following in the terminal:
|
||||
* ```
|
||||
terraform init
|
||||
```
|
||||
* Apply the Terraform deployment:
|
||||
* ```
|
||||
terraform apply
|
||||
```
|
||||
* Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment.
|
||||
|
||||
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
|
||||
* ```
|
||||
terraform show
|
||||
```
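
You can also read individual outputs directly. The names below are the ones defined in the `main.tf` above; `ygg_ip` is the address used for SSH in the next section:

```bash
terraform output ygg_ip
terraform output metrics
```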
|
||||
|
||||
|
||||
|
||||
## SSH into the 3Node
|
||||
|
||||
You can now SSH into the 3Node with Planetary Network.
|
||||
|
||||
To SSH with Planetary Network, write the following:
|
||||
|
||||
```
|
||||
ssh root@planetary_IP
|
||||
```
|
||||
|
||||
Note that the IP address should be the value of the parameter **ygg_ip** from the Terraform Outputs.
|
||||
|
||||
You now have SSH access to the VM over the Planetary Network.
|
||||
|
||||
|
||||
|
||||
## Questions and Feedback
|
||||
|
||||
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
|
|
||||
<h1> Terraform Resources </h1>
|
||||
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [Using Scheduler](./terraform_scheduler.md)
|
||||
- [Virtual Machine](./terraform_vm.md)
|
||||
- [Web Gateway](./terraform_vm_gateway.md)
|
||||
- [Kubernetes Cluster](./terraform_k8s.md)
|
||||
- [ZDB](./terraform_zdb.md)
|
||||
- [Zlogs](./terraform_zlogs.md)
|
||||
- [Quantum Safe Filesystem](./terraform_qsfs.md)
|
||||
- [QSFS on Micro VM](./terraform_qsfs_on_microvm.md)
|
||||
- [QSFS on Full VM](./terraform_qsfs_on_full_vm.md)
|
||||
- [CapRover](./terraform_caprover.md)
|
|
||||
<h1> Scheduler Resource </h1>
|
||||
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [How the Scheduler Works](#how-the-scheduler-works)
|
||||
- [Quick Example](#quick-example)
|
||||
|
||||
***
|
||||
|
||||
|
||||
## Introduction
|
||||
|
||||
Using the TFGrid scheduler enables users to automatically get the nodes that match their criteria. We present here some basic information on this resource.
|
||||
|
||||
|
||||
|
||||
## How the Scheduler Works
|
||||
|
||||
To better understand the scheduler, we summarize the main process:
|
||||
|
||||
- First, if `farm_id` is specified, the scheduler checks whether this farm has the Farmerbot enabled.
  - If so, it will try to find a suitable node using the Farmerbot.
- If the Farmerbot is not enabled, it will use the Grid Proxy to find a suitable node.
|
||||
|
||||
|
||||
|
||||
## Quick Example
|
||||
|
||||
Let's take a look at the following example:
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
version = "1.8.1-dev"
|
||||
}
|
||||
}
|
||||
}
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
locals {
|
||||
name = "testvm"
|
||||
}
|
||||
|
||||
resource "grid_scheduler" "sched" {
|
||||
requests {
|
||||
farm_id = 53
|
||||
name = "node1"
|
||||
cru = 3
|
||||
sru = 1024
|
||||
mru = 2048
|
||||
node_exclude = [33] # exclude node 33 from your search
|
||||
public_ips_count = 0 # this deployment needs 0 public ips
|
||||
public_config = false # this node does not need to have public config
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = [grid_scheduler.sched.nodes["node1"]]
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "newer network"
|
||||
}
|
||||
resource "grid_deployment" "d1" {
|
||||
name = local.name
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 2
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = file("~/.ssh/id_rsa.pub")
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "anothervm"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 1
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = file("~/.ssh/id_rsa.pub")
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
output "vm1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "vm1_ygg_ip" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
|
||||
output "vm2_ip" {
|
||||
value = grid_deployment.d1.vms[1].ip
|
||||
}
|
||||
output "vm2_ygg_ip" {
|
||||
value = grid_deployment.d1.vms[1].ygg_ip
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
From the example above, we take a closer look at the following section:
|
||||
|
||||
```
|
||||
resource "grid_scheduler" "sched" {
|
||||
requests {
|
||||
name = "node1"
|
||||
cru = 3
|
||||
sru = 1024
|
||||
mru = 2048
|
||||
node_exclude = [33] # exclude node 33 from your search
|
||||
public_ips_count = 0 # this deployment needs 0 public ips
|
||||
public_config = false # this node does not need to have public config
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
In this case, the user specifies the requirements that the nodes must match for the deployments.
|
||||
|
||||
Later on, the user can use the result of the scheduler, which contains the selected nodes, in the deployments:
|
||||
|
||||
```
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = [grid_scheduler.sched.nodes["node1"]]
|
||||
...
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
and
|
||||
|
||||
```
|
||||
resource "grid_deployment" "d1" {
|
||||
name = local.name
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "vm1"
|
||||
...
|
||||
}
|
||||
...
|
||||
}
|
||||
```
|
||||
|
|
||||
<h1> VM Deployment </h1>
|
||||
|
||||
<h2>Table of Contents </h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Template](#template)
|
||||
- [Using scheduler](#using-scheduler)
|
||||
- [Using Grid Explorer](#using-grid-explorer)
|
||||
- [Describing the overlay network for the project](#describing-the-overlay-network-for-the-project)
|
||||
- [Describing the deployment](#describing-the-deployment)
|
||||
- [Which flists to use](#which-flists-to-use)
|
||||
- [Remark multiple VMs](#remark-multiple-vms)
|
||||
- [Reference](#reference)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
The following provides the basic information to deploy a VM with Terraform on the TFGrid.
|
||||
|
||||
## Template
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
version = "1.8.1-dev"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = "FROM THE CREATE TWIN STEP"
|
||||
network = "dev" # or test to use testnet
|
||||
}
|
||||
|
||||
locals {
|
||||
name = "testvm"
|
||||
}
|
||||
|
||||
resource "grid_scheduler" "sched" {
|
||||
requests {
|
||||
name = "node1"
|
||||
cru = 3
|
||||
sru = 1024
|
||||
mru = 2048
|
||||
node_exclude = [33] # exclude node 33 from your search
|
||||
public_ips_count = 0 # this deployment needs 0 public ips
|
||||
public_config = false # this node does not need to have public config
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = [grid_scheduler.sched.nodes["node1"]]
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "newer network"
|
||||
}
|
||||
resource "grid_deployment" "d1" {
|
||||
name = local.name
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 2
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = file("~/.ssh/id_rsa.pub")
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "anothervm"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 1
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = file("~/.ssh/id_rsa.pub")
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
output "vm1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "vm1_ygg_ip" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
|
||||
output "vm2_ip" {
|
||||
value = grid_deployment.d1.vms[1].ip
|
||||
}
|
||||
output "vm2_ygg_ip" {
|
||||
value = grid_deployment.d1.vms[1].ygg_ip
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
## Using scheduler
|
||||
|
||||
- If you decide to use the [scheduler](terraform_scheduler.md) to find a suitable node, you then reference the node it returns in your resources, as in the example above; a sketch of a variant request is shown below.
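For instance, a request that also asks for a public IP and a public config would look like this (a minimal sketch reusing only the request fields shown above):

```terraform
resource "grid_scheduler" "sched" {
  requests {
    name             = "node1"
    cru              = 2
    sru              = 1024
    mru              = 2048
    public_ips_count = 1    # ask for a node with a free public IP
    public_config    = true # ask for a node that has a public config
  }
}
```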
|
||||
|
||||
## Using Grid Explorer
|
||||
|
||||
- Alternatively, you can specify the node directly, using the Grid Explorer to find a node that matches your requirements; see the sketch below.
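In that case the node ID is hardcoded in the resources instead of referencing the scheduler (a sketch; node `2` is only an example ID):

```terraform
resource "grid_network" "net1" {
  name        = local.name
  nodes       = [2] # node ID picked manually from the explorer
  ip_range    = "10.1.0.0/16"
  description = "newer network"
}

resource "grid_deployment" "d1" {
  name         = local.name
  node         = 2 # must be one of the nodes of the network above
  network_name = grid_network.net1.name
  # vms { ... } as in the template above
}
```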
|
||||
|
||||
## Describing the overlay network for the project
|
||||
|
||||
```terraform
|
||||
resource "grid_network" "net1" {
|
||||
nodes = [grid_scheduler.sched.nodes["node1"]]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "some network"
|
||||
add_wg_access = true
|
||||
}
|
||||
```
|
||||
|
||||
We tell Terraform that we will have a network on one node (the node ID returned from the scheduler), using the IP range `10.1.0.0/16`, and that WireGuard access is added for this network.
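Because `add_wg_access` is enabled, the generated WireGuard configuration can be exposed as an output (the attribute below is the same one used in the reference example at the end of this section):

```terraform
output "wg_config" {
  value = grid_network.net1.access_wg_config
}
```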
|
||||
|
||||
## Describing the deployment
|
||||
|
||||
```terraform
|
||||
resource "grid_deployment" "d1" {
|
||||
name = local.name
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 2
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = file("~/.ssh/id_rsa.pub")
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "anothervm"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 1
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = file("~/.ssh/id_rsa.pub")
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
It is a bit long for sure, but let's dissect it a bit.
|
||||
|
||||
```terraform
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
network_name = grid_network.net1.name
|
||||
ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "")
|
||||
```
|
||||
|
||||
- `node = grid_scheduler.sched.nodes["node1"]` means this deployment will happen on the node returned from the scheduler. Alternatively, the user can specify the node directly, e.g. `node = 2`; in that case the choice of the node is completely up to the user, who has to do the capacity planning. Check the [Node Finder](../../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria.
|
||||
- `network_name`: the network to deploy our project on; here we choose the `name` of network `net1`
|
||||
- `ip_range`: here we [lookup](https://www.terraform.io/docs/language/functions/lookup.html) the IP range of node `2`, with `""` as the default value
|
||||
|
||||
> Advanced note: direct map access fails during planning if the key doesn't exist, which happens in cases like adding a node to the network together with a new deployment on that node. It is therefore replaced with `lookup` to provide a default empty value that passes the planning validation; the value is validated anyway inside the plugin.
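A minimal sketch of the difference (node `2` is only an example key):

```terraform
resource "grid_deployment" "d1" {
  node         = 2
  network_name = grid_network.net1.name

  # Direct map access would fail at plan time if node 2 is not in the map yet:
  # ip_range = grid_network.net1.nodes_ip_range[2]

  # lookup() falls back to "" during planning; the real value is validated inside the plugin.
  ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "")

  # vms { ... } as in the example above
}
```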
|
||||
|
||||
## Which flists to use
|
||||
|
||||
See the [list of flists](../../../developers/flist/grid3_supported_flists.md).
|
||||
|
||||
## Remark multiple VMs
|
||||
|
||||
In Terraform, you can define repeated items of a block list like the following:
|
||||
|
||||
```
|
||||
listname {
|
||||
|
||||
}
|
||||
listname {
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
So, to add a VM:
|
||||
|
||||
```terraform
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 1
|
||||
publicip = true
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY ="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeq1MFCQOv3OCLO1HxdQl8V0CxAwt5AzdsNOL91wmHiG9ocgnq2yipv7qz+uCS0AdyOSzB9umyLcOZl2apnuyzSOd+2k6Cj9ipkgVx4nx4q5W1xt4MWIwKPfbfBA9gDMVpaGYpT6ZEv2ykFPnjG0obXzIjAaOsRthawuEF8bPZku1yi83SDtpU7I0pLOl3oifuwPpXTAVkK6GabSfbCJQWBDSYXXM20eRcAhIMmt79zo78FNItHmWpfPxPTWlYW02f7vVxTN/LUeRFoaNXXY+cuPxmcmXp912kW0vhK9IvWXqGAEuSycUOwync/yj+8f7dRU7upFGqd6bXUh67iMl7 ahmed@ahmedheaven"
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
- We give it a name within our deployment `vm1`
|
||||
- `flist` is used to define the flist to run within the VM. Check the [list of flists](../../../developers/flist/grid3_supported_flists.md)
|
||||
- `cpu` and `memory` are used to define the cpu and memory
|
||||
- `publicip` is used to define whether the VM requires a public IP or not
|
||||
- `entrypoint` is used to define the entrypoint, which in most cases is `/sbin/zinit init`; for flists based on full VMs it can be specific to each flist
|
||||
- `env_vars` is used to define the environment variables; in this example we define `SSH_KEY` to authorize SSH access to the machine
|
||||
Here we say this deployment will be on the node with ID `2`, using the overlay network defined before (`grid_network.net1.name`) and the IP range allocated to that specific node `2`.
|
||||
|
||||
The file describes only the desired state: a deployment of two VMs, their specifications in terms of CPU and memory, and some environment variables, e.g. an SSH key to SSH into the machines.
|
||||
|
||||
## Reference
|
||||
|
||||
A complete list of VM workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vms).
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
nodes = [8]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "newer network"
|
||||
add_wg_access = true
|
||||
}
|
||||
resource "grid_deployment" "d1" {
|
||||
node = 8
|
||||
network_name = grid_network.net1.name
|
||||
ip_range = lookup(grid_network.net1.nodes_ip_range, 8, "")
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 2
|
||||
publicip = true
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "anothervm"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 1
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
|
||||
}
|
||||
}
|
||||
}
|
||||
output "wg_config" {
|
||||
value = grid_network.net1.access_wg_config
|
||||
}
|
||||
output "node1_zmachine1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "node1_zmachine2_ip" {
|
||||
value = grid_deployment.d1.vms[1].ip
|
||||
}
|
||||
output "public_ip" {
|
||||
value = grid_deployment.d1.vms[0].computedip
|
||||
}
|
||||
|
||||
output "ygg_ip" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
```
|
|
||||
<h1> Terraform Web Gateway With VM </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Expose with Prefix](#expose-with-prefix)
|
||||
- [Expose with Full Domain](#expose-with-full-domain)
|
||||
- [Using Gateway Name on Private Networks (WireGuard)](#using-gateway-name-on-private-networks-wireguard)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
In this section, we provide the basic information for a VM web gateway using Terraform on the TFGrid.
|
||||
|
||||
## Expose with Prefix
|
||||
|
||||
A complete list of gateway name workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/name_proxy.md).
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
# this data source is used to break circular dependency in cases similar to the following:
|
||||
# vm: needs to know the domain in its init script
|
||||
# gateway_name: needs the ip of the vm to use as backend.
|
||||
# - the fqdn can be computed from grid_gateway_domain for the vm
|
||||
# - the backend can reference the vm ip directly
|
||||
data "grid_gateway_domain" "domain" {
|
||||
node = 7
|
||||
name = "ashraf"
|
||||
}
|
||||
resource "grid_network" "net1" {
|
||||
nodes = [8]
|
||||
ip_range = "10.1.0.0/24"
|
||||
name = "network"
|
||||
description = "newer network"
|
||||
add_wg_access = true
|
||||
}
|
||||
resource "grid_deployment" "d1" {
|
||||
node = 8
|
||||
network_name = grid_network.net1.name
|
||||
ip_range = lookup(grid_network.net1.nodes_ip_range, 8, "")
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/strm-helloworld-http-latest.flist"
|
||||
cpu = 2
|
||||
publicip = true
|
||||
memory = 1024
|
||||
env_vars = {
|
||||
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP ashraf@thinkpad"
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
resource "grid_name_proxy" "p1" {
|
||||
node = 7
|
||||
name = "ashraf"
|
||||
backends = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])]
|
||||
tls_passthrough = false
|
||||
}
|
||||
output "fqdn" {
|
||||
value = data.grid_gateway_domain.domain.fqdn
|
||||
}
|
||||
output "node1_zmachine1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "public_ip" {
|
||||
value = split("/",grid_deployment.d1.vms[0].computedip)[0]
|
||||
}
|
||||
|
||||
output "ygg_ip" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Please note that to use `grid_name_proxy` you should choose a node that has a public config with a domain set, like node 7 in the following example.
|
||||

|
||||
|
||||
Here
|
||||
|
||||
- We created a gateway domain resource `ashraf` to be deployed on gateway node `7`, which ends up as the domain `ashraf.ghent01.devnet.grid.tf`
|
||||
- We create a proxy for the gateway to send the traffic coming to `ashraf.ghent01.devnet.grid.tf` to the VM as a backend. We set `tls_passthrough = false` to let the gateway terminate the traffic; if you replace it with `true`, your backend service needs to be able to do the TLS termination itself (a sketch of that variant follows below)
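A sketch of the passthrough variant, mirroring the proxy above (with `tls_passthrough = true` the gateway forwards the TLS traffic as-is, so the service on the VM must serve the certificate itself):

```terraform
resource "grid_name_proxy" "p1" {
  node            = 7
  name            = "ashraf"
  backends        = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])]
  tls_passthrough = true # the backend service on the VM does the TLS termination
}
```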
|
||||
|
||||
## Expose with Full Domain
|
||||
|
||||
A complete list of gateway fqdn workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/fqdn_proxy.md).
|
||||
|
||||
This is similar to the example above; the only difference is that you need to create an `A record` at your DNS provider pointing `remote.omar.grid.tf` to the IPv4 address of gateway node `7`.
|
||||
|
||||
```
|
||||
|
||||
resource "grid_fqdn_proxy" "p1" {
|
||||
node = 7
|
||||
name = "workloadname"
|
||||
fqdn = "remote.omar.grid.tf"
|
||||
backends = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])]
|
||||
tls_passthrough = true
|
||||
}
|
||||
|
||||
output "fqdn" {
|
||||
value = grid_fqdn_proxy.p1.fqdn
|
||||
}
|
||||
```
|
||||
|
||||
## Using Gateway Name on Private Networks (WireGuard)
|
||||
|
||||
It is possible to create a VM with only a private IP (WireGuard) and use it as a backend for a gateway contract. This is done as follows:
|
||||
|
||||
- Create a gateway domain data source. This data source will construct the full domain so we can use it afterwards:
|
||||
|
||||
```
|
||||
data "grid_gateway_domain" "domain" {
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
name = "examp123456"
|
||||
}
|
||||
```
|
||||
|
||||
- Create a network resource:
|
||||
|
||||
```
|
||||
resource "grid_network" "net1" {
|
||||
nodes = [grid_scheduler.sched.nodes["node1"]]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "mynet"
|
||||
description = "newer network"
|
||||
}
|
||||
```
|
||||
|
||||
- Create a vm to host your service
|
||||
|
||||
```
|
||||
resource "grid_deployment" "d1" {
|
||||
name = "vm1"
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
...
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
- Create a `grid_name_proxy` resource using the network created above and the WireGuard IP of the VM that hosts the service. Also make sure the port matches the one your service listens on:
|
||||
|
||||
```
|
||||
resource "grid_name_proxy" "p1" {
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
name = "examp123456"
|
||||
backends = [format("http://%s:9000", grid_deployment.d1.vms[0].ip)]
|
||||
network = grid_network.net1.name
|
||||
tls_passthrough = false
|
||||
}
|
||||
```
|
||||
|
||||
- To know the full domain created by the data source above, you can show it via:
|
||||
|
||||
```
|
||||
output "fqdn" {
|
||||
value = data.grid_gateway_domain.domain.fqdn
|
||||
}
|
||||
```
|
||||
|
||||
- Now visit the domain; you should be able to reach your service hosted on the VM.
|
|
||||
<h1> Deploying a ZDB with terraform </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Example](#example)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
We provide a basic template for ZDB deployment with Terraform on the TFGrid.
|
||||
|
||||
A brief description of zdb fields can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-zdbs).
|
||||
|
||||
A more thorough description of zdb operation can be found in its parent [repo](https://github.com/threefoldtech/0-db).
|
||||
|
||||
## Example
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d1" {
|
||||
node = 4
|
||||
|
||||
zdbs{
|
||||
name = "zdb1"
|
||||
size = 10
|
||||
description = "zdb1 description"
|
||||
password = "zdbpasswd1"
|
||||
mode = "user"
|
||||
}
|
||||
zdbs{
|
||||
name = "zdb2"
|
||||
size = 2
|
||||
description = "zdb2 description"
|
||||
password = "zdbpasswd2"
|
||||
mode = "seq"
|
||||
}
|
||||
}
|
||||
|
||||
output "deployment_id" {
|
||||
value = grid_deployment.d1.id
|
||||
}
|
||||
|
||||
output "zdb1_endpoint" {
|
||||
value = format("[%s]:%d", grid_deployment.d1.zdbs[0].ips[0], grid_deployment.d1.zdbs[0].port)
|
||||
}
|
||||
|
||||
output "zdb1_namespace" {
|
||||
value = grid_deployment.d1.zdbs[0].namespace
|
||||
}
|
||||
```
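The same pattern works for the second namespace if you need its connection details too (a sketch using index `1`):

```terraform
output "zdb2_endpoint" {
  value = format("[%s]:%d", grid_deployment.d1.zdbs[1].ips[0], grid_deployment.d1.zdbs[1].port)
}

output "zdb2_namespace" {
  value = grid_deployment.d1.zdbs[1].namespace
}
```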
|
||||
|
||||
|
|
||||
<h1> Zlogs </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Using Zlogs](#using-zlogs)
|
||||
- [Creating a server](#creating-a-server)
|
||||
- [Streaming logs](#streaming-logs)
|
||||
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
Zlogs is a utility that allows you to stream VM logs to a remote location. You can find the full description [here](https://github.com/threefoldtech/zos/tree/main/docs/manual/zlogs)
|
||||
|
||||
## Using Zlogs
|
||||
|
||||
In Terraform, a VM has a `zlogs` field; it should contain a list of target URLs to stream logs to.
|
||||
|
||||
Valid protocols are: `ws`, `wss`, and `redis`.
|
||||
|
||||
For example, to deploy two VMs named "vm1" and "vm2", with vm1 streaming logs to vm2, this is what main.tf looks like:
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
nodes = [2, 4]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "some network description"
|
||||
add_wg_access = true
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d1" {
|
||||
node = 2
|
||||
network_name = grid_network.net1.name
|
||||
ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "")
|
||||
vms {
|
||||
name = "vm1" #streaming logs
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
entrypoint = "/sbin/zinit init"
|
||||
cpu = 2
|
||||
memory = 1024
|
||||
env_vars = {
|
||||
SSH_KEY = "PUT YOUR SSH KEY HERE"
|
||||
}
|
||||
zlogs = tolist([
|
||||
format("ws://%s:5000", replace(grid_deployment.d2.vms[0].computedip, "//.*/", "")),
|
||||
])
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d2" {
|
||||
node = 4
|
||||
network_name = grid_network.net1.name
|
||||
ip_range = lookup(grid_network.net1.nodes_ip_range, 4, "")
|
||||
vms {
|
||||
name = "vm2" #receiving logs
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 2
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = "PUT YOUR SSH KEY HERE"
|
||||
}
|
||||
publicip = true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
At this point, two VMs are deployed, and vm1 is ready to stream logs to vm2.
|
||||
But what is missing here is that vm1 is not actually producing any logs, and vm2 is not listening for incoming messages.
|
||||
|
||||
### Creating a server
|
||||
|
||||
- First, we will create a server on vm2. This should be a websocket server listening on port 5000 as per our zlogs definition in main.tf ```ws://%s:5000```.
|
||||
|
||||
- A simple Python websocket server looks like this:
|
||||
```
|
||||
import asyncio
|
||||
import websockets
|
||||
import gzip
|
||||
|
||||
|
||||
async def echo(websocket):
|
||||
async for message in websocket:
|
||||
data = gzip.decompress(message).decode('utf-8')
|
||||
f = open("output.txt", "a")
|
||||
f.write(data)
|
||||
f.close()
|
||||
|
||||
async def main():
|
||||
async with websockets.serve(echo, "0.0.0.0", 5000, ping_interval=None):
|
||||
await asyncio.Future()
|
||||
|
||||
asyncio.run(main())
|
||||
```
|
||||
- Note that incoming messages are decompressed since zlogs compresses any messages using gzip.
|
||||
- After a message is decompressed, it is then appended to `output.txt`.
|
||||
|
||||
### Streaming logs
|
||||
|
||||
- Zlogs streams anything written to stdout of the zinit process on a vm.
|
||||
- So, simply running ```echo "to be streamed" 1>/proc/1/fd/1``` on vm1 should successfully stream this message to vm2, and we should be able to see it in `output.txt`.
|
||||
- Also, if we want to stream a service's logs, a service definition file should be created in ```/etc/zinit/example.yaml``` on vm1 and should look like this:
|
||||
```
|
||||
exec: sh -c "echo 'to be streamed'"
|
||||
log: stdout
|
||||
```
|
||||
|