manual removed files

2024-04-15 17:49:09 +00:00
parent a567404ef3
commit c19931fd32
1763 changed files with 0 additions and 51340 deletions

@@ -1,18 +0,0 @@
<h1> Terraform Advanced </h1>
<h2> Table of Contents </h2>
- [Terraform Provider](./terraform_provider.html)
- [Terraform Provisioners](./terraform_provisioners.html)
- [Mounts](./terraform_mounts.html)
- [Capacity Planning](./terraform_capacity_planning.html)
- [Updates](./terraform_updates.html)
- [SSH Connection with Wireguard](./terraform_wireguard_ssh.md)
- [Set a Wireguard VPN](./terraform_wireguard_vpn.md)
- [Synced MariaDB Databases](./terraform_mariadb_synced_databases.md)
- [Nomad](./terraform_nomad.md)
- [Nextcloud Deployments](./terraform_nextcloud_toc.md)
- [Nextcloud All-in-One Deployment](./terraform_nextcloud_aio.md)
- [Nextcloud Single Deployment](./terraform_nextcloud_single.md)
- [Nextcloud Redundant Deployment](./terraform_nextcloud_redundant.md)
- [Nextcloud 2-Node VPN Deployment](./terraform_nextcloud_vpn.md)

@@ -1,159 +0,0 @@
<h1> Capacity Planning </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
- [Preparing the Requests](#preparing-the-requests)
***
## Introduction
In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/simple-dynamic/main.tf) we will discuss capacity planning on top of the TFGrid.
## Example
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
locals {
name = "testvm"
}
resource "grid_scheduler" "sched" {
requests {
name = "node1"
cru = 3
sru = 1024
mru = 2048
node_exclude = [33] # exclude node 33 from your search
public_ips_count = 0 # this deployment needs 0 public ips
public_config = false # this node does not need to have public config
}
}
resource "grid_network" "net1" {
name = local.name
nodes = [grid_scheduler.sched.nodes["node1"]]
ip_range = "10.1.0.0/16"
description = "newer network"
}
resource "grid_deployment" "d1" {
name = local.name
node = grid_scheduler.sched.nodes["node1"]
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 2
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "PUT YOUR SSH KEY HERE"
}
planetary = true
}
vms {
name = "anothervm"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "PUT YOUR SSH KEY HERE"
}
planetary = true
}
}
output "vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "vm1_ygg_ip" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "vm2_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "vm2_ygg_ip" {
value = grid_deployment.d1.vms[1].ygg_ip
}
```
## Preparing the Requests
```terraform
resource "grid_scheduler" "sched" {
# a machine for the first server instance
requests {
name = "server1"
cru = 1
sru = 256
mru = 256
}
# a machine for the second server instance
requests {
name = "server2"
cru = 1
sru = 256
mru = 256
}
# a name workload
requests {
name = "gateway"
public_config = true
}
}
```
Here we define a `list` of requests. Each request has a name and filter options, e.g. `cru`, `sru`, `mru`, `hru`, whether or not the node must have a `public_config`, the `public_ips_count` needed for this deployment, whether or not the node should be `dedicated`, whether or not it should be `distinct` from the other nodes in this planner, the `farm_id` to search in, the nodes to exclude from the search (`node_exclude`), and whether or not the node should be `certified`.
The full documentation for the capacity planner `scheduler` is available [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/scheduler.md).
After that, we can reference the `grid_scheduler` object in our code by request name instead of using a `node_id`.
For example:
```terraform
resource "grid_deployment" "server1" {
node = grid_scheduler.sched.nodes["server1"]
network_name = grid_network.net1.name
ip_range = lookup(grid_network.net1.nodes_ip_range, grid_scheduler.sched.nodes["server1"], "")
vms {
name = "firstserver"
flist = "https://hub.grid.tf/omar0.3bot/omarelawady-simple-http-server-latest.flist"
cpu = 1
memory = 256
rootfs_size = 256
entrypoint = "/main.sh"
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
}
env_vars = {
PATH = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
}
planetary = true
}
}
```
> Note: you need to call `distinct` while specifying the nodes in the network, because the scheduler may assign `server1` and `server2` to the same node. Example:
```terraform
resource "grid_network" "net1" {
name = local.name
nodes = distinct(values(grid_scheduler.sched.nodes))
ip_range = "10.1.0.0/16"
description = "newer network"
}
```
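To double-check which node IDs the scheduler actually selected after `terraform apply`, you can inspect the Terraform state. A minimal sketch from the shell, assuming the resource and request names used above:
```
# show the scheduler resource, including the resolved name-to-node map
terraform state show grid_scheduler.sched

# or evaluate a single assignment with the Terraform console
echo 'grid_scheduler.sched.nodes["server1"]' | terraform console
```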

@@ -1,585 +0,0 @@
<h1>MariaDB Synced Databases Between Two VMs</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Main Steps](#main-steps)
- [Prerequisites](#prerequisites)
- [Find Nodes with the ThreeFold Explorer](#find-nodes-with-the-threefold-explorer)
- [Set the VMs](#set-the-vms)
- [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform)
- [Create the Terraform Files](#create-the-terraform-files)
- [Deploy the 3Nodes with Terraform](#deploy-the-3nodes-with-terraform)
- [SSH into the 3Nodes](#ssh-into-the-3nodes)
- [Preparing the VMs for the Deployment](#preparing-the-vms-for-the-deployment)
- [Test the Wireguard Connection](#test-the-wireguard-connection)
- [Configure the MariaDB Database](#configure-the-mariadb-database)
- [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database)
- [Create User with Replication Grant](#create-user-with-replication-grant)
- [Verify the Access of the User](#verify-the-access-of-the-user)
- [Set the VMs to accept the MariaDB Connection](#set-the-vms-to-accept-the-mariadb-connection)
- [TF Template Worker Server Data](#tf-template-worker-server-data)
- [TF Template Master Server Data](#tf-template-master-server-data)
- [Set the MariaDB Databases on Both 3Nodes](#set-the-mariadb-databases-on-both-3nodes)
- [Install and Set GlusterFS](#install-and-set-glusterfs)
- [Conclusion](#conclusion)
***
# Introduction
In this ThreeFold Guide, we show how to deploy a VPN with Wireguard and create a synced MariaDB database between the two servers using GlusterFS, a scalable network filesystem. Any change in one VM's database will be echoed in the other VM's database. This kind of deployment can lead to useful server architectures.
# Main Steps
This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out!
To get an overview of the whole process, we present the main steps:
* Download the dependencies
* Find two 3Nodes on the TFGrid
* Deploy and set the VMs with Terraform
* Create a MariaDB database
* Set GlusterFS
# Prerequisites
* [Install Terraform](https://developer.hashicorp.com/terraform/downloads)
* [Install Wireguard](https://www.wireguard.com/install/)
You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the documentation for your operating system (Linux, macOS or Windows).
# Find Nodes with the ThreeFold Explorer
We first need to decide on which 3Nodes we will be deploying our workload.
We thus start by finding two 3Nodes with sufficient resources. For this current MariaDB guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for a 3Node with a public IPv4 address.
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
* For proper understanding, we give further information on some relevant columns:
* `ID` refers to the node ID
* `Free Public IPs` refers to available IPv4 public IP addresses
* `HRU` refers to HDD storage
* `SRU` refers to SSD storage
* `MRU` refers to RAM (memory)
* `CRU` refers to virtual cores (vcores)
* To quicken the process of finding proper 3Nodes, you can narrow down the search by adding filters:
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
* `Free SRU (GB)`: 50
* `Free MRU (GB)`: 2
* `Total CRU (Cores)`: 1
* `Free Public IP`: 2
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
# Set the VMs
## Create a Two Servers Wireguard VPN with Terraform
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file.
Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-synced-db`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
Modify the variables file to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3Nodes you will be deploying on.
### Create the Terraform Files
Open the terminal.
* Go to the home folder
* ```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-synced-db`:
* ```
mkdir -p terraform/deployment-synced-db
```
* ```
cd terraform/deployment-synced-db
```
* Create the `main.tf` file:
* ```
nano main.tf
```
* Copy the `main.tf` content and save the file.
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
variable "mnemonics" {
type = string
}
variable "SSH_KEY" {
type = string
}
variable "tfnodeid1" {
type = string
}
variable "tfnodeid2" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
provider "grid" {
mnemonics = var.mnemonics
network = "main"
}
locals {
name = "tfvm"
}
resource "grid_network" "net1" {
name = local.name
nodes = [var.tfnodeid1, var.tfnodeid2]
ip_range = "10.1.0.0/16"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
disks {
name = "disk1"
size = var.size
}
name = local.name
node = var.tfnodeid1
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
}
resource "grid_deployment" "d2" {
disks {
name = "disk2"
size = var.size
}
name = local.name
node = var.tfnodeid2
network_name = grid_network.net1.name
vms {
name = "vm2"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk2"
mount_point = "/disk2"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node1_zmachine2_ip" {
value = grid_deployment.d2.vms[0].ip
}
output "ygg_ip1" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "ygg_ip2" {
value = grid_deployment.d2.vms[0].ygg_ip
}
output "ipv4_vm1" {
value = grid_deployment.d1.vms[0].computedip
}
output "ipv4_vm2" {
value = grid_deployment.d2.vms[0].computedip
}
```
In this file, we name the first VM `vm1` and the second VM `vm2`. For ease of communication, in this guide we call `vm1` the master VM and `vm2` the worker VM.
In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. If so, adjust the commands in this guide accordingly.
* Create the `credentials.auto.tfvars` file:
* ```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
* ```
mnemonics = "..."
SSH_KEY = "..."
tfnodeid1 = "..."
tfnodeid2 = "..."
size = "50"
cpu = "1"
memory = "2048"
```
Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots by the content. Obviously, you can decide to increase or modify the quantity in the variables `size`, `cpu` and `memory`.
### Deploy the 3Nodes with Terraform
We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-synced-db` with the main and variables files.
* Initialize Terraform:
* ```
terraform init
```
* Apply Terraform to deploy the VPN:
* ```
terraform apply
```
After deployment, take note of the 3Nodes' IPv4 addresses. You will need those addresses to SSH into the 3Nodes.
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
* ```
terraform show
```
### SSH into the 3Nodes
* To [SSH into the 3Nodes](../../getstarted/ssh_guide/ssh_guide.md), write the following while making sure to set the proper IP address for each VM:
* ```
ssh root@3node_IPv4_Address
```
### Preparing the VMs for the Deployment
* Update and upgrade the system
* ```
apt update && apt upgrade -y && apt-get install apache2 -y
```
* After download, you might need to reboot the system for changes to be fully taken into account
* ```
reboot
```
* Reconnect to the VMs
### Test the Wireguard Connection
We now want to ping the VMs using Wireguard. This will ensure the connection is properly established.
First, we set Wireguard with the Terraform output.
* On your local computer, take Terraform's `wg_config` output and create a `wg.conf` file in the directory `/usr/local/etc/wireguard/`.
* ```
nano /usr/local/etc/wireguard/wg.conf
```
* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output; the WireGuard configuration stands between the `EOT` markers. A non-interactive alternative is sketched after this list.
* Start the WireGuard on your local computer:
* ```
wg-quick up wg
```
* To stop the wireguard service:
* ```
wg-quick down wg
```
> Note: If it doesn't work and you already did a WireGuard connection with the same file from Terraform (from a previous deployment perhaps), do `wg-quick down wg`, then `wg-quick up wg`.
This should set everything properly.
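Instead of copying the configuration by hand, you can also write Terraform's `wg_config` output straight to the file. A minimal sketch, assuming the outputs defined in `main.tf` above:
```
# run from the terraform/deployment-synced-db folder; use sudo if the directory is not writable
terraform output -raw wg_config > /usr/local/etc/wireguard/wg.conf
```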
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct:
* ```
ping 10.1.3.2
```
* ```
ping 10.1.4.2
```
If you correctly receive the packets for the two VMs, you know that the VPN is properly set.
For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
# Configure the MariaDB Database
## Download MariaDB and Configure the Database
* Download the MariaDB server and client on both the master VM and the worker VM
* ```
apt install mariadb-server mariadb-client -y
```
* Configure the MariaDB database
* ```
nano /etc/mysql/mariadb.conf.d/50-server.cnf
```
* Make the following changes (a quick way to verify them is sketched after this list)
* Add `#` in front of
* `bind-address = 127.0.0.1`
* Remove `#` in front of the following lines and replace `X` by `1` for the master VM and by `2` for the worker VM
```
#server-id = X
#log_bin = /var/log/mysql/mysql-bin.log
```
* Below the lines shown above add the following line:
```
binlog_do_db = tfdatabase
```
* Restart MariaDB
* ```
systemctl restart mysql
```
* Launch Mariadb
* ```
mysql
```
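Before creating the replication user, you can confirm from the shell of each VM that the edits above were applied. A small verification sketch; the values shown are for this guide's setup:
```
# the output should show bind-address commented out, server-id = 1 (or 2 on the worker),
# log_bin = /var/log/mysql/mysql-bin.log and binlog_do_db = tfdatabase
grep -E 'bind-address|server-id|log_bin|binlog_do_db' /etc/mysql/mariadb.conf.d/50-server.cnf
```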
## Create User with Replication Grant
* Do the following on both the master and the worker
* ```
CREATE USER 'repuser'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ;
FLUSH PRIVILEGES;
show master status\G;
```
## Verify the Access of the User
* Verify the access of the `repuser` user
```
SELECT host FROM mysql.user WHERE User = 'repuser';
```
* You want to see `%` in Host
## Set the VMs to accept the MariaDB Connection
### TF Template Worker Server Data
* Write the following in the Worker VM. The values of `MASTER_LOG_FILE` and `MASTER_LOG_POS` should match the `show master status` output on the Master VM.
* ```
CHANGE MASTER TO MASTER_HOST='10.1.3.2',
MASTER_USER='repuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```
* ```
start slave;
```
* ```
show slave status\G;
```
### TF Template Master Server Data
* Write the following in the Master VM
* ```
CHANGE MASTER TO MASTER_HOST='10.1.4.2',
MASTER_USER='repuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```
* ```
start slave;
```
* ```
show slave status\G;
```
## Set the MariaDB Databases on Both 3Nodes
We now set the MariaDB database. You should choose your own username and password. The password should be the same for the master and worker VMs.
* On the master VM, write:
```
CREATE DATABASE tfdatabase;
CREATE USER 'ncuser'@'%';
GRANT ALL PRIVILEGES ON tfdatabase.* TO ncuser@'%' IDENTIFIED BY 'password1234';
FLUSH PRIVILEGES;
```
* On the worker VM, write:
```
CREATE USER 'ncuser'@'%';
GRANT ALL PRIVILEGES ON tfdatabase.* TO ncuser@'%' IDENTIFIED BY 'password1234';
FLUSH PRIVILEGES;
```
* To see the databases, write the following:
```
show databases;
```
* To see users on MariaDB:
```
select user from mysql.user;
```
* To exit MariaDB:
```
exit;
```
# Install and Set GlusterFS
We will now install and set [GlusterFS](https://www.gluster.org/), a free and open-source scalable network filesystem.
* Install GlusterFS on both the master and worker VMs
* ```
add-apt-repository ppa:gluster/glusterfs-7 -y && apt install glusterfs-server -y
```
* Start the GlusterFS service on both VMs
* ```
systemctl start glusterd.service && systemctl enable glusterd.service
```
* From the master VM, probe the worker VM's IP address:
* ```
gluster peer probe 10.1.4.2
```
* See the peer status on the worker VM:
* ```
gluster peer status
```
* On the master VM, create the replicated volume with the master and worker IP addresses:
* ```
gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force
```
* Start Gluster:
* ```
gluster volume start vol1
```
* Check the status on the worker VM:
* ```
gluster volume status
```
* Mount the server with the master IP on the master VM:
* ```
mount -t glusterfs 10.1.3.2:/vol1 /var/www
```
* See if the mount is there on the master VM:
* ```
df -h
```
* Mount the Server with the worker IP on the worker VM:
* ```
mount -t glusterfs 10.1.4.2:/vol1 /var/www
```
* See if the mount is there on the worker VM:
* ```
df -h
```
We now make the mount permanent with the file `fstab` on both the master and worker VMs.
* To prevent the mount from being aborted if the server reboots, write the following on both servers:
* ```
nano /etc/fstab
```
* Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2):
* ```
10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
```
* Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2):
* ```
10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
```
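After saving the `fstab` file on both VMs, you can confirm that the entries are valid without rebooting. A quick check:
```
# mount anything listed in fstab that is not yet mounted, then confirm the GlusterFS volume is on /var/www
mount -a && df -h | grep vol1
```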
The databases of both VMs are accessible in `/var/www`. This means that any change in the folder `/var/www` of one VM will be reflected in the same folder of the other VM. In other words, the databases are now synced in real-time.
# Conclusion
You now have two VMs syncing their MariaDB databases. This can be very useful for a plethora of projects requiring redundancy in storage.
You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Wireguard, Terraform, MariaDB and GlusterFS.
As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

@@ -1,86 +0,0 @@
<h1> Deploying a VM with Mounts Using Terraform </h1>
<h2> Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
- [More Info](#more-info)
***
## Introduction
In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/mounts/main.tf), we will see how to deploy a VM and mount disks on it on the TFGrid.
## Example
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
resource "grid_network" "net1" {
nodes = [2, 4]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
}
resource "grid_deployment" "d1" {
node = 2
network_name = grid_network.net1.name
ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "")
disks {
name = "data"
size = 10
description = "volume holding app data"
}
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
publicip = true
memory = 1024
entrypoint = "/sbin/zinit init"
mounts {
disk_name = "data"
mount_point = "/app"
}
env_vars = {
SSH_KEY = "PUT YOUR SSH KEY HERE"
}
}
vms {
name = "anothervm"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "PUT YOUR SSH KEY HERE"
}
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node1_zmachine2_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "public_ip" {
value = grid_deployment.d1.vms[0].computedip
}
```
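Once the deployment is up, you can verify the mount by connecting to the VM over its public IP. A small sketch; `<public_ip>` stands for the `public_ip` output above (drop any `/24` suffix if present):
```
# the /app mount point should appear, backed by the 10 GB disk defined in the deployment
ssh root@<public_ip> 'df -h /app'
```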
## More Info
A complete list of Mount workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vmsmounts).

@@ -1,140 +0,0 @@
<h1> Nextcloud All-in-One Deployment </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Deploy a Full VM](#deploy-a-full-vm)
- [Set a Firewall](#set-a-firewall)
- [Set the DNS Record for Your Domain](#set-the-dns-record-for-your-domain)
- [Install Nextcloud All-in-One](#install-nextcloud-all-in-one)
- [Set BorgBackup](#set-borgbackup)
- [Conclusion](#conclusion)
***
## Introduction
We present a quick way to install Nextcloud All-in-One on the TFGrid. This guide is based heavily on the Nextcloud documentation available [here](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/). It is mostly a simple adaptation to the TFGrid, with some additional information on how to correctly set the firewall and the DNS record for your domain.
## Deploy a Full VM
* Deploy a Full VM with the [TF Dashboard](../../getstarted/ssh_guide/ssh_openssh.md) or [Terraform](../terraform_full_vm.md)
* Minimum specs:
* IPv4 Address
* 2 vcores
* 4096 MB of RAM
* 50 GB of Storage
* Take note of the VM IP address
* SSH into the Full VM
## Set a Firewall
We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).
It should already be installed on your system. If it is not, install it with the following command:
```
apt install ufw
```
For our security rules, we want to allow SSH, HTTP and HTTPS (443 and 8443).
We thus add the following rules:
* Allow SSH (port 22)
* ```
ufw allow ssh
```
* Allow HTTP (port 80)
* ```
ufw allow http
```
* Allow HTTPS (port 443)
* ```
ufw allow https
```
* Allow port 8443
* ```
ufw allow 8443
```
* Allow port 3478 for Nextcloud Talk
* ```
ufw allow 3478
```
* To enable the firewall, write the following:
* ```
ufw enable
```
* To see the current security rules, write the following:
* ```
ufw status verbose
```
You now have enabled the firewall with proper security rules for your Nextcloud deployment.
## Set the DNS Record for Your Domain
* Go to your domain name registrar (e.g. Namecheap)
* In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on:
* Type: A Record
* Host: @
* Value: <VM_IP_Address>
* TTL: Automatic
* It might take up to 30 minutes to set the DNS properly.
* To check if the A record has been registered, you can use a common DNS checker:
* ```
https://dnschecker.org/#A/<domain-name>
```
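If you prefer the command line, you can also query the A record directly. A quick sketch; replace `<domain-name>` with your own domain:
```
# ask a public resolver for the A record; the answer should be your VM's IPv4 address
dig +short A <domain-name> @1.1.1.1
```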
## Install Nextcloud All-in-One
For the rest of the guide, we follow the steps available on the Nextcloud website's tutorial [How to Install the Nextcloud All-in-One on Linux](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/).
* Install Docker
* ```
curl -fsSL get.docker.com | sudo sh
```
* Install Nextcloud AIO
* ```
sudo docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest
```
* Reach the AIO interface on your browser:
* ```
https://<domain_name>:8443
```
* Example: `https://nextcloudwebsite.com:8443`
* Take note of the Nextcloud password
* Log in with the given password
* Add your domain name and click `Submit`
* Click `Start containers`
* Click `Open your Nextcloud`
You can now easily access Nextcloud AIO with your domain URL!
## Set BorgBackup
On the AIO interface, you can easily set BorgBackup. Since we are using Linux, we use the mounting directory `/mnt/backup`. Make sure to take note of the backup password.
## Conclusion
Most of the information in this guide can be found on the Nextcloud official website. We presented this guide to show another way to deploy Nextcloud on the TFGrid.

@@ -1,908 +0,0 @@
<h1>Nextcloud Redundant Deployment</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Main Steps](#main-steps)
- [Prerequisites](#prerequisites)
- [Find Nodes with the ThreeFold Explorer](#find-nodes-with-the-threefold-explorer)
- [Set the VMs](#set-the-vms)
- [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform)
- [Create the Terraform Files](#create-the-terraform-files)
- [Deploy the 3nodes with Terraform](#deploy-the-3nodes-with-terraform)
- [SSH into the 3nodes](#ssh-into-the-3nodes)
- [Preparing the VMs for the Deployment](#preparing-the-vms-for-the-deployment)
- [Test the Wireguard Connection](#test-the-wireguard-connection)
- [Create the MariaDB Database](#create-the-mariadb-database)
- [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database)
- [Create User with Replication Grant](#create-user-with-replication-grant)
- [Verify the Access of the User](#verify-the-access-of-the-user)
- [Set the VMs to Accept the MariaDB Connection](#set-the-vms-to-accept-the-mariadb-connection)
- [TF Template Worker Server Data](#tf-template-worker-server-data)
- [TF Template Master Server Data](#tf-template-master-server-data)
- [Set the Nextcloud User and Database](#set-the-nextcloud-user-and-database)
- [Install and Set GlusterFS](#install-and-set-glusterfs)
- [Install PHP and Nextcloud](#install-php-and-nextcloud)
- [Create a Subdomain with DuckDNS](#create-a-subdomain-with-duckdns)
- [Worker File for DuckDNS](#worker-file-for-duckdns)
- [Set Apache](#set-apache)
- [Access Nextcloud on a Web Browser with the Subdomain](#access-nextcloud-on-a-web-browser-with-the-subdomain)
- [Enable HTTPS](#enable-https)
- [Install Certbot](#install-certbot)
- [Set the Certbot with the DNS Domain](#set-the-certbot-with-the-dns-domain)
- [Verify HTTPS Automatic Renewal](#verify-https-automatic-renewal)
- [Set a Firewall](#set-a-firewall)
- [Conclusion](#conclusion)
- [Acknowledgements and References](#acknowledgements-and-references)
***
# Introduction
In this Threefold Guide, we deploy a redundant [Nextcloud](https://nextcloud.com/) instance that is continually synced on two different 3node servers running on the [Threefold Grid](https://threefold.io/).
We will learn how to deploy two full virtual machines (Ubuntu 22.04) with [Terraform](https://www.terraform.io/). The Terraform deployment will be composed of a virtual private network (VPN) using [Wireguard](https://www.wireguard.com/). The two VMs will thus be connected in a private and secure network. Once this is done, we will link the two VMs together by setting up a [MariaDB](https://mariadb.org/) database and using [GlusterFS](https://www.gluster.org/). Then, we will install and deploy Nextcloud. We will add a DDNS (dynamic DNS) domain to the Nextcloud deployment. It will then be possible to connect to the Nextcloud instance over public internet. Nextcloud will be available over your computer and even your smart phone! We will also set HTTPS for the DDNS domain in order to make the Nextcloud instance as secure as possible. You are free to explore different DDNS options. In this guide, we will be using [DuckDNS](https://www.duckdns.org/) for simplicity.
The advantage of this redundant Nextcloud deployment is obvious: if one of the two VMs goes down, the Nextcloud instance will still be accessible, as the other VM will take the lead. Also, the two VMs will be continually synced in real-time. If the master node goes down, the data will be synced to the worker node, and the worker node will become the master node. Once the master VM goes back online, the data will be synced to the master node and the master node will retake the lead as the master node.
This kind of real-time backup of the database is not only limited to Nextcloud. You can use the same architecture to deploy different workloads while having the redundancy over two 3node servers. This architecture could be deployed over more than two 3nodes. Feel free to explore and let us know in the [Threefold Forum](http://forum.threefold.io/) if you come up with exciting and different variations of this kind of deployment.
As always, if you have questions concerning this guide, you can write a post on the [Threefold Forum](http://forum.threefold.io/).
Let's go!
# Main Steps
This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out!
To get an overview of the whole process, we present the main steps:
* Download the dependencies
* Find two 3nodes on the TF Grid
* Deploy and set the VMs with Terraform
* Create a MariaDB database
* Download and set GlusterFS
* Install PHP and Nextcloud
* Create a subdomain with DuckDNS
* Set Apache
* Access Nextcloud
* Add HTTPS protection
* Set a firewall
# Prerequisites
* [Install Terraform](../terraform_install.md)
* [Install Wireguard](https://www.wireguard.com/install/)
You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the documentation for your operating system (Linux, macOS or Windows).
# Find Nodes with the ThreeFold Explorer
We first need to decide on which 3Nodes we will be deploying our workload.
We thus start by finding two 3Nodes with sufficient resources. For this current Nextcloud guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for 3Nodes with each a public IPv4 address.
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
* Find two 3Nodes with suitable resources for the deployment and take note of their node IDs on the leftmost column `ID`
* For proper understanding, we give further information on some relevant columns:
* `ID` refers to the node ID
* `Free Public IPs` refers to available IPv4 public IP addresses
* `HRU` refers to HDD storage
* `SRU` refers to SSD storage
* `MRU` refers to RAM (memory)
* `CRU` refers to virtual cores (vcores)
* To quicken the process of finding proper 3Nodes, you can narrow down the search by adding filters:
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
* `Free SRU (GB)`: 50
* `Free MRU (GB)`: 2
* `Total CRU (Cores)`: 1
* `Free Public IP`: 2
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
Once you've found two 3Nodes, take note of their node IDs. You will need to use those IDs when creating the Terraform files.
# Set the VMs
## Create a Two Servers Wireguard VPN with Terraform
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-nextcloud`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
Modify the variables file to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3nodes you will be deploying on.
### Create the Terraform Files
Open the terminal.
* Go to the home folder
* ```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-nextcloud`:
* ```
mkdir -p terraform/deployment-nextcloud
```
* ```
cd terraform/deployment-nextcloud
```
* Create the `main.tf` file:
* ```
nano main.tf
```
* Copy the `main.tf` content and save the file.
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
variable "mnemonics" {
type = string
}
variable "SSH_KEY" {
type = string
}
variable "tfnodeid1" {
type = string
}
variable "tfnodeid2" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
provider "grid" {
mnemonics = var.mnemonics
network = "main"
}
locals {
name = "tfvm"
}
resource "grid_network" "net1" {
name = local.name
nodes = [var.tfnodeid1, var.tfnodeid2]
ip_range = "10.1.0.0/16"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
disks {
name = "disk1"
size = var.size
}
name = local.name
node = var.tfnodeid1
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
}
resource "grid_deployment" "d2" {
disks {
name = "disk2"
size = var.size
}
name = local.name
node = var.tfnodeid2
network_name = grid_network.net1.name
vms {
name = "vm2"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk2"
mount_point = "/disk2"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node1_zmachine2_ip" {
value = grid_deployment.d2.vms[0].ip
}
output "ygg_ip1" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "ygg_ip2" {
value = grid_deployment.d2.vms[0].ygg_ip
}
output "ipv4_vm1" {
value = grid_deployment.d1.vms[0].computedip
}
output "ipv4_vm2" {
value = grid_deployment.d2.vms[0].computedip
}
```
In this file, we name the first VM `vm1` and the second VM `vm2`. In this guide, we call `vm1` the master VM and `vm2` the worker VM.
In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. If so, adjust the commands in this guide accordingly.
* Create the `credentials.auto.tfvars` file:
* ```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
* ```
mnemonics = "..."
SSH_KEY = "..."
tfnodeid1 = "..."
tfnodeid2 = "..."
size = "50"
cpu = "1"
memory = "2048"
```
Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots by the content. Obviously, you can decide to set more storage (size). The memory and CPU should be sufficient for the Nextcloud deployment with the above numbers.
### Deploy the 3nodes with Terraform
We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-nextcloud` with the main and variables files.
* Initialize Terraform:
* ```
terraform init
```
* Apply Terraform to deploy the VPN:
* ```
terraform apply
```
After deployment, take note of the 3nodes' IPv4 addresses. You will need those addresses to SSH into the 3nodes.
### SSH into the 3nodes
* To [SSH into the 3nodes](../../getstarted/ssh_guide/ssh_guide.md), write the following:
* ```
ssh root@VM_IPv4_Address
```
### Preparing the VMs for the Deployment
* Update and upgrade the system
* ```
apt update && apt upgrade -y && apt-get install apache2 -y
```
* After download, reboot the system
* ```
reboot
```
* Reconnect to the VMs
### Test the Wireguard Connection
We now want to ping the VMs using Wireguard. This will ensure the connection is properly established.
For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
First, we set Wireguard with the Terraform output.
* On your local computer, take the Terraform's `wg_config` output and create a `wg.conf` file in the directory `/etc/wireguard/wg.conf`.
* ```
nano /etc/wireguard/wg.conf
```
* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The Wireguard output stands in between `EOT`.
* Start Wireguard on your local computer:
* ```
wg-quick up wg
```
* To stop the wireguard service:
* ```
wg-quick down wg
```
If it doesn't work and you already did a wireguard connection with the same file from Terraform (from a previous deployment perhaps), do `wg-quick down wg`, then `wg-quick up wg`.
This should set everything properly.
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct:
* ```
ping 10.1.3.2
```
* ```
ping 10.1.4.2
```
If you correctly receive the packets from the two VMs, you know that the VPN is properly set.
# Create the MariaDB Database
## Download MariaDB and Configure the Database
* Download MariaDB's server and client on both VMs
* ```
apt install mariadb-server mariadb-client -y
```
* Configure the MariaDB database
* ```
nano /etc/mysql/mariadb.conf.d/50-server.cnf
```
* Do the following changes
* Add `#` in front of
* `bind-address = 127.0.0.1`
* Remove `#` in front of the following lines and replace `X` by `1` on the master VM and by `2` on the worker VM
```
#server-id = X
#log_bin = /var/log/mysql/mysql-bin.log
```
* Below the lines shown above add the following line:
```
binlog_do_db = nextcloud
```
* Restart MariaDB
* ```
systemctl restart mysql
```
* Launch MariaDB
* ```
mysql
```
## Create User with Replication Grant
* Do the following on both VMs
* ```
CREATE USER 'repuser'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ;
FLUSH PRIVILEGES;
show master status\G;
```
## Verify the Access of the User
* Verify the access of the user
```
SELECT host FROM mysql.user WHERE User = 'repuser';
```
* You want to see `%` in Host
## Set the VMs to Accept the MariaDB Connection
### TF Template Worker Server Data
* Write the following in the worker VM
* ```
CHANGE MASTER TO MASTER_HOST='10.1.3.2',
MASTER_USER='repuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```
* ```
start slave;
```
* ```
show slave status\G;
```
### TF Template Master Server Data
* Write the following in the master VM
* ```
CHANGE MASTER TO MASTER_HOST='10.1.4.2',
MASTER_USER='repuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```
* ```
start slave;
```
* ```
show slave status\G;
```
## Set the Nextcloud User and Database
We now set the Nextcloud database. You should choose your own username and password. The password should be the same for the master and worker VMs.
* On the master VM, write:
```
CREATE DATABASE nextcloud;
CREATE USER 'ncuser'@'%';
GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234';
FLUSH PRIVILEGES;
```
* On the worker VM, write:
```
CREATE USER 'ncuser'@'%';
GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234';
FLUSH PRIVILEGES;
```
* To see the databases, write:
```
show databases;
```
* To see users, write:
```
select user from mysql.user;
```
* To exit MariaDB, write:
```
exit;
```
# Install and Set GlusterFS
We will now install and set [GlusterFS](https://www.gluster.org/), a free and open-source scalable network filesystem.
* Install GlusterFS on both the master and worker VMs
* ```
echo | add-apt-repository ppa:gluster/glusterfs-7 && apt install glusterfs-server -y
```
* Start the GlusterFS service on both VMs
* ```
systemctl start glusterd.service && systemctl enable glusterd.service
```
* From the master VM, probe the worker VM's IP address:
* ```
gluster peer probe 10.1.4.2
```
* See the peer status on the worker VM:
* ```
gluster peer status
```
* On the master VM, create the replicated volume with the master and worker IP addresses:
* ```
gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force
```
* Start GlusterFS on the master VM:
* ```
gluster volume start vol1
```
* Check the status on the worker VM:
* ```
gluster volume status
```
* Mount the server with the master IP on the master VM:
* ```
mount -t glusterfs 10.1.3.2:/vol1 /var/www
```
* See if the mount is there on the master VM:
* ```
df -h
```
* Mount the server with the worker IP on the worker VM:
* ```
mount -t glusterfs 10.1.4.2:/vol1 /var/www
```
* See if the mount is there on the worker VM:
* ```
df -h
```
We now make the mount permanent with the file `fstab` on both VMs.
* To prevent the mount from being aborted if the server reboots, write the following on both servers:
* ```
nano /etc/fstab
```
* Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2):
* ```
10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
```
* Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2):
* ```
10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
```
# Install PHP and Nextcloud
* Install PHP and the PHP modules for Nextcloud on both the master and the worker:
* ```
apt install php -y && apt-get install php zip libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-mysql php-bcmath php-gmp zip -y
```
We will now install Nextcloud. This is done only on the master VM.
* On both the master and worker VMs, go to the folder `/var/www`:
* ```
cd /var/www
```
* To install the latest Nextcloud version, go to the Nextcloud homepage:
* See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/).
* We now download Nextcloud on the master VM.
* ```
wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip
```
You only need to download Nextcloud on the master VM; since `/var/www` is shared through GlusterFS, the files will also be accessible on the worker VM.
* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress:
* ```
apt install p7zip-full -y
```
* ```
7z x nextcloud-27.0.1.zip -o/var/www/
```
* After the download, see if the Nextcloud file is there on the worker VM:
* ```
ls
```
* Then, we grant permissions to the folder. Do this on both the master VM and the worker VM.
* ```
chown www-data:www-data /var/www/nextcloud/ -R
```
# Create a Subdomain with DuckDNS
We want to create a subdomain to access Nextcloud over the public internet.
For this guide, we use DuckDNS to create a subdomain for our Nextcloud deployment. Note that this can be done with other services. We use DuckDNS for simplicity. We invite users to explore other methods as they see fit.
We create a public subdomain with DuckDNS. To set DuckDNS, you simply need to follow the steps on their website. Make sure to do this for both VMs.
* First, sign in on the website: [https://www.duckdns.org/](https://www.duckdns.org/).
* Then go to [https://www.duckdns.org/install.jsp](https://www.duckdns.org/install.jsp) and follow the steps. For this guide, we use `linux cron` as the operating system.
Hint: make sure to save the DuckDNS folder in your home directory. Write `cd ~` before creating the folder to be sure.
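With the `linux cron` option, the DuckDNS instructions have you create a `~/duckdns/duck.sh` script and run it from cron every five minutes. A minimal sketch of that setup on each VM, assuming the folder and file names from the DuckDNS page:
```
# make the update script executable and run it once; duck.log typically contains OK on success
chmod 700 ~/duckdns/duck.sh
~/duckdns/duck.sh
cat ~/duckdns/duck.log

# schedule the script to run every five minutes
(crontab -l 2>/dev/null; echo "*/5 * * * * ~/duckdns/duck.sh >/dev/null 2>&1") | crontab -
```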
## Worker File for DuckDNS
In our current scenario, we want to make sure the master VM stays the main IP address for the DuckDNS subdomain as long as the master VM is online. To do so, we add an `if` statement to the worker VM's `duck.sh` file. The process is as follows: the worker VM pings the master VM and, if it sees that the master VM is offline, it runs the command to update DuckDNS's subdomain with the worker VM's IP address. When the master VM comes back online, it runs its own `duck.sh` file within 5 minutes and the DuckDNS subdomain is updated with the master VM's IP address.
The content of the `duck.sh` file for the worker VM is the following. Make sure to replace the line `echo ...` with the line provided by DuckDNS and to replace `mastervm_IPv4_address` with the master VM's IP address.
```
ping -c 2 mastervm_IPv4_address
if [ $? != 0 ]
then
echo url="https://www.duckdns.org/update?domains=exampledomain&token=a7c4d0ad-114e-40ef-ba1d-d217904a50f2&ip=" | curl -k -o ~/duckdns/duck.log -K -
fi
```
Note: When the master VM goes offline, DuckDNS will switch the IP address from the master's to the worker's within at most 5 minutes. Without clearing the DNS cache, your browser might have some difficulties connecting to the updated IP address when reaching the URL `subdomain.duckdns.org`. Thus you might need to [clear your DNS cache](https://blog.hubspot.com/website/flush-dns). You can also use the [Tor browser](https://www.torproject.org/) to connect to Nextcloud. If the IP address changes, you can simply close the browser and reopen another session, as the browser will automatically clear the DNS cache.
# Set Apache
We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`.
* On both the master and worker VMs, write the following:
* ```
nano /etc/apache2/sites-available/nextcloud.conf
```
The file should look like this, with your own subdomain instead of `subdomain`:
```
<VirtualHost *:80>
DocumentRoot "/var/www/nextcloud"
ServerName subdomain.duckdns.org
ServerAlias www.subdomain.duckdns.org
ErrorLog ${APACHE_LOG_DIR}/nextcloud.error
CustomLog ${APACHE_LOG_DIR}/nextcloud.access combined
<Directory /var/www/nextcloud/>
Require all granted
Options FollowSymlinks MultiViews
AllowOverride All
<IfModule mod_dav.c>
Dav off
</IfModule>
SetEnv HOME /var/www/nextcloud
SetEnv HTTP_HOME /var/www/nextcloud
Satisfy Any
</Directory>
</VirtualHost>
```
* On both the master VM and the worker VM, write the following to enable the new virtual host file and the required Apache modules:
* ```
a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl
```
* Then, reload and restart Apache:
* ```
systemctl reload apache2 && systemctl restart apache2
```
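At any point, you can verify the Apache configuration syntax; the command below should report `Syntax OK`:
```
# check the virtual host and module configuration for syntax errors
apache2ctl configtest
```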
# Access Nextcloud on a Web Browser with the Subdomain
We now access Nextcloud over the public Internet.
* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain):
* ```
subdomain.duckdns.org
```
Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser.
* Choose a name and a password. For this guide, we use the following:
* ```
ncadmin
password1234
```
* Enter the Nextcloud Database information created with MariaDB and click install:
* ```
Database user: ncuser
Database password: password1234
Database name: nextcloud
Database location: localhost
```
Nextcloud will then proceed to complete the installation.
We use `localhost` as the database location. You do not need to specify MariaDB's port (`3306`), as it is already configured within the database.
After the installation, you can now access Nextcloud. To provide further security, we want to enable HTTPS for the subdomain.
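If the installer reports a database connection error, you can first test the credentials from the master VM's shell. A small sketch using the values above:
```
# connect as the Nextcloud database user and list the (still empty) tables
mysql -u ncuser -ppassword1234 -h localhost nextcloud -e "SHOW TABLES;"
```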
# Enable HTTPS
## Install Certbot
We will now enable HTTPS. This needs to be done on the master VM as well as the worker VM. This section can be done simultaneously on the two VMs. But make sure to do the next section on setting the Certbot with only one VM at a time.
To enable HTTPS, first install `letsencrypt` with `certbot`:
Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/)
* See if you have the latest version of snap:
* ```
snap install core; snap refresh core
```
* Remove certbot-auto:
* ```
apt-get remove certbot
```
* Install certbot:
* ```
snap install --classic certbot
```
* Ensure that certbot can be run:
* ```
ln -s /snap/bin/certbot /usr/bin/certbot
```
* Then, install certbot-apache:
* ```
apt install python3-certbot-apache -y
```
## Set the Certbot with the DNS Domain
To avoid errors, set HTTPS with the master VM and power off the worker VM.
* To do so with a 3node, you can simply comment the `vms` section of the worker VM in the Terraform `main.tf` file and do `terraform apply` on the terminal.
* Put `/*` one line above the section, and `*/` one line below the section `vms`:
```
/*
vms {
name = "vm2"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk2"
mount_point = "/disk2"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
*/
```
* Put `#` in front of the appropriate lines, as shown below:
```
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
#output "node1_zmachine2_ip" {
# value = grid_deployment.d2.vms[0].ip
#}
output "ygg_ip1" {
value = grid_deployment.d1.vms[0].ygg_ip
}
#output "ygg_ip2" {
# value = grid_deployment.d2.vms[0].ygg_ip
#}
output "ipv4_vm1" {
value = grid_deployment.d1.vms[0].computedip
}
#output "ipv4_vm2" {
# value = grid_deployment.d2.vms[0].computedip
#}
```
* To add the HTTPS protection, write the following line on the master VM with your own subdomain:
* ```
certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org
```
* Once the HTTPS is set, you can reset the worker VM:
* To reset the worker VM, simply remove `/*`, `*/` and `#` on the main file and redo `terraform apply` on the terminal.
Note: You then need to redo the same process with the worker VM. This time, make sure to set the master VM offline to avoid errors. This means that you should comment out the `vms` section of `vm1` instead of `vm2`.
## Verify HTTPS Automatic Renewal
* Make a dry run of the certbot renewal to verify that it is correctly set up.
* ```
certbot renew --dry-run
```
You now have HTTPS security on your Nextcloud instance.
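You can also list the certificates managed by Certbot, including their domains and expiry dates, at any time:
```
# show installed certificates and when they expire
certbot certificates
```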
# Set a Firewall
Finally, we want to set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).
It should already be installed on your system. If it is not, install it with the following command:
```
apt install ufw
```
For our security rules, we want to allow SSH, HTTP and HTTPS.
We thus add the following rules:
* Allow SSH (port 22)
* ```
ufw allow ssh
```
* Allow HTTP (port 80)
* ```
ufw allow http
```
* Allow HTTPS (port 443)
* ```
ufw allow https
```
* To enable the firewall, write the following:
* ```
ufw enable
```
* To see the current security rules, write the following:
* ```
ufw status verbose
```
You now have enabled the firewall with proper security rules for your Nextcloud deployment.
# Conclusion
If everything went smoothly, you should now be able to access Nextcloud over the Internet with HTTPS security from any computer or smartphone!
The Nextcloud database is synced in real-time on two different 3nodes. When one 3node goes offline, the database is still synchronized on the other 3node. Once the powered-off 3node goes back online, the database is synced automatically with the node that was powered off.
You can now [install Nextcloud](https://nextcloud.com/install/) on your local computer. You will then be able to "use the desktop clients to keep your files synchronized between your Nextcloud server and your desktop". You can also do regular backups with Nextcloud to ensure maximum resilience of your data. Check Nextcloud's [documentation](https://docs.nextcloud.com/server/latest/admin_manual/maintenance/backup.html) for more information on this.
You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Wireguard, Terraform, MariaDB, GlusterFS, PHP and Nextcloud. Now, you know how to deploy workloads on the Threefold Grid with an efficient architecture in order to ensure redundancy. This is just the beginning. The Threefold Grid has a somewhat infinite potential when it comes to deployments, workloads, architectures and server projects. Let's see where it goes from here!
This Nextcloud deployment could be improved in many ways and other guides might be published in the future with enhanced functionalities. Stay tuned for more Threefold Guides. If you have ideas on how to improve this guide, please let us know. We learn best when sharing knowledge.
# Acknowledgements and References
A big thank you to [Scott Yeager](https://github.com/scottyeager) for his help on brainstorming, troubleshooting and creating this tutorial. This guide wouldn't have been properly done without his time and dedication. This really is a team effort!
The main reference for this guide is this [amazing video](https://youtu.be/ARsqxUw1ONc) by NETVN82. Many steps were modified or added to make this suitable with Wireguard and the Threefold Grid. Other configurations are possible. We invite you to explore the possibilities offered by the Threefold Grid!
This guide has been inspired by Weynand Kuijpers' [great tutorial](https://youtu.be/DIhfSRKAKHw) on how to deploy Nextcloud with Terraform.

@@ -1,594 +0,0 @@
<h1>Nextcloud Single Deployment </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Main Steps](#main-steps)
- [Prerequisites](#prerequisites)
- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
- [Set the Full VM](#set-the-full-vm)
- [Overview](#overview)
- [Create the Terraform Files](#create-the-terraform-files)
- [Deploy the Full VM with Terraform](#deploy-the-full-vm-with-terraform)
- [SSH into the 3Node](#ssh-into-the-3node)
- [Prepare the Full VM](#prepare-the-full-vm)
- [Create the MariaDB Database](#create-the-mariadb-database)
- [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database)
- [Set the Nextcloud User and Database](#set-the-nextcloud-user-and-database)
- [Install PHP and Nextcloud](#install-php-and-nextcloud)
- [Create a Subdomain with DuckDNS](#create-a-subdomain-with-duckdns)
- [Set Apache](#set-apache)
- [Access Nextcloud on a Web Browser](#access-nextcloud-on-a-web-browser)
- [Enable HTTPS](#enable-https)
- [Install Certbot](#install-certbot)
- [Set the Certbot with the DNS Domain](#set-the-certbot-with-the-dns-domain)
- [Verify HTTPS Automatic Renewal](#verify-https-automatic-renewal)
- [Set a Firewall](#set-a-firewall)
- [Conclusion](#conclusion)
- [Acknowledgements and References](#acknowledgements-and-references)
***
# Introduction
In this Threefold Guide, we deploy a [Nextcloud](https://nextcloud.com/) instance on a full VM running on the [Threefold Grid](https://threefold.io/).
We will learn how to deploy a full virtual machine (Ubuntu 22.04) with [Terraform](https://www.terraform.io/), then install and deploy Nextcloud on it. We will add a DDNS (dynamic DNS) domain to the Nextcloud deployment, making it possible to connect to the Nextcloud instance over the public internet. Nextcloud will be available on your computer and even on your smartphone! We will also set HTTPS for the DDNS domain in order to make the Nextcloud instance as secure as possible. You are free to explore different DDNS options; in this guide, we will be using [DuckDNS](https://www.duckdns.org/) for simplicity.
As always, if you have questions concerning this guide, you can write a post on the [Threefold Forum](http://forum.threefold.io/).
Let's go!
# Main Steps
This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out!
To get an overview of the whole process, we present the main steps:
* Download the dependencies
* Find a 3Node on the TF Grid
* Deploy and set the VM with Terraform
* Install PHP and Nextcloud
* Create a subdomain with DuckDNS
* Set Apache
* Access Nextcloud
* Add HTTPS protection
* Set a firewall
# Prerequisites
- [Install Terraform](../terraform_install.md)
You need to download and properly install Terraform on your local computer. Simply follow the documentation for your operating system (Linux, macOS or Windows).
# Find a 3Node with the ThreeFold Explorer
We first need to decide on which 3Node we will be deploying our workload.
We thus start by finding a 3Node with sufficient resources. For this current Nextcloud guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for a 3Node with a public IPv4 address.
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
* For proper understanding, we give further information on some relevant columns:
* `ID` refers to the node ID
* `Free Public IPs` refers to available IPv4 public IP addresses
* `HRU` refers to HDD storage
* `SRU` refers to SSD storage
* `MRU` refers to RAM (memory)
* `CRU` refers to virtual cores (vcores)
* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Node.
* `Free SRU (GB)`: 50
* `Free MRU (GB)`: 2
* `Total CRU (Cores)`: 1
* `Free Public IP`: 2
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
Once you've found a 3Node, take note of its node ID. You will need to use this ID when creating the Terraform files.
# Set the Full VM
## Overview
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workload.
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file simply uses these variables (e.g. `var.size` for the disk size) and thus does not need to be changed. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy with the `main.tf` as is.
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-single-nextcloud`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
Modify the variables file to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on.
## Create the Terraform Files
Open the terminal and follow those steps.
* Go to the home folder
* ```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-single-nextcloud`:
* ```
mkdir -p terraform/deployment-single-nextcloud
```
* ```
cd terraform/deployment-single-nextcloud
```
* Create the `main.tf` file:
* ```
nano main.tf
```
* Copy the `main.tf` content and save the file.
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
variable "mnemonics" {
type = string
}
variable "SSH_KEY" {
type = string
}
variable "tfnodeid1" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
provider "grid" {
mnemonics = var.mnemonics
network = "main"
}
locals {
name = "tfvm"
}
resource "grid_network" "net1" {
name = local.name
nodes = [var.tfnodeid1]
ip_range = "10.1.0.0/16"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
disks {
name = "disk1"
size = var.size
}
name = local.name
node = var.tfnodeid1
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "ygg_ip1" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "ipv4_vm1" {
value = grid_deployment.d1.vms[0].computedip
}
```
In this file, we name the full VM as `vm1`.
* Create the `credentials.auto.tfvars` file:
* ```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
* ```
mnemonics = "..."
SSH_KEY = "..."
tfnodeid1 = "..."
size = "50"
cpu = "1"
memory = "2048"
```
Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node. Simply replace the three dots by the appropriate content. Obviously, you can decide to set more storage (size). The memory and CPU should be sufficient for the Nextcloud deployment with the above numbers.
## Deploy the Full VM with Terraform
We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-single-nextcloud` with the main and variables files.
* Initialize Terraform:
* ```
terraform init
```
* Apply Terraform to deploy the full VM:
* ```
terraform apply
```
After deployment, take note of the 3Node's IPv4 address. You will need this address to SSH into the 3Node.
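If you need the outputs again later, you can display them at any time without redeploying. This uses the standard `terraform output` command together with the `ipv4_vm1` output defined in the `main.tf` above:
```
terraform output ipv4_vm1
```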
## SSH into the 3Node
* To [SSH into the 3Node](../../getstarted/ssh_guide/ssh_guide.md), write the following:
* ```
ssh root@VM_IPv4_Address
```
## Prepare the Full VM
* Update and upgrade the system
* ```
apt update && apt upgrade && apt install apache2
```
* Once the update and upgrade are done, reboot the system
* ```
reboot
```
* Reconnect to the VM
# Create the MariaDB Database
## Download MariaDB and Configure the Database
* Download MariaDB's server and client
* ```
apt install mariadb-server mariadb-client
```
* Configure the MariaDB database
* ```
nano /etc/mysql/mariadb.conf.d/50-server.cnf
```
* Make the following changes
* Add `#` in front of
* `bind-address = 127.0.0.1`
* Remove `#` in front of the following lines and make sure the variable `server-id` is set to `1`
```
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
```
* Below the lines shown above, add the following line (a reference snippet of the resulting configuration is shown after these steps):
```
binlog_do_db = nextcloud
```
* Restart MariaDB
* ```
systemctl restart mysql
```
* Launch MariaDB
* ```
mysql
```
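For reference, here is how the relevant lines of `50-server.cnf` should look after the edits above. The exact surrounding content may differ between MariaDB versions; only these lines need to change:
```
#bind-address          = 127.0.0.1
server-id              = 1
log_bin                = /var/log/mysql/mysql-bin.log
binlog_do_db           = nextcloud
```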
## Set the Nextcloud User and Database
We now set the Nextcloud database. You should choose your own username and password.
* On the full VM, write:
```
CREATE DATABASE nextcloud;
CREATE USER 'ncuser'@'%';
GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234';
FLUSH PRIVILEGES;
```
* To see the databases, write:
```
show databases;
```
* To see users, write:
```
select user from mysql.user;
```
* To exit MariaDB, write:
```
exit;
```
# Install PHP and Nextcloud
* Install PHP and the PHP modules for Nextcloud:
* ```
apt install php zip libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp
```
We will now install Nextcloud.
* On the full VM, go to the folder `/var/www`:
* ```
cd /var/www
```
* To install the latest Nextcloud version, go to the Nextcloud homepage:
* See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/).
* We now download Nextcloud on the full VM.
* ```
wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip
```
* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress:
* ```
apt install p7zip-full
```
* ```
7z x nextcloud-27.0.1.zip -o/var/www/
```
* Then, we grant permissions to the folder.
* ```
chown www-data:www-data /var/www/nextcloud/ -R
```
# Create a Subdomain with DuckDNS
We want to create a subdomain to access Nextcloud over the public internet.
For this guide, we use DuckDNS to create a subdomain for our Nextcloud deployment. Note that this can be done with other services. We use DuckDNS for simplicity. We invite users to explore other methods as they see fit.
We create a public subdomain with DuckDNS. To set DuckDNS, you simply need to follow the steps on their website.
* First, sign in on the website: [https://www.duckdns.org/](https://www.duckdns.org/).
* Then go to [https://www.duckdns.org/install.jsp](https://www.duckdns.org/install.jsp) and follow the steps. For this guide, we use `linux cron` as the operating system.
Hint: make sure to create the DuckDNS folder in your home directory. Write `cd ~` before creating the folder to be sure.
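For reference, the `linux cron` method from the DuckDNS install page boils down to a small update script plus a cron entry, roughly as sketched below. `YOUR_SUBDOMAIN` and `YOUR_TOKEN` are placeholders for the values shown on your DuckDNS account page:
```
mkdir -p ~/duckdns && cd ~/duckdns

# duck.sh updates your DuckDNS record; replace YOUR_SUBDOMAIN and YOUR_TOKEN
echo 'echo url="https://www.duckdns.org/update?domains=YOUR_SUBDOMAIN&token=YOUR_TOKEN&ip=" | curl -k -o ~/duckdns/duck.log -K -' > duck.sh
chmod 700 duck.sh

# run the update every five minutes (add this line with `crontab -e`)
# */5 * * * * ~/duckdns/duck.sh >/dev/null 2>&1
```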
# Set Apache
We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`.
* On the full VM, write the following:
* ```
nano /etc/apache2/sites-available/nextcloud.conf
```
The file should look like this, with your own subdomain instead of `subdomain`:
```
<VirtualHost *:80>
DocumentRoot "/var/www/nextcloud"
ServerName subdomain.duckdns.org
ServerAlias www.subdomain.duckdns.org
ErrorLog ${APACHE_LOG_DIR}/nextcloud.error
CustomLog ${APACHE_LOG_DIR}/nextcloud.access combined
<Directory /var/www/nextcloud/>
Require all granted
Options FollowSymlinks MultiViews
AllowOverride All
<IfModule mod_dav.c>
Dav off
</IfModule>
SetEnv HOME /var/www/nextcloud
SetEnv HTTP_HOME /var/www/nextcloud
Satisfy Any
</Directory>
</VirtualHost>
```
* On the full VM, write the following to set the Nextcloud database with Apache and to enable the new virtual host file:
* ```
a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl
```
* Then, reload and restart Apache:
* ```
systemctl reload apache2 && systemctl restart apache2
```
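If the reload or restart fails, you can first check the virtual host syntax with Apache's built-in configuration test:
```
apache2ctl configtest
```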
# Access Nextcloud on a Web Browser
We now access Nextcloud over the public Internet.
* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain):
* ```
subdomain.duckdns.org
```
Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser.
* Choose a name and a password. For this guide, we use the following:
* ```
ncadmin
password1234
```
* Enter the Nextcloud Database information created with MariaDB and click install:
* ```
Database user: ncuser
Database password: password1234
Database name: nextcloud
Database location: localhost
```
Nextcloud will then proceed to complete the installation.
We use `localhost` as the database location. You do not need to specify MariaDB's port (`3306`), as it is already configured within the database.
After the installation, you can now access Nextcloud. To provide further security, we want to enable HTTPS for the subdomain.
# Enable HTTPS
## Install Certbot
We will now enable HTTPS on the full VM.
To enable HTTPS, first install `letsencrypt` with `certbot`:
Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/)
* See if you have the latest version of snap:
* ```
snap install core; snap refresh core
```
* Remove any older Certbot packages:
* ```
apt-get remove certbot
```
* Install certbot:
* ```
snap install --classic certbot
```
* Ensure that certbot can be run:
* ```
ln -s /snap/bin/certbot /usr/bin/certbot
```
* Then, install certbot-apache:
* ```
apt install python3-certbot-apache
```
## Set the Certbot with the DNS Domain
We now set the certbot with the DNS domain.
* To add the HTTPS protection, write the following line on the full VM with your own subdomain:
* ```
certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org
```
## Verify HTTPS Automatic Renewal
* Make a dry run of the certbot renewal to verify that it is correctly set up.
* ```
certbot renew --dry-run
```
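The snap-installed certbot normally registers a systemd timer that takes care of renewals automatically. As an optional check on the full VM, you can list the active timers and look for the certbot entry:
```
systemctl list-timers | grep certbot
```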
You now have HTTPS security on your Nextcloud instance.
# Set a Firewall
Finally, we want to set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).
It should already be installed on your system. If it is not, install it with the following command:
```
apt install ufw
```
For our security rules, we want to allow SSH, HTTP and HTTPS.
We thus add the following rules:
* Allow SSH (port 22)
* ```
ufw allow ssh
```
* Allow HTTP (port 80)
* ```
ufw allow http
```
* Allow https (port 443)
* ```
ufw allow https
```
* To enable the firewall, write the following:
* ```
ufw enable
```
* To see the current security rules, write the following:
* ```
ufw status verbose
```
You now have enabled the firewall with proper security rules for your Nextcloud deployment.
# Conclusion
If everything went smoothly, you should now be able to access Nextcloud over the Internet, with HTTPS security, from any computer or smartphone!
You can now [install Nextcloud](https://nextcloud.com/install/) on your local computer. You will then be able to "use the desktop clients to keep your files synchronized between your Nextcloud server and your desktop". You can also do regular backups with Nextcloud to ensure maximum resilience of your data. Check Nextcloud's [documentation](https://docs.nextcloud.com/server/latest/admin_manual/maintenance/backup.html) for more information on this.
You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Terraform, MariaDB, PHP and Nextcloud.
This Nextcloud deployment could be improved in many ways and other guides might be published in the future with enhanced functionalities. Stay tuned for more Threefold Guides. If you have ideas on how to improve this guide, please let us know. We learn best when sharing knowledge.
# Acknowledgements and References
A big thank you to [Scott Yeager](https://github.com/scottyeager) for his help on brainstorming, troubleshooting and creating this tutorial. This guide wouldn't have been properly done without his time and dedication. This really is a team effort!
This guide has been inspired by Weynand Kuijpers' [great tutorial](https://youtu.be/DIhfSRKAKHw) on how to deploy Nextcloud with Terraform.
This single Nextcloud instance guide is an adaptation from the [Nextcloud Redundant Deployment guide](terraform_nextcloud_redundant.md). The inspiration to make a single instance deployment guide comes from [RobertL](https://forum.threefold.io/t/threefold-guide-nextcloud-redundant-deployment-on-two-3node-servers/3915/3) on the ThreeFold Forum.
Thanks to everyone who helped shape this guide.

@@ -1,10 +0,0 @@
<h1> Nextcloud Deployments </h1>
We present here different Nextcloud deployments. While this section is focused on Nextcloud, these deployment architectures can be used as templates for other kinds of deployments on the TFGrid.
<h2> Table of Contents </h2>
- [Nextcloud All-in-One Deployment](./terraform_nextcloud_aio.md)
- [Nextcloud Single Deployment](./terraform_nextcloud_single.md)
- [Nextcloud Redundant Deployment](./terraform_nextcloud_redundant.md)
- [Nextcloud 2-Node VPN Deployment](./terraform_nextcloud_vpn.md)

@@ -1,343 +0,0 @@
<h1>Nextcloud 2-Node VPN Deployment</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [2-Node Terraform Deployment](#2-node-terraform-deployment)
- [Create the Terraform Files](#create-the-terraform-files)
- [Variables File](#variables-file)
- [Main File](#main-file)
- [Deploy the 2-Node VPN](#deploy-the-2-node-vpn)
- [Nextcloud Setup](#nextcloud-setup)
- [Nextcloud VM Prerequisites](#nextcloud-vm-prerequisites)
- [Prepare the VMs for the Rsync Daily Backup](#prepare-the-vms-for-the-rsync-daily-backup)
- [Create a Cron Job for the Rsync Daily Backup](#create-a-cron-job-for-the-rsync-daily-backup)
- [Future Projects](#future-projects)
- [Questions and Feedback](#questions-and-feedback)
***
# Introduction
This guide is a proof of concept showing that, with two VMs in a WireGuard VPN, you can run a Nextcloud AIO instance on the TFGrid on the first VM, set up a daily backup and update on it with Borgbackup, and keep a second daily copy of that backup on the second VM. In other words, we have two virtual machines: one VM with the Nextcloud instance and its backup, and another VM holding a copy of that backup.
This architecture leads to a higher redundancy level, since we can afford to lose one of the two VMs and still be able to retrieve the Nextcloud database. Note that to achieve this, we are creating a virtual private network (VPN) with WireGuard. This will connect the two VMs and allow for file transfers. While there are many ways to proceed, for this guide we will be using [ssh-keygen](https://linux.die.net/man/1/ssh-keygen), [Rsync](https://linux.die.net/man/1/rsync) and [Cron](https://linux.die.net/man/1/crontab).
Note that, in order to reduce the deployment cost, we set the minimum CPU and memory requirements for the Backup VM. We do not need high CPU and memory for this VM since it is only used for storage.
Note that this guide also makes use of the ThreeFold gateway. For this reason, this deployment can be set on any two 3Nodes on the TFGrid, i.e. there is no need for IPv4 on the two nodes we are deploying on, as long as we set a gateway on a gateway node.
For now, let's see how to achieve this redundant deployment with Rsync!
# 2-Node Terraform Deployment
For this guide, we are deploying a Nextcloud AIO instance along a Backup VM, enabling daily backups of both VMs. The two VMs are connected by a WireGuard VPN. The deployment will be using the [Nextcloud FList](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud) available in the **tf-images** ThreeFold Tech repository.
## Create the Terraform Files
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file simply uses these variables (e.g. **var.size** for the disk size) and thus does not need to be changed. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the main.tf as is.
For this example, we will be deploying the Nextcloud instance with a ThreeFold gateway and a gateway domain. Other configurations are possible.
### Variables File
* Copy the following content and save the file under the name `credentials.auto.tfvars`:
```
mnemonics = "..."
SSH_KEY = "..."
network = "main"
size_vm1 = "50"
cpu_vm1 = "2"
memory_vm1 = "4096"
size_vm2 = "50"
cpu_vm2 = "1"
memory_vm2 = "512"
gateway_id = "50"
vm1_id = "5453"
vm2_id = "12"
deployment_name = "nextcloudgatewayvpn"
nextcloud_flist = "https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist"
```
Make sure to add your own seed phrase and SSH public key. Simply replace the three dots with your own content. Note that you can deploy on a different node than node 5453 for the **vm1** node. If you want to deploy on another node than node 50 for the **gateway** node, make sure that you choose a gateway node. To find a gateway node, go to the [ThreeFold Dashboard](https://dashboard.grid.tf/) Nodes section of the Explorer and select **Gateways (Only)**.
Obviously, you can decide to increase or modify the quantity for the CPU, memory and size variables. Note that we set the minimum CPU and memory parameters for the Backup VM (**vm2**). This will reduce the cost of the deployment. Since the Backup VM is only used for storage, we don't need to set the CPU and memory higher.
### Main File
* Copy the following content and save the file under the name `main.tf`:
```
variable "mnemonics" {
type = string
default = "your mnemonics"
}
variable "network" {
type = string
default = "main"
}
variable "SSH_KEY" {
type = string
default = "your SSH pub key"
}
variable "deployment_name" {
type = string
}
variable "size_vm1" {
type = string
}
variable "cpu_vm1" {
type = string
}
variable "memory_vm1" {
type = string
}
variable "size_vm2" {
type = string
}
variable "cpu_vm2" {
type = string
}
variable "memory_vm2" {
type = string
}
variable "nextcloud_flist" {
type = string
}
variable "gateway_id" {
type = string
}
variable "vm1_id" {
type = string
}
variable "vm2_id" {
type = string
}
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
mnemonics = var.mnemonics
network = var.network
}
data "grid_gateway_domain" "domain" {
node = var.gateway_id
name = var.deployment_name
}
resource "grid_network" "net" {
nodes = [var.gateway_id, var.vm1_id, var.vm2_id]
ip_range = "10.1.0.0/16"
name = "network"
description = "My network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
node = var.vm1_id
network_name = grid_network.net.name
disks {
name = "data"
size = var.size_vm1
}
vms {
name = "vm1"
flist = var.nextcloud_flist
cpu = var.cpu_vm1
memory = var.memory_vm1
rootfs_size = 15000
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
GATEWAY = "true"
IPV4 = "false"
NEXTCLOUD_DOMAIN = data.grid_gateway_domain.domain.fqdn
}
mounts {
disk_name = "data"
mount_point = "/mnt/data"
}
}
}
resource "grid_deployment" "d2" {
disks {
name = "disk2"
size = var.size_vm2
}
node = var.vm2_id
network_name = grid_network.net.name
vms {
name = "vm2"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu_vm2
mounts {
disk_name = "disk2"
mount_point = "/disk2"
}
memory = var.memory_vm2
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
planetary = true
}
}
resource "grid_name_proxy" "p1" {
node = var.gateway_id
name = data.grid_gateway_domain.domain.name
backends = [format("http://%s:80", grid_deployment.d1.vms[0].ip)]
network = grid_network.net.name
tls_passthrough = false
}
output "wg_config" {
value = grid_network.net.access_wg_config
}
output "vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "vm2_ip" {
value = grid_deployment.d2.vms[0].ip
}
output "fqdn" {
value = data.grid_gateway_domain.domain.fqdn
}
```
## Deploy the 2-Node VPN
We now deploy the 2-node VPN with Terraform. Make sure that you are in the correct folder containing the main and variables files.
* Initialize Terraform:
* ```
terraform init
```
* Apply Terraform to deploy Nextcloud:
* ```
terraform apply
```
Note that, at any moment, if you want to see the information on your Terraform deployment, write the following:
* ```
terraform show
```
# Nextcloud Setup
* Access Nextcloud Setup
* Once you've deployed Nextcloud, you can access the Nextcloud Setup page by pasting on a browser the URL displayed on the line `fqdn = "..."` of the `terraform show` output. For more information on this, [read this documentation](../../../dashboard/solutions/nextcloud.md#nextcloud-setup).
* Create a backup and set a daily backup and update
* Make sure to create a backup with `/mnt/backup` as the mount point, and set a daily update and backup for your Nextcloud VM. For more information, [read this documentation](../../../dashboard/solutions/nextcloud.md#backups-and-updates).
> Note: By default, the daily Borgbackup is set at 4:00 UTC. If you change this parameter, make sure to adjust the moment the [Rsync backup](#create-a-cron-job-for-the-rsync-daily-backup) is done.
# Nextcloud VM Prerequisites
We need to install a few things on the Nextcloud VM before going further.
* Update the Nextcloud VM
* ```
apt update
```
* Install ping on the Nextcloud VM if you want to test the VPN connection (Optional)
* ```
apt install iputils-ping -y
```
* Install Rsync on the Nextcloud VM
* ```
apt install rsync
```
* Install nano on the Nextcloud VM
* ```
apt install nano
```
* Install Cron on the Nextcloud VM
* ```
apt install cron
```
# Prepare the VMs for the Rsync Daily Backup
* Test the VPN (Optional) with [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping)
* ```
ping <WireGuard_VM_IP_Address>
```
* Generate an SSH key pair on the Backup VM
* ```
ssh-keygen
```
* Take note of the public key in the Backup VM
* ```
cat ~/.ssh/id_rsa.pub
```
* Add the public key of the Backup VM in the Nextcloud VM
* ```
nano ~/.ssh/authorized_keys
```
> Make sure to put the Backup VM SSH public key before the public key already present in the file **authorized_keys** of the Nextcloud VM.
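For illustration, the **authorized_keys** file on the Nextcloud VM would then contain two entries, with the Backup VM key placed first. The keys below are placeholders only:
```
ssh-rsa AAAA...BACKUP_VM_PUBLIC_KEY... root@backupvm
ssh-ed25519 AAAA...YOUR_LOCAL_PUBLIC_KEY... you@yourcomputer
```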
# Create a Cron Job for the Rsync Daily Backup
We now set a daily cron job that will make a backup between the Nextcloud VM and the Backup VM using Rsync.
* Open the crontab on the Backup VM
* ```
crontab -e
```
* Add the cron job at the end of the file
* ```
0 8 * * * rsync -avz --no-perms -O --progress --delete --log-file=/root/rsync_storage.log root@10.1.3.2:/mnt/backup/ /mnt/backup/
```
> Note: By default, the Nextcloud automatic backup is set at 4:00 UTC. For this reason, we set the Rsync daily backup at 8:00 UTC.
> Note: To set Rsync with a script, [read this documentation](../../computer_it_basics/file_transfer.md#automate-backup-with-rsync).
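Before relying on the cron job, you can test the transfer manually from the Backup VM with a dry run. This sketch assumes the Nextcloud VM keeps the WireGuard IP `10.1.3.2` used in the cron line above; the `-n` flag makes Rsync list what it would copy without transferring anything:
```
rsync -avzn --no-perms -O --delete root@10.1.3.2:/mnt/backup/ /mnt/backup/
```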
# Future Projects
This concept can be expanded in many directions. We can generate a script to facilitate the process, we can set a script directly in an FList for minimal user configuration, and we can also explore MariaDB and GlusterFS instead of Rsync.
As a generic deployment, we can develop a weblet that makes a daily backup of any other ThreeFold Playground weblet.
# Questions and Feedback
We invite others to propose ideas and codes if they feel inspired!
If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.

@@ -1,359 +0,0 @@
<h1>Deploy a Nomad Cluster</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [What is Nomad?](#what-is-nomad)
- [Prerequisites](#prerequisites)
- [Create the Terraform Files](#create-the-terraform-files)
- [Main File](#main-file)
- [Credentials File](#credentials-file)
- [Deploy the Nomad Cluster](#deploy-the-nomad-cluster)
- [SSH into the Client and Server Nodes](#ssh-into-the-client-and-server-nodes)
- [SSH with the Planetary Network](#ssh-with-the-planetary-network)
- [SSH with WireGuard](#ssh-with-wireguard)
- [Destroy the Nomad Deployment](#destroy-the-nomad-deployment)
- [Conclusion](#conclusion)
***
## Introduction
In this ThreeFold Guide, we will learn how to deploy a Nomad cluster on the TFGrid with Terraform. We cover a basic Nomad cluster with three server nodes and two client nodes. After completing this guide, you will have sufficient knowledge to build your own personalized Nomad cluster.
## What is Nomad?
[Nomad](https://www.nomadproject.io/) is a simple and flexible scheduler and orchestrator to deploy and manage containers and non-containerized applications across on-premises and clouds at scale.
In the dynamic world of cloud computing, managing and orchestrating workloads across diverse environments can be a daunting task. Nomad emerges as a powerful solution, simplifying and streamlining the deployment, scheduling, and management of applications.
Nomad's elegance lies in its lightweight architecture and ease of use. It operates as a single binary, minimizing resource consumption and complexity. Its intuitive user interface and straightforward configuration make it accessible to a wide range of users, from novices to experienced DevOps.
Nomad's versatility extends beyond its user-friendliness. It seamlessly handles a wide array of workloads, including legacy applications, microservices, and batch jobs. Its adaptability extends to diverse environments, effortlessly orchestrating workloads across on-premises infrastructure and public clouds. Think of it as Kubernetes for humans!
## Prerequisites
* [Install Terraform](https://developer.hashicorp.com/terraform/downloads)
* [Install WireGuard](https://www.wireguard.com/install/)
You need to download and properly install Terraform and Wireguard on your local computer. Simply follow the documentation for your operating system (Linux, macOS or Windows).
If you are new to Terraform, feel free to read this basic [Terraform Full VM guide](../terraform_full_vm.md) to get you started.
## Create the Terraform Files
For this guide, we use two files to deploy with Terraform: a main file and a variables file. The variables file contains the environment variables and the main file contains the necessary information to deploy your workload.
To facilitate the deployment, only the environment variables file needs to be adjusted. The file `main.tf` will be using the environment variables from the variables files (e.g. `var.cpu` for the CPU parameter) and thus you do not need to change this file.
Of course, you can adjust the two files based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the main file as is.
Also note that this deployment uses both the Planetary network and WireGuard.
### Main File
We start by creating the main file for our Nomad cluster.
* Create a directory for your Terraform Nomad cluster
* ```
mkdir nomad
```
* ```
cd nomad
```
* Create the `main.tf` file
* ```
nano main.tf
```
* Copy the following `main.tf` template and save the file
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
variable "mnemonics" {
type = string
}
variable "SSH_KEY" {
type = string
}
variable "tfnodeid" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
provider "grid" {
mnemonics = var.mnemonics
network = "main"
}
locals {
name = "nomadcluster"
}
resource "grid_network" "net1" {
name = local.name
nodes = [var.tfnodeid]
ip_range = "10.1.0.0/16"
description = "nomad network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
disks {
name = "disk1"
size = var.size
}
name = local.name
node = var.tfnodeid
network_name = grid_network.net1.name
vms {
name = "server1"
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist"
cpu = var.cpu
memory = var.memory
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
entrypoint = "/sbin/zinit init"
ip = "10.1.3.2"
env_vars = {
SSH_KEY = var.SSH_KEY
}
planetary = true
}
vms {
name = "server2"
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist"
cpu = var.cpu
memory = var.memory
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
FIRST_SERVER_IP = "10.1.3.2"
}
planetary = true
}
vms {
name = "server3"
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist"
cpu = var.cpu
memory = var.memory
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
FIRST_SERVER_IP = "10.1.3.2"
}
planetary = true
}
vms {
name = "client1"
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-client-latest.flist"
cpu = var.cpu
memory = var.memory
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
FIRST_SERVER_IP = "10.1.3.2"
}
planetary = true
}
vms {
name = "client2"
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-client-latest.flist"
cpu = var.cpu
memory = var.memory
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
FIRST_SERVER_IP = "10.1.3.2"
}
planetary = true
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "server1_wg_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "server2_wg_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "server3_wg_ip" {
value = grid_deployment.d1.vms[2].ip
}
output "client1_wg_ip" {
value = grid_deployment.d1.vms[3].ip
}
output "client2_wg_ip" {
value = grid_deployment.d1.vms[4].ip
}
output "server1_planetary_ip" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "server2_planetary_ip" {
value = grid_deployment.d1.vms[1].ygg_ip
}
output "server3_planetary_ip" {
value = grid_deployment.d1.vms[2].ygg_ip
}
output "client1_planetary_ip" {
value = grid_deployment.d1.vms[3].ygg_ip
}
output "client2_planetary_ip" {
value = grid_deployment.d1.vms[4].ygg_ip
}
```
### Credentials File
We create a credentials file that will contain the environment variables. This file should be in the same directory as the main file.
* Create the `credentials.auto.tfvars` file
* ```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file
* ```
mnemonics = "..."
SSH_KEY = "..."
tfnodeid = "..."
size = "50"
cpu = "2"
memory = "1024"
```
Make sure to replace the three dots by your own information for `mnemonics` and `SSH_KEY`. You will also need to find a suitable node for your deployment and set its node ID (`tfnodeid`). Feel free to adjust the parameters `size`, `cpu` and `memory` if needed.
## Deploy the Nomad Cluster
We now deploy the Nomad Cluster with Terraform. Make sure that you are in the directory containing the `main.tf` file.
* Initialize Terraform
* ```
terraform init
```
* Apply Terraform to deploy the Nomad cluster
* ```
terraform apply
```
## SSH into the Client and Server Nodes
You can now SSH into the client and server nodes using both the Planetary network and WireGuard.
Note that the IP addresses will be shown under `Outputs` after running the command `terraform apply`, with `planetary_ip` for the Planetary network and `wg_ip` for WireGuard.
### SSH with the Planetary Network
* To [SSH with the Planetary network](../../getstarted/ssh_guide/ssh_openssh.md), write the following with the proper IP address
* ```
ssh root@planetary_ip
```
You now have SSH access over the Planetary network to the client and server nodes of your Nomad cluster.
### SSH with WireGuard
To SSH with WireGuard, we first need to set the proper WireGuard configurations.
* Create a file named `wg.conf` in the directory `/etc/wireguard`
* ```
nano /etc/wireguard/wg.conf
```
* Paste the content provided by the Terraform deployment in the file `wg.conf` and save it.
* Note that you can use `terraform show` to see the Terraform output. The WireGuard configuration (`wg_config`) stands between the two `EOT` markers.
* Start WireGuard on your local computer
* ```
wg-quick up wg
```
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the WireGuard IP of a node to make sure the connection is correct
* ```
ping wg_ip
```
We are now ready to SSH into the client and server nodes with WireGuard.
* To SSH with WireGuard, write the following with the proper IP address:
* ```
ssh root@wg_ip
```
You now have SSH access over WireGuard to the client and server nodes of your Nomad cluster. For more information on connecting with WireGuard, read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
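Once connected to one of the server nodes, you can verify that the cluster formed correctly. This assumes the FLists used above start the Nomad agent automatically; `nomad server members` and `nomad node status` are standard Nomad CLI commands that list the servers and the client nodes respectively:
```
nomad server members
nomad node status
```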
## Destroy the Nomad Deployment
If you want to destroy the Nomad deployment, write the following in the terminal:
* ```
terraform destroy
```
* Then write `yes` to confirm.
Make sure that you are in the corresponding Terraform folder when writing this command.
## Conclusion
You now have the basic knowledge to deploy a Nomad cluster on the TFGrid. Feel free to explore the many possibilities available that come with Nomad.
You can now use a Nomad cluster to deploy your workloads. For more information on this, read this documentation on [how to deploy a Redis workload on the Nomad cluster](https://developer.hashicorp.com/nomad/tutorials/get-started/gs-deploy-job).
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

@@ -1,53 +0,0 @@
<h1> Terraform Provider </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
- [Environment Variables](#environment-variables)
- [Remarks](#remarks)
***
## Introduction
We present the basics of the Terraform Provider.
## Example
``` terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
mnemonics = "FROM THE CREATE TWIN STEP"
network = grid network, one of: dev test qa main
key_type = key type registered on substrate (ed25519 or sr25519)
relay_url = example: "wss://relay.dev.grid.tf"
rmb_timeout = timeout duration in seconds for rmb calls
substrate_url = substrate url, example: "wss://tfchain.dev.grid.tf/ws"
}
```
## Environment Variables
The provider settings should also be recognizable as environment variables:
- `MNEMONICS`
- `NETWORK`
- `SUBSTRATE_URL`
- `KEY_TYPE`
- `RELAY_URL`
- `RMB_TIMEOUT`
The `*_URL` variables can be used to override the default URLs associated with the specified network.
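For example, instead of setting these values in the provider block, you could export them in your shell before running Terraform (the values below are placeholders for devnet):
```
export MNEMONICS="your mnemonics"
export NETWORK="dev"
export KEY_TYPE="sr25519"
export RELAY_URL="wss://relay.dev.grid.tf"
export SUBSTRATE_URL="wss://tfchain.dev.grid.tf/ws"
export RMB_TIMEOUT="60"
```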
## Remarks
- The Grid Terraform provider is hosted on the Terraform registry [here](https://registry.terraform.io/providers/threefoldtech/grid/latest/docs?pollNotifications=true)
- All provider input variables and their description can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/index.md)
- Capitalized environment variables can be used instead of writing them in the provider (e.g. MNEMONICS)

@@ -1,119 +0,0 @@
<h1> Terraform and Provisioner </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
- [Params docs](#params-docs)
- [Requirements](#requirements)
- [Connection Block](#connection-block)
- [Provisioner Block](#provisioner-block)
- [More Info](#more-info)
***
## Introduction
In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/external_provisioner/remote-exec_hello-world/main.tf), we will see how to deploy a VM and apply provisioner commands on it on the TFGrid.
## Example
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
locals {
name = "myvm"
}
resource "grid_network" "net1" {
nodes = [1]
ip_range = "10.1.0.0/24"
name = local.name
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
name = local.name
node = 1
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/grid3_ubuntu20.04-latest.flist"
entrypoint = "/init.sh"
cpu = 2
memory = 1024
env_vars = {
SSH_KEY = file("~/.ssh/id_rsa.pub")
}
planetary = true
}
connection {
type = "ssh"
user = "root"
agent = true
host = grid_deployment.d1.vms[0].ygg_ip
}
provisioner "remote-exec" {
inline = [
"echo 'Hello world!' > /root/readme.txt"
]
}
}
```
## Params docs
### Requirements
- the machine should have `ssh server` running
- the machine should have `scp` installed
### Connection Block
- defines how we will connect to the deployed machine
``` terraform
connection {
type = "ssh"
user = "root"
agent = true
host = grid_deployment.d1.vms[0].ygg_ip
}
```
- type: defines the service used to connect to the machine
- user: the user used for the SSH connection
- agent: if set, the provisioner will use the default SSH agent key to connect to the remote machine
- host: the IP or hostname of the remote machine
### Provisioner Block
- defines the actual provisioner behaviour
``` terraform
provisioner "remote-exec" {
inline = [
"echo 'Hello world!' > /root/readme.txt"
]
}
```
- remote-exec: the provisioner type we want to use; it can be remote-exec, local-exec or another type
- inline: This is a list of command strings. They are executed in the order they are provided. This cannot be provided with script or scripts.
- script: This is a path (relative or absolute) to a local script that will be copied to the remote resource and then executed. This cannot be provided with inline or scripts.
- scripts: This is a list of paths (relative or absolute) to local scripts that will be copied to the remote resource and then executed. They are executed in the order they are provided. This cannot be provided with inline or script.
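As an example of the `script` variant described above, here is a minimal sketch, assuming a local script named `setup.sh` sits next to your Terraform files:
``` terraform
provisioner "remote-exec" {
  script = "./setup.sh"
}
```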
### More Info
A complete list of provisioner parameters can be found [here](https://www.terraform.io/language/resources/provisioners/remote-exec).

@@ -1,55 +0,0 @@
<h1> Updating </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Updating with Terraform](#updating-with-terraform)
- [Adjustments](#adjustments)
***
## Introduction
We present ways to update deployments using Terraform. Note that this is not fully supported.
Some updates work, but the code is not finished; use at your own risk.
## Updating with Terraform
Updates are triggered by changing the deployment's fields.
So for example, if you have the following network resource:
```terraform
resource "grid_network" "net" {
nodes = [2]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
}
```
Then decided to add a node:
```terraform
resource "grid_network" "net" {
nodes = [2, 4]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
}
```
After calling `terraform apply`, the provider does the following:
- Add node 4 to the network.
- Update the version of the workload.
- Update the version of the deployment.
- Update the hash in the contract (the contract id will stay the same)
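You can preview these changes before applying them with the standard Terraform workflow:
```
terraform plan
```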
## Adjustments
There are workloads that don't support in-place updates (e.g. Zmachines). To change them, there are a couple of options (all of them perform a destroy/create, so data can be lost):
1. `terraform taint grid_deployment.d1` (next apply will destroy ALL workloads within grid_deployment.d1 and create a new deployment)
2. `terraform destroy --target grid_deployment.d1 && terraform apply --target grid_deployment.d1` (same as above)
3. Remove the VM, execute `terraform apply`, then add the VM back with the new config (this performs two updates but keeps neighboring workloads inside the same deployment intact). (CAUTION: this should only be done if the VM is the last one in the list of VMs, otherwise undesired behavior will occur.)

@@ -1,280 +0,0 @@
<h1>SSH Into a 3Node with Wireguard</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
- [Create the Terraform Files](#create-the-terraform-files)
- [Deploy the Micro VM with Terraform](#deploy-the-micro-vm-with-terraform)
- [Set the Wireguard Connection](#set-the-wireguard-connection)
- [SSH into the 3Node with Wireguard](#ssh-into-the-3node-with-wireguard)
- [Destroy the Terraform Deployment](#destroy-the-terraform-deployment)
- [Conclusion](#conclusion)
***
## Introduction
In this ThreeFold Guide, we show how simple it is to deploy a micro VM on the ThreeFold Grid with Terraform and to make an SSH connection with Wireguard.
## Prerequisites
* [Install Terraform](../terraform_install.md)
* [Install Wireguard](https://www.wireguard.com/install/)
You need to download and properly install Terraform and Wireguard on your local computer. Simply follow the linked documentation for your operating system (Linux, macOS or Windows).
## Find a 3Node with the ThreeFold Explorer
We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid.
We show here how to find a suitable 3Node using the ThreeFold Explorer.
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) to find a 3Node
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
* For proper understanding, we give further information on some relevant columns:
* `ID` refers to the node ID
* `Free Public IPs` refers to available IPv4 public IP addresses
* `HRU` refers to HDD storage
* `SRU` refers to SSD storage
* `MRU` refers to RAM (memory)
* `CRU` refers to virtual cores (vcores)
* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. Here's what would work for our current situation.
* `Free SRU (GB)`: 15
* `Free MRU (GB)`: 1
* `Total CRU (Cores)`: 1
Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
## Create the Terraform Files
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file simply uses these variables (e.g. `var.size` for the disk size) and thus does not need to be changed.
Of course, you can adjust the deployments based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-wg-ssh`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
Modify the variables file to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on.
Now let's create the Terraform files.
* Open the terminal and go to the home directory
* ```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-wg-ssh`:
* ```
mkdir -p terraform/deployment-wg-ssh
```
* ```
cd terraform/deployment-wg-ssh
```
* Create the `main.tf` file:
* ```
nano main.tf
```
* Copy the `main.tf` content and save the file.
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
variable "mnemonics" {
type = string
}
variable "SSH_KEY" {
type = string
}
variable "tfnodeid1" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
provider "grid" {
mnemonics = var.mnemonics
network = "main"
}
locals {
name = "tfvm"
}
resource "grid_network" "net1" {
name = local.name
nodes = [var.tfnodeid1]
ip_range = "10.1.0.0/16"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
disks {
name = "disk1"
size = var.size
}
name = local.name
node = var.tfnodeid1
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
```
* Create the `credentials.auto.tfvars` file:
* ```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content, set the node ID as well as your mnemonics and SSH public key, then save the file.
* ```
mnemonics = "..."
SSH_KEY = "..."
tfnodeid1 = "..."
size = "15"
cpu = "1"
memory = "512"
```
Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node server you wish to deploy on. Simply replace the three dots by the proper content.
## Deploy the Micro VM with Terraform
We now deploy the micro VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-ssh` containing the main and variables files.
* Initialize Terraform:
* ```
terraform init
```
* Apply Terraform to deploy the micro VM:
* ```
terraform apply
```
* Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment.
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
* ```
terraform show
```
## Set the Wireguard Connection
To set the Wireguard connection, on your local computer, you will need to take the Terraform `wg_config` output and create a `wg.conf` file at `/usr/local/etc/wireguard/wg.conf`. Note that the Terraform output starts and ends with `EOT`.
For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
* Create a file named `wg.conf` in the directory: `/usr/local/etc/wireguard/wg.conf`.
* ```
nano /usr/local/etc/wireguard/wg.conf
```
* Paste the content between the two `EOT` displayed after you set `terraform apply`.
* Start the wireguard:
* ```
wg-quick up wg
```
If you want to stop the Wireguard service, write the following on your terminal:
* ```
wg-quick down wg
```
> Note: If it doesn't work and you already did a Wireguard connection with the same file from Terraform (from a previous deployment), write on the terminal `wg-quick down wg`, then `wg-quick up wg`.
As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VM to make sure the Wireguard connection is correct. Make sure to replace `vm_wg_ip` with the proper IP address:
* ```
ping vm_wg_ip
```
* Note that, with this Terraform deployment, the Wireguard IP address of the micro VM is named `node1_zmachine1_ip`
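For reference, the generated `wg_config` output typically has the following shape once pasted into `wg.conf`. The keys, IPs and port below are placeholders, not values to copy:
```
[Interface]
Address = 100.64.1.2/32
PrivateKey = <private key generated by the provider>

[Peer]
PublicKey = <public key of the access node>
AllowedIPs = 10.1.0.0/16, 100.64.1.0/24
PersistentKeepalive = 25
Endpoint = <public IP of the access node>:<port>
```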
## SSH into the 3Node with Wireguard
To SSH into the 3Node with Wireguard, simply write the following in the terminal with the proper Wireguard IP address:
```
ssh root@vm_wg_ip
```
You now have access into the VM over Wireguard SSH connection.
## Destroy the Terraform Deployment
If you want to destroy the Terraform deployment, write the following in the terminal:
* ```
terraform destroy
```
* Then write `yes` to confirm.
Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-ssh`.
## Conclusion
In this simple ThreeFold Guide, you learned how to SSH into a 3Node with Wireguard and Terraform. Feel free to explore further Terraform and Wireguard.
As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

@@ -1,345 +0,0 @@
<h1>Deploy Micro VMs and Set a Wireguard VPN</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
- [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform)
- [Deploy the Micro VMs with Terraform](#deploy-the-micro-vms-with-terraform)
- [Set the Wireguard Connection](#set-the-wireguard-connection)
- [SSH into the 3Node](#ssh-into-the-3node)
- [Destroy the Terraform Deployment](#destroy-the-terraform-deployment)
- [Conclusion](#conclusion)
***
## Introduction
In this ThreeFold Guide, we will learn how to deploy two micro virtual machines (Ubuntu 22.04) with Terraform. The Terraform deployment will be composed of a virtual private network (VPN) using Wireguard. The two VMs will thus be connected in a private and secure network.
Note that this concept can be extended with more than two micro VMs. Once you understand this guide, you will be able to adjust and deploy your own personalized Wireguard VPN on the ThreeFold Grid.
## Prerequisites
* [Install Terraform](../terraform_install.md)
* [Install Wireguard](https://www.wireguard.com/install/)
You need to download and properly install Terraform and Wireguard on your local computer. Simply follow the linked documentation for your operating system (Linux, macOS or Windows).
## Find a 3Node with the ThreeFold Explorer
We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address.
We show here how to find a suitable 3Node using the ThreeFold Explorer.
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
* For proper understanding, we give further information on some relevant columns:
* `ID` refers to the node ID
* `Free Public IPs` refers to available IPv4 public IP addresses
* `HRU` refers to HDD storage
* `SRU` refers to SSD storage
* `MRU` refers to RAM (memory)
* `CRU` refers to virtual cores (vcores)
* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
* `Free SRU (GB)`: 15
* `Free MRU (GB)`: 1
* `Total CRU (Cores)`: 1
* `Free Public IP`: 2
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
## Create a Two Servers Wireguard VPN with Terraform
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file simply uses these variables (e.g. `var.size` for the disk size) and thus does not need to be changed.
Of course, you can adjust the deployments based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-wg-vpn`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
Modify the variables file to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3Nodes you will be deploying on.
Now let's create the Terraform files.
* Open the terminal and go to the home directory
* ```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-wg-vpn`:
* ```
mkdir -p terraform && cd $_
```
* ```
mkdir deployment-wg-vpn && cd $_
```
* Create the `main.tf` file:
* ```
nano main.tf
```
* Copy the `main.tf` content and save the file.
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
variable "mnemonics" {
type = string
}
variable "SSH_KEY" {
type = string
}
variable "tfnodeid1" {
type = string
}
variable "tfnodeid2" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
provider "grid" {
mnemonics = var.mnemonics
network = "main"
}
locals {
name = "tfvm"
}
resource "grid_network" "net1" {
name = local.name
nodes = [var.tfnodeid1, var.tfnodeid2]
ip_range = "10.1.0.0/16"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
disks {
name = "disk1"
size = var.size
}
name = local.name
node = var.tfnodeid1
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
}
resource "grid_deployment" "d2" {
disks {
name = "disk2"
size = var.size
}
name = local.name
node = var.tfnodeid2
network_name = grid_network.net1.name
vms {
name = "vm2"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk2"
mount_point = "/disk2"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node1_zmachine2_ip" {
value = grid_deployment.d2.vms[0].ip
}
output "ygg_ip1" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "ygg_ip2" {
value = grid_deployment.d2.vms[0].ygg_ip
}
output "ipv4_vm1" {
value = grid_deployment.d1.vms[0].computedip
}
output "ipv4_vm2" {
value = grid_deployment.d2.vms[0].computedip
}
```
In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. These addresses might differ in your own deployment, so adjust the commands in this guide accordingly.
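After running `terraform apply` (covered below), you can read the assigned addresses back from the Terraform outputs instead of guessing them; a quick sketch using the output names defined in the `main.tf` above:

```bash
terraform output node1_zmachine1_ip
terraform output node1_zmachine2_ip
```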
* Create the `credentials.auto.tfvars` file:
* ```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
* ```
mnemonics = "..."
SSH_KEY = "..."
tfnodeid1 = "..."
tfnodeid2 = "..."
size = "15"
cpu = "1"
memory = "512"
```
Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots with your own values.
Set the parameters for your VMs as you wish. The two servers will have the same parameters. For this example, we use the minimum parameters.
## Deploy the Micro VMs with Terraform
We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-vpn` containing the main and variables files.
* Initialize Terraform by writing the following in the terminal:
* ```
terraform init
```
* Apply the Terraform deployment:
* ```
terraform apply
```
* Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment.
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
* ```
terraform show
```
## Set the Wireguard Connection
To set up the WireGuard connection on your local computer, you will need to take the Terraform `wg_config` output and create a `wg.conf` file at `/usr/local/etc/wireguard/wg.conf`. Note that the Terraform output starts and ends with an `EOT` marker.
For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
* Create a file named `wg.conf` at `/usr/local/etc/wireguard/wg.conf`:
* ```
nano /usr/local/etc/wireguard/wg.conf
```
* Paste the content shown between the two `EOT` markers in the output of `terraform apply` and save the file. You can also write the output directly to the file, as shown in the sketch after this list.
* Start the WireGuard interface:
* ```
wg-quick up wg
```
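If you prefer not to copy the output by hand, you can write it straight to the configuration file; a minimal sketch, assuming the deployment above (adjust the path to your system):

```bash
# Write the raw wg_config output (without the EOT markers) to the WireGuard config file
terraform output -raw wg_config > /usr/local/etc/wireguard/wg.conf
```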
If you want to stop the Wireguard service, write the following on your terminal:
* ```
wg-quick down wg
```
> Note: If it doesn't work and you already established a WireGuard connection with the same file from a previous Terraform deployment, run `wg-quick down wg` in the terminal, then `wg-quick up wg`.
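To check that the tunnel is up and that a handshake occurred, you can inspect the interface; a quick check, assuming the interface is named `wg` as above:

```bash
wg show wg
```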
As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VMs to make sure the Wireguard connection is correct. Make sure to replace `wg_vm_ip` with the proper IP address for each VM:
* ```
ping wg_vm_ip
```
## SSH into the 3Node
You can now SSH into the 3Nodes with either Wireguard or IPv4.
To SSH with Wireguard, write the following with the proper IP address for each 3Node:
```
ssh root@vm_wg_ip
```
To SSH with IPv4, write the following for each 3Node:
```
ssh root@vm_IPv4
```
You now have SSH access to the VMs over WireGuard and IPv4.
## Destroy the Terraform Deployment
If you want to destroy the Terraform deployment, write the following in the terminal:
* ```
terraform destroy
```
* Then write `yes` to confirm.
Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-vpn`.
## Conclusion
In this ThreeFold Guide, we learned how easy it is to deploy a VPN with WireGuard and Terraform. You can adjust the parameters as you like and explore different possibilities.
As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

View File

@@ -1,270 +0,0 @@
# Grid provider for terraform
The provider repository contains:
- A resource and a data source (`internal/provider/`)
- Examples (`examples/`)
## Requirements
- [Terraform](https://www.terraform.io/downloads.html) >= 0.13.x
- [Go](https://golang.org/doc/install) >= 1.15
## Building The Provider
Note: please clone all of the following repos in the same directory
- Clone github.com/threefoldtech/zos (switch to the master-3 branch)
- Clone github.com/threefoldtech/tf_terraform_provider (the deployment_resource branch)
- Enter the provider repository directory
```bash
go get
mkdir -p ~/.terraform.d/plugins/threefoldtech.com/providers/grid/0.1/linux_amd64
go build -o terraform-provider-grid
mv terraform-provider-grid ~/.terraform.d/plugins/threefoldtech.com/providers/grid/0.1/linux_amd64
```
## Example deployment
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {}
resource "grid_deployment" "d1" {
node = 2
disks {
name = "mydisk1"
size = 2
description = "this is my disk description1"
}
disks {
name = "mydisk2"
size=2
description = "this is my disk2"
}
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 2048
entrypoint = "/sbin/zinit init"
mounts {
disk_name = "mydisk1"
mount_point = "/opt"
}
mounts {
disk_name = "mydisk2"
mount_point = "/test"
}
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP"
}
}
}
```
## Using the provider
To create your twin, please check [grid substrate getting started](grid_substrate_getting_started).
```bash
./msgbusd --twin <TWIN_ID> #run message bus with your twin id
cd examples/resources
export MNEMONICS="<mnemonics words>"
terraform init && terraform apply
```
## Destroying deployment
```bash
terraform destroy
```
## More examples
A two-machine deployment with the first VM using a public IP:
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
resource "grid_network" "net1" {
nodes = [2]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
node = 2
network_name = grid_network.net1.name
ip_range = grid_network.net1.nodes_ip_range["2"]
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
publicip = true
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
}
}
vms {
name = "anothervm"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
}
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node1_vm2_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "public_ip" {
value = grid_deployment.d1.vms[0].computedip
}
```
A multi-node deployment:
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
resource "grid_network" "net1" {
nodes = [4, 2]
ip_range = "172.20.0.0/16"
name = "net1"
description = "new network"
}
resource "grid_deployment" "d1" {
node = 4
network_name = grid_network.net1.name
ip_range = grid_network.net1.deployment_info[0].ip_range
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
}
}
}
resource "grid_deployment" "d2" {
node = 2
network_name = grid_network.net1.name
ip_range = grid_network.net1.nodes_ip_range["2"]
vms {
name = "vm3"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
}
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node2_vm1_ip" {
value = grid_deployment.d2.vms[0].ip
}
```
A deployment with ZDBs (Zero-DBs):
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
resource "grid_deployment" "d1" {
node = 2
zdbs{
name = "zdb1"
size = 1
description = "zdb1 description"
password = "zdbpasswd1"
mode = "user"
}
zdbs{
name = "zdb2"You can easily check using [explorer-ui](@explorer_home) ,
size = 2
description = "zdb2 description"
password = "zdbpasswd2"
mode = "seq"
}
}
output "deployment_id" {
value = grid_deployment.d1.id
}
```

View File

@@ -1,506 +0,0 @@
<h1> Terraform Caprover </h1>
<h2>Table of Contents</h2>
- [What is CapRover?](#what-is-caprover)
- [Features of Caprover](#features-of-caprover)
- [Prerequisites](#prerequisites)
- [How to Run CapRover on ThreeFold Grid 3](#how-to-run-caprover-on-threefold-grid-3)
- [Clone the Project Repo](#clone-the-project-repo)
- [A) leader node deployment/setup:](#a-leader-node-deploymentsetup)
- [Step 1: Deploy a Leader Node](#step-1-deploy-a-leader-node)
- [Step 2: Connect Root Domain](#step-2-connect-root-domain)
- [Note](#note)
- [Step 3: CapRover Root Domain Configurations](#step-3-caprover-root-domain-configurations)
- [Step 4: Access the Captain Dashboard](#step-4-access-the-captain-dashboard)
- [To allow cluster mode](#to-allow-cluster-mode)
- [B) Worker Node Deployment/setup:](#b-worker-node-deploymentsetup)
- [Implementation Details](#implementation-details)
***
## What is CapRover?
[CapRover](https://caprover.com/) is an easy-to-use app/database deployment and web server manager that works for a variety of applications such as Node.js, Ruby, PHP, Postgres, and MongoDB. It runs fast and is very robust, as it uses Docker, Nginx, LetsEncrypt, and NetData under the hood behind its user-friendly interface.
Here's a link to CapRover's open source repository on [GitHub](https://github.com/caprover/caprover).
## Features of Caprover
- CLI for automation and scripting
- Web GUI for ease of access and convenience
- No lock-in: Remove CapRover and your apps keep working!
- Docker Swarm under the hood for containerization and clustering.
- Nginx (fully customizable template) under the hood for load-balancing.
- Let's Encrypt under the hood for free SSL (HTTPS).
- **One-Click Apps**: Deploying one-click apps is a matter of seconds! MongoDB, Parse, MySQL, WordPress, Postgres and many more.
- **Fully Customizable**: Optionally fully customizable nginx config allowing you to enable HTTP2, specific caching logic, custom SSL certs, etc.
- **Cluster Ready**: Attach more nodes and create a cluster in seconds! CapRover automatically configures nginx to load balance.
- **Increase Productivity**: Focus on your apps, not the bells and whistles needed just to run them.
- **Easy Deploy**: Many ways to deploy. You can upload your source from the dashboard, use the command line `caprover deploy`, use webhooks and build upon `git push`.
## Prerequisites
- Domain Name:
After installation, you will need to point a wildcard DNS entry to your CapRover IP address.
Note that you can use CapRover without a domain too, but you won't be able to set up HTTPS or add a `Self hosted Docker Registry`.
- Terraform installed to provision, adjust and tear down infrastructure using the tf configuration files provided here.
- Yggdrasil installed and enabled for end-to-end encrypted IPv6 networking.
- An account created on [Polkadot](https://polkadot.js.org/apps/?rpc=wss://tfchain.dev.threefold.io/ws#/accounts), a twin ID obtained, and your mnemonics saved.
- TFT in your account balance (in development, transfer some test TFT from the ALICE account).
## How to Run CapRover on ThreeFold Grid 3
In this guide, we will use CapRover to set up your own private platform as a service (PaaS) on TFGrid 3 infrastructure.
### Clone the Project Repo
```sh
git clone https://github.com/freeflowuniverse/freeflow_caprover.git
```
### A) leader node deployment/setup:
#### Step 1: Deploy a Leader Node
Create a leader CapRover node using Terraform. Here's an example:
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
mnemonics = "<your-mnemonics>"
network = "dev" # or test to use testnet
}
resource "grid_network" "net0" {
nodes = [4]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d0" {
node = 4
network_name = grid_network.net0.name
ip_range = lookup(grid_network.net0.nodes_ip_range, 4, "")
disks {
name = "data0"
# will hold images, volumes etc. modify the size according to your needs
size = 20
description = "volume holding docker data"
}
disks {
name = "data1"
# will hold data related to caprover config, nginx and letsencrypt files.
size = 5
description = "volume holding captain data"
}
vms {
name = "caprover"
flist = "https://hub.grid.tf/samehabouelsaad.3bot/abouelsaad-caprover-tf_10.0.1_v1.0.flist"
# modify the cores according to your needs
cpu = 4
publicip = true
# modify the memory according to your needs
memory = 8192
entrypoint = "/sbin/zinit init"
mounts {
disk_name = "data0"
mount_point = "/var/lib/docker"
}
mounts {
disk_name = "data1"
mount_point = "/captain"
}
env_vars = {
"PUBLIC_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576"
# SWM_NODE_MODE env var is required, should be "leader" or "worker"
# leader: will run sshd, containerd, dockerd as zinit services plus the caprover service in leader mode, which starts the caprover, letsencrypt and nginx containers.
# worker: will run sshd, containerd, dockerd as zinit services plus the caprover service in worker mode, which only joins the swarm cluster. Check the worker terraform file example.
"SWM_NODE_MODE" = "leader"
# CAPROVER_ROOT_DOMAIN is an optional env var; by providing it you can access the captain dashboard after vm initialization by visiting http://captain.your-root-domain
# otherwise you will have to add the root domain manually from the captain dashboard by visiting http://{publicip}:3000 to access the dashboard
"CAPROVER_ROOT_DOMAIN" = "roverapps.grid.tf"
}
}
}
output "wg_config" {
value = grid_network.net0.access_wg_config
}
output "ygg_ip" {
value = grid_deployment.d0.vms[0].ygg_ip
}
output "vm_ip" {
value = grid_deployment.d0.vms[0].ip
}
output "vm_public_ip" {
value = grid_deployment.d0.vms[0].computedip
}
```
```bash
cd freeflow_caprover/terraform/leader/
vim main.tf
```
- In the `provider` block, add your `mnemonics` and specify the grid network to deploy on.
- In the `resource` blocks, update the disk sizes, memory size, and number of cores to fit your needs, or leave them as is for testing.
- In the `PUBLIC_KEY` env var value, put your SSH public key.
- In the `CAPROVER_ROOT_DOMAIN` env var value, put your root domain. This is optional and you can add it later from the dashboard, but providing it now saves you an extra step and lets you access your dashboard using your domain name directly after the deployment.
- Save the file, and execute the following commands:
```bash
terraform init
terraform apply
```
- Wait until you see `Apply complete`, and note the VM public IP in the final output.
- Verify the status of the VM:
```bash
ssh root@{public_ip_address}
zinit list
zinit log caprover
```
You will see output like this:
```bash
root@caprover:~ # zinit list
sshd: Running
containerd: Running
dockerd: Running
sshd-init: Success
caprover: Running
root@caprover:~ # zinit log caprover
[+] caprover: CapRover Root Domain: newapps.grid.tf
[+] caprover: {
[+] caprover: "namespace": "captain",
[+] caprover: "customDomain": "newapps.grid.tf"
[+] caprover: }
[+] caprover: CapRover will be available at http://captain.newapps.grid.tf after installation
[-] caprover: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
[-] caprover: See 'docker run --help'.
[-] caprover: Unable to find image 'caprover/caprover:latest' locally
[-] caprover: latest: Pulling from caprover/caprover
[-] caprover: af4c2580c6c3: Pulling fs layer
[-] caprover: 4ea40d27a2cf: Pulling fs layer
[-] caprover: 523d612e9cd2: Pulling fs layer
[-] caprover: 8fee6a1847b0: Pulling fs layer
[-] caprover: 60cce3519052: Pulling fs layer
[-] caprover: 4bae1011637c: Pulling fs layer
[-] caprover: ecf48b6c1f43: Pulling fs layer
[-] caprover: 856f69196742: Pulling fs layer
[-] caprover: e86a512b6f8c: Pulling fs layer
[-] caprover: cecbd06d956f: Pulling fs layer
[-] caprover: cdd679ff24b0: Pulling fs layer
[-] caprover: d60abbe06609: Pulling fs layer
[-] caprover: 0ac0240c1a59: Pulling fs layer
[-] caprover: 52d300ad83da: Pulling fs layer
[-] caprover: 8fee6a1847b0: Waiting
[-] caprover: e86a512b6f8c: Waiting
[-] caprover: 60cce3519052: Waiting
[-] caprover: cecbd06d956f: Waiting
[-] caprover: cdd679ff24b0: Waiting
[-] caprover: 4bae1011637c: Waiting
[-] caprover: d60abbe06609: Waiting
[-] caprover: 0ac0240c1a59: Waiting
[-] caprover: 52d300ad83da: Waiting
[-] caprover: 856f69196742: Waiting
[-] caprover: ecf48b6c1f43: Waiting
[-] caprover: 523d612e9cd2: Verifying Checksum
[-] caprover: 523d612e9cd2: Download complete
[-] caprover: 4ea40d27a2cf: Verifying Checksum
[-] caprover: 4ea40d27a2cf: Download complete
[-] caprover: af4c2580c6c3: Verifying Checksum
[-] caprover: af4c2580c6c3: Download complete
[-] caprover: 4bae1011637c: Verifying Checksum
[-] caprover: 4bae1011637c: Download complete
[-] caprover: 8fee6a1847b0: Verifying Checksum
[-] caprover: 8fee6a1847b0: Download complete
[-] caprover: 856f69196742: Verifying Checksum
[-] caprover: 856f69196742: Download complete
[-] caprover: ecf48b6c1f43: Verifying Checksum
[-] caprover: ecf48b6c1f43: Download complete
[-] caprover: e86a512b6f8c: Verifying Checksum
[-] caprover: e86a512b6f8c: Download complete
[-] caprover: cdd679ff24b0: Verifying Checksum
[-] caprover: cdd679ff24b0: Download complete
[-] caprover: d60abbe06609: Verifying Checksum
[-] caprover: d60abbe06609: Download complete
[-] caprover: cecbd06d956f: Download complete
[-] caprover: 0ac0240c1a59: Verifying Checksum
[-] caprover: 0ac0240c1a59: Download complete
[-] caprover: 60cce3519052: Verifying Checksum
[-] caprover: 60cce3519052: Download complete
[-] caprover: af4c2580c6c3: Pull complete
[-] caprover: 52d300ad83da: Download complete
[-] caprover: 4ea40d27a2cf: Pull complete
[-] caprover: 523d612e9cd2: Pull complete
[-] caprover: 8fee6a1847b0: Pull complete
[-] caprover: 60cce3519052: Pull complete
[-] caprover: 4bae1011637c: Pull complete
[-] caprover: ecf48b6c1f43: Pull complete
[-] caprover: 856f69196742: Pull complete
[-] caprover: e86a512b6f8c: Pull complete
[-] caprover: cecbd06d956f: Pull complete
[-] caprover: cdd679ff24b0: Pull complete
[-] caprover: d60abbe06609: Pull complete
[-] caprover: 0ac0240c1a59: Pull complete
[-] caprover: 52d300ad83da: Pull complete
[-] caprover: Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
[-] caprover: Status: Downloaded newer image for caprover/caprover:latest
[+] caprover: Captain Starting ...
[+] caprover: Overriding skipVerifyingDomains from /captain/data/config-override.json
[+] caprover: Installing Captain Service ...
[+] caprover:
[+] caprover: Installation of CapRover is starting...
[+] caprover: For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
[+] caprover:
[+] caprover:
[+] caprover:
[+] caprover:
[+] caprover:
[+] caprover: >>> Checking System Compatibility <<<
[+] caprover: Docker Version passed.
[+] caprover: Ubuntu detected.
[+] caprover: X86 CPU detected.
[+] caprover: Total RAM 8339 MB
[+] caprover: Pulling: nginx:1
[+] caprover: Pulling: caprover/caprover-placeholder-app:latest
[+] caprover: Pulling: caprover/certbot-sleeping:v1.6.0
[+] caprover: October 12th 2021, 12:49:26.301 pm Fresh installation!
[+] caprover: October 12th 2021, 12:49:26.309 pm Starting swarm at 185.206.122.32:2377
[+] caprover: Swarm started: z06ymksbcoren9cl7g2xzw9so
[+] caprover: *** CapRover is initializing ***
[+] caprover: Please wait at least 60 seconds before trying to access CapRover.
[+] caprover: ===================================
[+] caprover: **** Installation is done! *****
[+] caprover: CapRover is available at http://captain.newapps.grid.tf
[+] caprover: Default password is: captain42
[+] caprover: ===================================
```
Wait until you see `**** Installation is done! *****` in the caprover service log.
#### Step 2: Connect Root Domain
After the container runs, you will now need to connect your CapRover instance to a Root Domain.
Let's say you own example.com. You can set `*.something.example.com` as an A record in your DNS settings to point to the IP address of the server where you installed CapRover. To do this, go to the DNS settings on your domain provider's website, and set a wildcard A record entry.
For example: Type: A, Name (or host): `*.something.example.com`, IP (or Points to): `110.122.131.141`, where this is the IP address of your CapRover machine.
```yaml
TYPE: A record
HOST: *.something.example.com
POINTS TO: (IP Address of your server)
TTL: (doesn't really matter)
```
To confirm, go to https://mxtoolbox.com/DNSLookup.aspx and enter `somethingrandom.something.example.com` and check if the hostname resolves to the IP you set in your DNS.
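Alternatively, you can check the record from the command line; a quick sketch, replacing the hostname with your own wildcard subdomain:

```bash
dig +short somethingrandom.something.example.com
```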
##### Note
`somethingrandom` is needed because you set a wildcard entry in your DNS by setting `*.something.example.com` as your host, not `something.example.com`.
#### Step 3: CapRover Root Domain Configurations
Skip this step if you provided your root domain in the Terraform configuration file.
Once the CapRover is initialized, you can visit `http://[IP_OF_YOUR_SERVER]:3000` in your browser and login to CapRover using the default password `captain42`. You can change your password later.
In the UI, enter your root domain and press the Update Domain button.
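If you prefer the command line over the web UI, the CapRover CLI mentioned in the features above can perform the same initial setup; a sketch, assuming Node.js and npm are available on your local machine:

```bash
npm install -g caprover
caprover serversetup   # interactive: asks for the server IP, root domain, new password and email
```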
#### Step 4: Access the Captain Dashboard
Once you set your root domain as caprover.example.com, you will be redirected to captain.caprover.example.com.
Now CapRover is ready and running in a single node.
##### To allow cluster mode
- Enable HTTPS
- Go to the CapRover `Dashboard` tab, then in `CapRover Root Domain Configurations` press `Enable HTTPS`; you will then be asked to enter your email address.
- Docker Registry Configuration
- Go to the CapRover `Cluster` tab, then in the `Docker Registry Configuration` section, press `Self hosted Docker Registry` or add your `Remote Docker Registry`.
- Run the following command in the ssh session:
```bash
docker swarm join-token worker
```
It will output something like this:
```bash
docker swarm join --token SWMTKN-1-0892ds1ney7pa0hymi3qwph7why1d9r3z6bvwtin51r14hcz3t-cjsephnu4f2ezfpdd6svnnbq7 185.206.122.33:2377
```
- To add a worker node to this swarm, you need:
- Generated token `SWMTKN-1-0892ds1ney7pa0hymi3qwph7why1d9r3z6bvwtin51r14hcz3t-cjsephnu4f2ezfpdd6svnnbq7`
- Leader node public ip `185.206.122.33`
This information is required in the next section to run CapRover in cluster mode.
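If you later need only the raw token, for example to paste into the worker Terraform file below, you can print it on its own:

```bash
docker swarm join-token -q worker
```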
### B) Worker Node Deployment/setup:
We show how to deploy a worker node by providing an example worker Terraform file.
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
mnemonics = "<your-mnemonics>"
network = "dev" # or test to use testnet
}
resource "grid_network" "net2" {
nodes = [4]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
}
resource "grid_deployment" "d2" {
node = 4
network_name = grid_network.net2.name
ip_range = lookup(grid_network.net2.nodes_ip_range, 4, "")
disks {
name = "data2"
# will hold images, volumes etc. modify the size according to your needs
size = 20
description = "volume holding docker data"
}
vms {
name = "caprover"
flist = "https://hub.grid.tf/samehabouelsaad.3bot/abouelsaad-caprover-tf_10.0.1_v1.0.flist"
# modify the cores according to your needs
cpu = 2
publicip = true
# modify the memory according to your needs
memory = 2048
entrypoint = "/sbin/zinit init"
mounts {
disk_name = "data2"
mount_point = "/var/lib/docker"
}
env_vars = {
"PUBLIC_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576"
# SWM_NODE_MODE env var is required, should be "leader" or "worker"
# leader: will run caprover in leader mode; check the leader terraform file example.
# worker: will run sshd, containerd, dockerd as zinit services plus the caprover service in worker mode, which only joins the swarm cluster.
"SWM_NODE_MODE" = "worker"
# from the leader node (the one running caprover) run `docker swarm join-token worker`
# you must add the generated token to the SWMTKN env var and the leader public ip to the LEADER_PUBLIC_IP env var
"SWMTKN" = "SWMTKN-1-522cdsyhknmavpdok4wi86r1nihsnipioc9hzfw9dnsvaj5bed-8clrf4f2002f9wziabyxzz32d"
"LEADER_PUBLIC_IP" = "185.206.122.38"
}
}
}
output "wg_config" {
value = grid_network.net2.access_wg_config
}
output "ygg_ip" {
value = grid_deployment.d2.vms[0].ygg_ip
}
output "vm_ip" {
value = grid_deployment.d2.vms[0].ip
}
output "vm_public_ip" {
value = grid_deployment.d2.vms[0].computedip
}
```
```bash
cd freeflow_caprover/terraform/worker/
vim main.tf
```
- In the `provider` block, add your `mnemonics` and specify the grid network to deploy on.
- In the `resource` blocks, update the disk size, memory size, and number of cores to fit your needs, or leave them as is for testing.
- In the `PUBLIC_KEY` env var value put your ssh public key.
- In the `SWMTKN` env var value put the previously generated token.
- In the `LEADER_PUBLIC_IP` env var value put the leader node public ip.
- Save the file, and execute the following commands:
```bash
terraform init
terraform apply
```
- Wait until you see `Apply complete`, and note the VM public IP in the final output.
- Verify the status of the VM.
```bash
ssh root@{public_ip_address}
zinit list
zinit log caprover
```
You will see output like this:
```bash
root@caprover:~# zinit list
caprover: Success
dockerd: Running
containerd: Running
sshd: Running
sshd-init: Success
root@caprover:~# zinit log caprover
[-] caprover: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[+] caprover: This node joined a swarm as a worker.
```
This means that your worker node is now ready and has joined the cluster successfully.
You can also verify this from the CapRover dashboard in the `Cluster` tab. Check the `Nodes` section; you should be able to see the new worker node added there.
Now CapRover is ready in cluster mode (more than one server).
To run One-Click Apps, please follow this [tutorial](https://caprover.com/docs/one-click-apps.html).
## Implementation Details
- We use Ubuntu 18.04 to minimize production issues, as CapRover is tested on Ubuntu 18.04 and Docker 19.03.
- In a standard installation, CapRover has to be installed on a machine with a public IP address.
- Services are managed by the `Zinit` service manager, which brings these processes back up in case of any failure:
- sshd-init: service used to add the user's public key to the VM's SSH authorized keys (it runs once).
- containerd: service to maintain the container runtime needed by Docker.
- caprover: service to run the caprover container (it runs once).
- dockerd: service to run the Docker daemon.
- sshd: service to maintain the SSH server daemon.
- We adjust the OOM priority of the Docker daemon so that it is less likely to be killed than other processes on the system:
```bash
echo -500 >/proc/self/oom_score_adj
```

View File

@@ -1,210 +0,0 @@
<h1> Kubernetes Cluster </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
- [Grid Kubernetes Resource](#grid-kubernetes-resource)
- [Kubernetes Outputs](#kubernetes-outputs)
- [More Info](#more-info)
- [Demo Video](#demo-video)
***
## Introduction
While Kubernetes deployments can be quite difficult and can require lots of experience, we provide here a very simple way to provision Kubernetes cluster on the TFGrid.
## Example
An example for deploying a kubernetes cluster could be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/k8s/main.tf)
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
resource "grid_scheduler" "sched" {
requests {
name = "master_node"
cru = 2
sru = 512
mru = 2048
distinct = true
public_ips_count = 1
}
requests {
name = "worker1_node"
cru = 2
sru = 512
mru = 2048
distinct = true
}
requests {
name = "worker2_node"
cru = 2
sru = 512
mru = 2048
distinct = true
}
requests {
name = "worker3_node"
cru = 2
sru = 512
mru = 2048
distinct = true
}
}
locals {
solution_type = "Kubernetes"
name = "myk8s"
}
resource "grid_network" "net1" {
solution_type = local.solution_type
name = local.name
nodes = distinct(values(grid_scheduler.sched.nodes))
ip_range = "10.1.0.0/16"
description = "newer network"
add_wg_access = true
}
resource "grid_kubernetes" "k8s1" {
solution_type = local.solution_type
name = local.name
network_name = grid_network.net1.name
token = "12345678910122"
ssh_key = "PUT YOUR SSH KEY HERE"
master {
disk_size = 2
node = grid_scheduler.sched.nodes["master_node"]
name = "mr"
cpu = 2
publicip = true
memory = 2048
}
workers {
disk_size = 2
node = grid_scheduler.sched.nodes["worker1_node"]
name = "w0"
cpu = 2
memory = 2048
}
workers {
disk_size = 2
node = grid_scheduler.sched.nodes["worker2_node"]
name = "w2"
cpu = 2
memory = 2048
}
workers {
disk_size = 2
node = grid_scheduler.sched.nodes["worker3_node"]
name = "w3"
cpu = 2
memory = 2048
}
}
output "computed_master_public_ip" {
value = grid_kubernetes.k8s1.master[0].computedip
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
```
Everything looks similar to our first example: the global terraform section, the provider section and the network section.
## Grid Kubernetes Resource
```terraform
resource "grid_kubernetes" "k8s1" {
solution_type = local.solution_type
name = local.name
network_name = grid_network.net1.name
token = "12345678910122"
ssh_key = "PUT YOUR SSH KEY HERE"
master {
disk_size = 2
node = grid_scheduler.sched.nodes["master_node"]
name = "mr"
cpu = 2
publicip = true
memory = 2048
}
workers {
disk_size = 2
node = grid_scheduler.sched.nodes["worker1_node"]
name = "w0"
cpu = 2
memory = 2048
}
workers {
disk_size = 2
node = grid_scheduler.sched.nodes["worker2_node"]
name = "w2"
cpu = 2
memory = 2048
}
workers {
disk_size = 2
node = grid_scheduler.sched.nodes["worker3_node"]
name = "w3"
cpu = 2
memory = 2048
}
}
```
It requires
- Network name that would contain the cluster
- A cluster token to work as a key for other nodes to join the cluster
- SSH key to access the cluster VMs.
Then, we describe the master and worker nodes in terms of:
- name within the deployment
- disk size
- node to deploy it on
- cpu
- memory
- whether or not this node needs a public ip
### Kubernetes Outputs
```terraform
output "master_public_ip" {
value = grid_kubernetes.k8s1.master[0].computedip
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
```
We will be mainly interested in the master node public IP (`computedip`) and the WireGuard configuration.
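After `terraform apply` completes, you can read these outputs back and do a first sanity check on the cluster; a sketch, assuming the output names from the full example above and that `kubectl` is available on the master node:

```bash
terraform output wg_config
MASTER_IP=$(terraform output -raw computed_master_public_ip)
# the public IP output may include a network mask (e.g. x.x.x.x/24), stripped here
ssh root@"${MASTER_IP%/*}" kubectl get nodes
```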
## More Info
A complete list of k8s resource parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/kubernetes.md).
## Demo Video
Here is a video showing how to deploy k8s with Terraform.
<div class="aspect-w-16 aspect-h-9">
<iframe src="https://player.vimeo.com/video/654552300?h=c61feb579b" width="640" height="564" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe>
</div>

View File

@@ -1,7 +0,0 @@
<h1> Demo Video Showing Deploying k8s with Terraform </h1>
<div class="aspect-w-16 aspect-h-9">
<iframe src="https://player.vimeo.com/video/654552300?h=c61feb579b" width="640" height="564" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe>
</div>

View File

@@ -1,20 +0,0 @@
<h1> Quantum Safe Filesystem (QSFS) </h1>
<h2> Table of Contents </h2>
- [QSFS on Micro VM](./terraform_qsfs_on_microvm.md)
- [QSFS on Full VM](./terraform_qsfs_on_full_vm.md)
***
## Introduction
Quantum Storage is a FUSE filesystem that uses forward error correction (Reed-Solomon codes) to make sure data (files and metadata) is stored in multiple remote locations in such a way that we can afford to lose a number of locations without losing the data.
The aim is to support unlimited local storage with remote backends for offload and backup which cannot be broken, even by a quantum computer.
## QSFS Workload Parameters and Documentation
A complete list of QSFS workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-qsfs).
The [quantum-storage](https://github.com/threefoldtech/quantum-storage) repo contains a more thorough description of QSFS operation.

View File

@@ -1,211 +0,0 @@
<h1> QSFS on Full VM </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Create the Terraform Files](#create-the-terraform-files)
- [Full Example](#full-example)
- [Mounting the QSFS Disk](#mounting-the-qsfs-disk)
- [Debugging](#debugging)
***
## Introduction
This short ThreeFold Guide will teach you how to deploy a Full VM with QSFS disk on the TFGrid using Terraform. For this guide, we will be deploying Ubuntu 22.04 based cloud-init image.
The steps are very simple. You first need to create the Terraform files, and then deploy the full VM and the QSFS workloads. After the deployment is done, you will need to SSH into the full VM and manually mount the QSFS disk.
The main goal of this guide is to show you all the necessary steps to deploy a Full VM with a QSFS disk on the TFGrid using Terraform.
## Prerequisites
- [Install Terraform](../terraform_install.md)
You need to download and properly install Terraform. Simply follow the documentation for your operating system (Linux, macOS or Windows).
## Create the Terraform Files
Deploying a full VM is a bit different from deploying a micro VM. Let's first take a look at these differences:
- Full VMs use `cloud-init` images and, unlike micro VMs, need at least one disk attached to the VM to copy the image to; it serves as the root filesystem for the VM.
- The QSFS disk is based on `virtiofs`, and you can't use a QSFS disk as the first mount in a full VM; instead, you need a regular disk.
- Any extra disks/mounts will be available on the VM, but unlike mounts on micro VMs, extra disks won't be mounted automatically. You will need to mount them manually after the deployment.
Let's modify the QSFS-on-micro-VM [example](./terraform_qsfs_on_microvm.md) to deploy QSFS on a full VM this time:
- Inside the `grid_deployment` resource we will need to add a disk for the vm root fs.
```terraform
disks {
name = "roof-fs"
size = 10
description = "root fs"
}
```
- We also need to add an extra mount inside the `grid_deployment` resource in the `vms` block. It must be the first mounts block in the VM:
```terraform
mounts {
disk_name = "rootfs"
mount_point = "/"
}
```
- We also need to specify the flist for our full VM. Inside the `grid_deployment` resource, in the `vms` block, change the flist field to use this image:
- https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist
## Full Example
The full example would be like this:
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
locals {
metas = ["meta1", "meta2", "meta3", "meta4"]
datas = ["data1", "data2", "data3", "data4"]
}
resource "grid_network" "net1" {
nodes = [11]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
}
resource "grid_deployment" "d1" {
node = 11
dynamic "zdbs" {
for_each = local.metas
content {
name = zdbs.value
description = "description"
password = "password"
size = 10
mode = "user"
}
}
dynamic "zdbs" {
for_each = local.datas
content {
name = zdbs.value
description = "description"
password = "password"
size = 10
mode = "seq"
}
}
}
resource "grid_deployment" "qsfs" {
node = 11
network_name = grid_network.net1.name
disks {
name = "rootfs"
size = 10
description = "rootfs"
}
qsfs {
name = "qsfs"
description = "description6"
cache = 10240 # 10 GB
minimal_shards = 2
expected_shards = 4
redundant_groups = 0
redundant_nodes = 0
max_zdb_data_dir_size = 512 # 512 MB
encryption_algorithm = "AES"
encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af"
compression_algorithm = "snappy"
metadata {
type = "zdb"
prefix = "hamada"
encryption_algorithm = "AES"
encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af"
dynamic "backends" {
for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode != "seq"]
content {
address = format("[%s]:%d", backends.value.ips[1], backends.value.port)
namespace = backends.value.namespace
password = backends.value.password
}
}
}
groups {
dynamic "backends" {
for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode == "seq"]
content {
address = format("[%s]:%d", backends.value.ips[1], backends.value.port)
namespace = backends.value.namespace
password = backends.value.password
}
}
}
}
vms {
name = "vm"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = 2
memory = 1024
entrypoint = "/sbin/zinit init"
planetary = true
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576"
}
mounts {
disk_name = "rootfs"
mount_point = "/"
}
mounts {
disk_name = "qsfs"
mount_point = "/qsfs"
}
}
}
output "metrics" {
value = grid_deployment.qsfs.qsfs[0].metrics_endpoint
}
output "ygg_ip" {
value = grid_deployment.qsfs.vms[0].ygg_ip
}
```
**Note**: the QSFS workload name in `grid_deployment.qsfs` should be the same as the QSFS disk name in `grid_deployment.vms.mounts.disk_name`.
## Mounting the QSFS Disk
After applying this terraform file, you will need to manually mount the disk.
SSH into the VM and type `mount -t virtiofs <QSFS DISK NAME> /qsfs`:
```bash
mkdir /qsfs
mount -t virtiofs qsfs /qsfs
```
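If you want the QSFS mount to persist across reboots inside the VM, one option is an fstab entry; a sketch, assuming the same disk name `qsfs` as above:

```bash
# Add a virtiofs entry to /etc/fstab and verify that it mounts cleanly
echo 'qsfs /qsfs virtiofs defaults 0 0' >> /etc/fstab
mount -a
```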
## Debugging
During deployment, you might encounter the following error when using the mount command:
`mount: /qsfs: wrong fs type, bad option, bad superblock on qsfs3, missing codepage or helper program, or other error.`
- **Explanation**: Most likely you typed a wrong QSFS deployment/disk name that doesn't match the one from the QSFS deployment.
- **Solution**: Double-check your Terraform file, and make sure the QSFS deployment/disk name matches the one you are trying to mount on your VM.
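To verify the setup, you can confirm the mount inside the VM and query the QSFS metrics endpoint reported by Terraform; a sketch, assuming the `metrics` output defined above:

```bash
# Inside the VM: confirm the filesystem is mounted
findmnt /qsfs
# From your local machine, in the Terraform folder: query the QSFS metrics endpoint
curl "$(terraform output -raw metrics)"
```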

View File

@@ -1,348 +0,0 @@
<h1> QSFS on Micro VM with Terraform</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Find a 3Node](#find-a-3node)
- [Create the Terraform Files](#create-the-terraform-files)
- [Create the Files with the Provider](#create-the-files-with-the-provider)
- [Create the Files Manually](#create-the-files-manually)
- [Deploy the Micro VM with Terraform](#deploy-the-micro-vm-with-terraform)
- [SSH into the 3Node](#ssh-into-the-3node)
- [Questions and Feedback](#questions-and-feedback)
***
## Introduction
In this ThreeFold Guide, we will learn how to deploy a Quantum Safe File System (QSFS) deployment with Terraform. The main template for this example can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/qsfs/main.tf).
## Prerequisites
In this guide, we will be using Terraform to deploy a QSFS workload on a micro VM that runs on the TFGrid. Make sure to have the latest Terraform version.
- [Install Terraform](../terraform_install.md)
## Find a 3Node
We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address.
We show here how to find a suitable 3Node using the ThreeFold Explorer.
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
* For proper understanding, we give further information on some relevant columns:
* `ID` refers to the node ID
* `Free Public IPs` refers to available IPv4 public IP addresses
* `HRU` refers to HDD storage
* `SRU` refers to SSD storage
* `MRU` refers to RAM (memory)
* `CRU` refers to virtual cores (vcores)
* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
* `Free SRU (GB)`: 15
* `Free MRU (GB)`: 1
* `Total CRU (Cores)`: 1
* `Free Public IP`: 2
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
## Create the Terraform Files
We present two different methods to create the Terraform files. In the first method, we will create the Terraform files using the [TFGrid Terraform Provider](https://github.com/threefoldtech/terraform-provider-grid). In the second method, we will create the Terraform files manually. Feel free to choose the method that suits you best.
### Create the Files with the Provider
Creating the Terraform files is very straightforward. We want to clone the repository `terraform-provider-grid` locally and run some simple commands to properly set and start the deployment.
* Clone the repository `terraform-provider-grid`
* ```
git clone https://github.com/threefoldtech/terraform-provider-grid
```
* Go to the subdirectory containing the examples
* ```
cd terraform-provider-grid/examples/resources/qsfs
```
* Set your own mnemonics (replace `mnemonics words` with your own mnemonics)
* ```
export MNEMONICS="mnemonics words"
```
* Set the network (replace `network` by the desired network, e.g. `dev`, `qa`, `test` or `main`)
* ```
export NETWORK="network"
```
* Initialize the Terraform deployment
* ```
terraform init
```
* Apply the Terraform deployment
* ```
terraform apply
```
* At any moment, you can destroy the deployment with the following line
* ```
terraform destroy
```
When using this method, you might need to change some parameters within the `main.tf` depending on your specific deployment.
### Create the Files Manually
For this method, we use two files to deploy with Terraform. The first file contains the environment variables (**credentials.auto.tfvars**) and the second file contains the parameters to deploy our workloads (**main.tf**). To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file simply references these variables (e.g. `var.size` for the disk size), so you only need to change the file **credentials.auto.tfvars**.
* Open the terminal and go to the home directory (optional)
* ```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-qsfs-microvm`:
* ```
mkdir -p terraform && cd $_
```
* ```
mkdir deployment-qsfs-microvm && cd $_
```
* Create the `main.tf` file:
* ```
nano main.tf
```
* Copy the `main.tf` content and save the file.
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
# Variables
variable "mnemonics" {
type = string
}
variable "SSH_KEY" {
type = string
}
variable "network" {
type = string
}
variable "tfnodeid1" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
variable "minimal_shards" {
type = string
}
variable "expected_shards" {
type = string
}
provider "grid" {
mnemonics = var.mnemonics
network = var.network
}
locals {
metas = ["meta1", "meta2", "meta3", "meta4"]
datas = ["data1", "data2", "data3", "data4"]
}
resource "grid_network" "net1" {
nodes = [var.tfnodeid1]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
}
resource "grid_deployment" "d1" {
node = var.tfnodeid1
dynamic "zdbs" {
for_each = local.metas
content {
name = zdbs.value
description = "description"
password = "password"
size = var.size
mode = "user"
}
}
dynamic "zdbs" {
for_each = local.datas
content {
name = zdbs.value
description = "description"
password = "password"
size = var.size
mode = "seq"
}
}
}
resource "grid_deployment" "qsfs" {
node = var.tfnodeid1
network_name = grid_network.net1.name
qsfs {
name = "qsfs"
description = "description6"
cache = 10240 # 10 GB
minimal_shards = var.minimal_shards
expected_shards = var.expected_shards
redundant_groups = 0
redundant_nodes = 0
max_zdb_data_dir_size = 512 # 512 MB
encryption_algorithm = "AES"
encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af"
compression_algorithm = "snappy"
metadata {
type = "zdb"
prefix = "hamada"
encryption_algorithm = "AES"
encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af"
dynamic "backends" {
for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode != "seq"]
content {
address = format("[%s]:%d", backends.value.ips[1], backends.value.port)
namespace = backends.value.namespace
password = backends.value.password
}
}
}
groups {
dynamic "backends" {
for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode == "seq"]
content {
address = format("[%s]:%d", backends.value.ips[1], backends.value.port)
namespace = backends.value.namespace
password = backends.value.password
}
}
}
}
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = var.cpu
memory = var.memory
entrypoint = "/sbin/zinit init"
planetary = true
env_vars = {
SSH_KEY = var.SSH_KEY
}
mounts {
disk_name = "qsfs"
mount_point = "/qsfs"
}
}
}
output "metrics" {
value = grid_deployment.qsfs.qsfs[0].metrics_endpoint
}
output "ygg_ip" {
value = grid_deployment.qsfs.vms[0].ygg_ip
}
```
Note that we named the VM as **vm1**.
* Create the `credentials.auto.tfvars` file:
* ```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
* ```terraform
# Network
network = "main"
# Credentials
mnemonics = "..."
SSH_KEY = "..."
# Node Parameters
tfnodeid1 = "..."
size = "15"
cpu = "1"
memory = "512"
# QSFS Parameters
minimal_shards = "2"
expected_shards = "4"
```
Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node you want to deploy on. Simply replace the three dots by the content. If you want to deploy on the Test net, you can replace **main** by **test**.
Set the parameters for your VMs as you wish. For this example, we use the minimum parameters.
For the QSFS Parameters section, you can decide over how many backends your data will be sharded, and the minimum number of backends needed to recover the whole of your data. For example, a configuration with a minimum of 16 and an expected 20 shards will disperse your data over 20 backends, and at any time the deployment only needs 16 of them to recover the whole of your data (a raw storage overhead of roughly 20/16 = 1.25x). This gives resilience and redundancy to your storage. A minimum of 2 and an expected 4 shards is given here for the main template.
## Deploy the Micro VM with Terraform
We now deploy the QSFS deployment with Terraform. Make sure that you are in the correct folder `terraform/deployment-qsfs-microvm` containing the main and variables files.
* Initialize Terraform by writing the following in the terminal:
* ```
terraform init
```
* Apply the Terraform deployment:
* ```
terraform apply
```
* Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment.
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
* ```
terraform show
```
## SSH into the 3Node
You can now SSH into the 3Node with Planetary Network.
To SSH with Planetary Network, write the following:
```
ssh root@planetary_IP
```
Note that the IP address should be the value of the parameter **ygg_ip** from the Terraform Outputs.
You now have an SSH connection access to the VM over Planetary Network.
## Questions and Feedback
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

View File

@@ -1,13 +0,0 @@
<h1> Terraform Resources </h1>
<h2> Table of Contents </h2>
- [Using Scheduler](./terraform_scheduler.md)
- [Virtual Machine](./terraform_vm.html)
- [Web Gateway](./terraform_vm_gateway.html)
- [Kubernetes Cluster](./terraform_k8s.html)
- [ZDB](./terraform_zdb.html)
- [Quantum Safe Filesystem](./terraform_qsfs.md)
- [QSFS on Micro VM](./terraform_qsfs_on_microvm.md)
- [QSFS on Full VM](./terraform_qsfs_on_full_vm.md)
- [CapRover](./terraform_caprover.html)

View File

@@ -1,153 +0,0 @@
<h1> Scheduler Resource </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [How the Scheduler Works](#how-the-scheduler-works)
- [Quick Example](#quick-example)
***
## Introduction
Using the TFGrid scheduler enables users to automatically get the nodes that match their criteria. We present here some basic information on this resource.
## How the Scheduler Works
To better understand the scheduler, we summarize the main process:
- First, if `farm_id` is specified, the scheduler checks whether this farm has the Farmerbot enabled.
- If so, it will try to find a suitable node using the Farmerbot.
- If the Farmerbot is not enabled, it will use the grid proxy to find a suitable node.
## Quick Example
Let's take a look at the following example:
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
version = "1.8.1-dev"
}
}
}
provider "grid" {
}
locals {
name = "testvm"
}
resource "grid_scheduler" "sched" {
requests {
farm_id = 53
name = "node1"
cru = 3
sru = 1024
mru = 2048
node_exclude = [33] # exclude node 33 from your search
public_ips_count = 0 # this deployment needs 0 public ips
public_config = false # this node does not need to have public config
}
}
resource "grid_network" "net1" {
name = local.name
nodes = [grid_scheduler.sched.nodes["node1"]]
ip_range = "10.1.0.0/16"
description = "newer network"
}
resource "grid_deployment" "d1" {
name = local.name
node = grid_scheduler.sched.nodes["node1"]
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 2
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = file("~/.ssh/id_rsa.pub")
}
planetary = true
}
vms {
name = "anothervm"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = file("~/.ssh/id_rsa.pub")
}
planetary = true
}
}
output "vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "vm1_ygg_ip" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "vm2_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "vm2_ygg_ip" {
value = grid_deployment.d1.vms[1].ygg_ip
}
```
From the example above, we take a closer look at the following section:
```
resource "grid_scheduler" "sched" {
requests {
name = "node1"
cru = 3
sru = 1024
mru = 2048
node_exclude = [33] # exclude node 33 from your search
public_ips_count = 0 # this deployment needs 0 public ips
public_config = false # this node does not need to have public config
}
}
```
In this case, the user specifies the requirements that the selected node must fulfill for the deployment.
Later on, the user can use the result of the scheduler, which contains the selected `nodes`, in the deployments:
```
resource "grid_network" "net1" {
name = local.name
nodes = [grid_scheduler.sched.nodes["node1"]]
...
}
```
and
```
resource "grid_deployment" "d1" {
name = local.name
node = grid_scheduler.sched.nodes["node1"]
network_name = grid_network.net1.name
vms {
name = "vm1"
...
}
...
}
```
View File
@@ -1,282 +0,0 @@
<h1> VM Deployment </h1>
<h2>Table of Contents </h2>
- [Introduction](#introduction)
- [Template](#template)
- [Using scheduler](#using-scheduler)
- [Using Grid Explorer](#using-grid-explorer)
- [Describing the overlay network for the project](#describing-the-overlay-network-for-the-project)
- [Describing the deployment](#describing-the-deployment)
- [Which flists to use](#which-flists-to-use)
- [Remark: Multiple VMs](#remark-multiple-vms)
- [Reference](#reference)
***
## Introduction
The following provides the basic information to deploy a VM with Terraform on the TFGrid.
## Template
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
version = "1.8.1-dev"
}
}
}
provider "grid" {
mnemonics = "FROM THE CREATE TWIN STEP"
network = "dev" # or test to use testnet
}
locals {
name = "testvm"
}
resource "grid_scheduler" "sched" {
requests {
name = "node1"
cru = 3
sru = 1024
mru = 2048
    node_exclude = [33] # exclude node 33 from your search
public_ips_count = 0 # this deployment needs 0 public ips
public_config = false # this node does not need to have public config
}
}
resource "grid_network" "net1" {
name = local.name
nodes = [grid_scheduler.sched.nodes["node1"]]
ip_range = "10.1.0.0/16"
description = "newer network"
}
resource "grid_deployment" "d1" {
name = local.name
node = grid_scheduler.sched.nodes["node1"]
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 2
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = file("~/.ssh/id_rsa.pub")
}
planetary = true
}
vms {
name = "anothervm"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = file("~/.ssh/id_rsa.pub")
}
planetary = true
}
}
output "vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "vm1_ygg_ip" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "vm2_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "vm2_ygg_ip" {
value = grid_deployment.d1.vms[1].ygg_ip
}
```
## Using scheduler
- If the user decides to use the [scheduler](terraform_scheduler.md) to find a node, the deployment simply references the node returned by the scheduler, as in the example above
## Using Grid Explorer
- If not, the user can still specify the node directly, using the grid explorer to find a node that matches the requirements, as sketched below
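A minimal sketch of the direct approach, assuming node `2` was picked manually with the Node Finder (the ID is purely illustrative):
```terraform
# Hypothetical: the node ID is hard-coded instead of coming from grid_scheduler.
resource "grid_network" "net1" {
  name        = "network"
  nodes       = [2]
  ip_range    = "10.1.0.0/16"
  description = "manually planned network"
}

resource "grid_deployment" "d1" {
  name         = "testvm"
  node         = 2
  network_name = grid_network.net1.name

  vms {
    name       = "vm1"
    flist      = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
    cpu        = 2
    memory     = 1024
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = "PUT YOUR SSH KEY HERE"
    }
    planetary = true
  }
}
```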
## Describing the overlay network for the project
```terraform
resource "grid_network" "net1" {
nodes = [grid_scheduler.sched.nodes["node1"]]
ip_range = "10.1.0.0/16"
name = "network"
description = "some network"
add_wg_access = true
}
```
We tell Terraform that we will have a network spanning one node (the node ID returned from the scheduler), using the IP range `10.1.0.0/16`, and that WireGuard access should be added for this network.
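Since `add_wg_access` is enabled, the generated WireGuard configuration can be exported and loaded into a local WireGuard interface. A short sketch of the corresponding output (the same attribute appears in the reference example at the end of this page):
```terraform
output "wg_config" {
  value = grid_network.net1.access_wg_config
}
```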
## Describing the deployment
```terraform
resource "grid_deployment" "d1" {
name = local.name
node = grid_scheduler.sched.nodes["node1"]
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 2
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = file("~/.ssh/id_rsa.pub")
}
planetary = true
}
vms {
name = "anothervm"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = file("~/.ssh/id_rsa.pub")
}
planetary = true
}
}
```
It's a bit long, so let's dissect it piece by piece.
```terraform
node = grid_scheduler.sched.nodes["node1"]
network_name = grid_network.net1.name
ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "")
```
- `node = grid_scheduler.sched.nodes["node1"]` means this deployment will happen on the node returned from the scheduler. Alternatively, the user can specify the node directly, e.g. `node = 2`; in that case the choice of the node is completely up to the user, who then needs to do the capacity planning. Check the [Node Finder](../../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria.
- `network_name` defines which network to deploy our project on; here we choose the `name` of network `net1`
- `ip_range` is where we [lookup](https://www.terraform.io/docs/language/functions/lookup.html) the IP range of node `2`, falling back to `""` if the key is missing
> Advanced note: direct map access fails during planning if the key doesn't exist, which happens in cases like adding a node to the network together with a new deployment on that node. It is therefore replaced with `lookup` to provide a default empty value that passes the planning validation; the value is validated anyway inside the plugin.
## Which flists to use
see [list of flists](../../../developers/flist/grid3_supported_flists.md)
## Remark: Multiple VMs
In Terraform, you can define multiple items of a block list like the following:
```
listname {
}
listname {
}
```
So to add a VM
```terraform
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
publicip = true
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY ="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeq1MFCQOv3OCLO1HxdQl8V0CxAwt5AzdsNOL91wmHiG9ocgnq2yipv7qz+uCS0AdyOSzB9umyLcOZl2apnuyzSOd+2k6Cj9ipkgVx4nx4q5W1xt4MWIwKPfbfBA9gDMVpaGYpT6ZEv2ykFPnjG0obXzIjAaOsRthawuEF8bPZku1yi83SDtpU7I0pLOl3oifuwPpXTAVkK6GabSfbCJQWBDSYXXM20eRcAhIMmt79zo78FNItHmWpfPxPTWlYW02f7vVxTN/LUeRFoaNXXY+cuPxmcmXp912kW0vhK9IvWXqGAEuSycUOwync/yj+8f7dRU7upFGqd6bXUh67iMl7 ahmed@ahmedheaven"
}
}
```
- We give it a name within our deployment: `vm1`
- `flist` is used to define the flist to run within the VM. Check the [list of flists](../../../developers/flist/grid3_supported_flists.md)
- `cpu` and `memory` are used to define the number of virtual cores and the memory (in MB)
- `publicip` is used to define whether the VM requires a public IP or not
- `entrypoint` is used to define the entrypoint, which in most cases is `/sbin/zinit init`, but for flists based on full VMs it can be specific to each flist
- `env_vars` are used to define the environment variables; in this example we define `SSH_KEY` to authorize SSH access to the machine
Here we say we will have this deployment on the chosen node, using the overlay network defined before (`grid_network.net1.name`) and the IP range allocated to that specific node.
The file describes only the desired state: a deployment of two VMs with their specifications in terms of CPU and memory, and some environment variables (e.g. an SSH key to SSH into the machines).
## Reference
A complete list of VM workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vms).
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
resource "grid_network" "net1" {
nodes = [8]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
node = 8
network_name = grid_network.net1.name
ip_range = lookup(grid_network.net1.nodes_ip_range, 8, "")
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 2
publicip = true
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
}
planetary = true
}
vms {
name = "anothervm"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
}
}
}
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node1_zmachine2_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "public_ip" {
value = grid_deployment.d1.vms[0].computedip
}
output "ygg_ip" {
value = grid_deployment.d1.vms[0].ygg_ip
}
```
View File
@@ -1,172 +0,0 @@
<h1> Terraform Web Gateway With VM </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Expose with Prefix](#expose-with-prefix)
- [Expose with Full Domain](#expose-with-full-domain)
- [Using Gateway Name on Private Networks (WireGuard)](#using-gateway-name-on-private-networks-wireguard)
***
## Introduction
In this section, we provide the basic information for a VM web gateway using Terraform on the TFGrid.
## Expose with Prefix
A complete list of gateway name workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/name_proxy.md).
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
# this data source is used to break circular dependency in cases similar to the following:
# vm: needs to know the domain in its init script
# gateway_name: needs the ip of the vm to use as backend.
# - the fqdn can be computed from grid_gateway_domain for the vm
# - the backend can reference the vm ip directly
data "grid_gateway_domain" "domain" {
node = 7
name = "ashraf"
}
resource "grid_network" "net1" {
nodes = [8]
ip_range = "10.1.0.0/24"
name = "network"
description = "newer network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
node = 8
network_name = grid_network.net1.name
ip_range = lookup(grid_network.net1.nodes_ip_range, 8, "")
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/strm-helloworld-http-latest.flist"
cpu = 2
publicip = true
memory = 1024
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP ashraf@thinkpad"
}
planetary = true
}
}
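# The name proxy below forwards traffic arriving at the generated domain to the
# public IPv4 of the VM above; split() drops the /xx mask from computedip first.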
resource "grid_name_proxy" "p1" {
node = 7
name = "ashraf"
backends = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])]
tls_passthrough = false
}
output "fqdn" {
value = data.grid_gateway_domain.domain.fqdn
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "public_ip" {
value = split("/",grid_deployment.d1.vms[0].computedip)[0]
}
output "ygg_ip" {
value = grid_deployment.d1.vms[0].ygg_ip
}
```
Please note that to use `grid_name_proxy` you should choose a gateway node that has a public config with a domain set, like node 7 in this example:
![ ](./img/graphql_publicconf.png)
Here:
- we created a grid domain resource `ashraf` to be deployed on gateway node `7`, ending up with the domain `ashraf.ghent01.devnet.grid.tf`
- we created a proxy for the gateway to send the traffic coming to `ashraf.ghent01.devnet.grid.tf` to the VM as a backend; we set `tls_passthrough = false` to let the gateway terminate the traffic. If you replace it with `true`, your backend service needs to be able to do the TLS termination itself
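The comment at the top of the example also shows why the domain is a data source: the VM itself can consume the computed FQDN, for instance through an environment variable. A sketch of that idea, assuming the application inside the VM reads a hypothetical `DOMAIN` variable:
```terraform
vms {
  name     = "vm1"
  flist    = "https://hub.grid.tf/tf-official-apps/strm-helloworld-http-latest.flist"
  cpu      = 2
  publicip = true
  memory   = 1024
  env_vars = {
    SSH_KEY = "PUT YOUR SSH KEY HERE"
    # Hypothetical variable so the app knows the domain it is served on.
    DOMAIN  = data.grid_gateway_domain.domain.fqdn
  }
  planetary = true
}
```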
## Expose with Full Domain
A complete list of gateway fqdn workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/fqdn_proxy.md).
This is similar to the example above; the only difference is that you need to create an `A record` at your DNS provider pointing `remote.omar.grid.tf` to the IPv4 address of gateway node `7`.
```
resource "grid_fqdn_proxy" "p1" {
node = 7
name = "workloadname"
fqdn = "remote.omar.grid.tf"
backends = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])]
tls_passthrough = true
}
output "fqdn" {
value = grid_fqdn_proxy.p1.fqdn
}
```
## Using Gateway Name on Private Networks (WireGuard)
It is possible to create a VM with only a private IP (WireGuard) and use it as a backend for a gateway contract. This is done as follows:
- Create a gateway domain data source. This data source will construct the full domain so we can use it afterwards:
```
data "grid_gateway_domain" "domain" {
node = grid_scheduler.sched.nodes["node1"]
name = "examp123456"
}
```
- Create a network resource:
```
resource "grid_network" "net1" {
nodes = [grid_scheduler.sched.nodes["node1"]]
    ip_range     = "10.1.0.0/16"
    name         = "mynet"
description = "newer network"
}
```
- Create a vm to host your service
```
resource "grid_deployment" "d1" {
  name         = "vm1"
node = grid_scheduler.sched.nodes["node1"]
network_name = grid_network.net1.name
vms {
...
}
}
```
- Create a `grid_name_proxy` resource using the network created above and the WireGuard IP of the VM that hosts the service. Also consider changing the port to the one your service actually listens on:
```
resource "grid_name_proxy" "p1" {
node = grid_scheduler.sched.nodes["node1"]
name = "examp123456"
backends = [format("http://%s:9000", grid_deployment.d1.vms[0].ip)]
network = grid_network.net1.name
tls_passthrough = false
}
```
- To know the full domain created using the data source above, you can show it via an output:
```
output "fqdn" {
value = data.grid_gateway_domain.domain.fqdn
}
```
- Now visit the domain. You should be able to reach your service hosted on the VM.
View File
@@ -1,64 +0,0 @@
<h1> Deploying a ZDB with terraform </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We provide a basic template for ZDB deployment with Terraform on the TFGrid.
A brief description of zdb fields can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-zdbs).
A more thorough description of zdb operation can be found in its parent [repo](https://github.com/threefoldtech/0-db).
## Example
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
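# Two 0-db namespaces on node 4: "zdb1" uses user-defined keys (mode "user"),
# "zdb2" uses sequential keys (mode "seq").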
resource "grid_deployment" "d1" {
node = 4
zdbs{
name = "zdb1"
size = 10
description = "zdb1 description"
password = "zdbpasswd1"
mode = "user"
}
zdbs{
name = "zdb2"
size = 2
description = "zdb2 description"
password = "zdbpasswd2"
mode = "seq"
}
}
output "deployment_id" {
value = grid_deployment.d1.id
}
output "zdb1_endpoint" {
value = format("[%s]:%d", grid_deployment.d1.zdbs[0].ips[0], grid_deployment.d1.zdbs[0].port)
}
output "zdb1_namespace" {
value = grid_deployment.d1.zdbs[0].namespace
}
```
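The same output pattern works for the second namespace; a small sketch, assuming the deployment above:
```terraform
output "zdb2_endpoint" {
  value = format("[%s]:%d", grid_deployment.d1.zdbs[1].ips[0], grid_deployment.d1.zdbs[1].port)
}
output "zdb2_namespace" {
  value = grid_deployment.d1.zdbs[1].namespace
}
```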
View File
@@ -1,108 +0,0 @@
# Zlogs
Zlogs is a utility that allows you to stream VM logs to a remote location. You can find the full description [here](https://github.com/threefoldtech/zos/tree/main/docs/manual/zlogs)
## Using Zlogs
In Terraform, a VM has a `zlogs` field; this field should contain a list of target URLs to stream logs to.
Valid protocols are: `ws`, `wss`, and `redis`.
For example, to deploy two VMs named "vm1" and "vm2", with vm1 streaming logs to vm2, this is what main.tf looks like:
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
}
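# Overlay network spanning nodes 2 and 4; required by the two deployments below.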
resource "grid_network" "net1" {
nodes = [2, 4]
ip_range = "10.1.0.0/16"
name = "network"
description = "some network description"
add_wg_access = true
}
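# vm1: the VM whose zinit output is streamed out through the zlogs field below.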
resource "grid_deployment" "d1" {
node = 2
network_name = grid_network.net1.name
ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "")
vms {
name = "vm1" #streaming logs
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
entrypoint = "/sbin/zinit init"
cpu = 2
memory = 1024
env_vars = {
SSH_KEY = "PUT YOUR SSH KEY HERE"
}
zlogs = tolist([
format("ws://%s:5000", replace(grid_deployment.d2.vms[0].computedip, "//.*/", "")),
])
}
}
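# vm2: the receiving VM; publicip = true so that vm1 can reach it on port 5000.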
resource "grid_deployment" "d2" {
node = 4
network_name = grid_network.net1.name
ip_range = lookup(grid_network.net1.nodes_ip_range, 4, "")
vms {
name = "vm2" #receiving logs
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 2
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = "PUT YOUR SSH KEY HERE"
}
publicip = true
}
}
```
At this point, two VMs are deployed, and vm1 is ready to stream logs to vm2.
But what is missing here is that vm1 is not actually producing any logs, and vm2 is not listening for incoming messages.
### Creating a server
- First, we will create a server on vm2. This should be a WebSocket server listening on port 5000, as per our zlogs definition in main.tf (`ws://%s:5000`).
- A simple Python WebSocket server looks like this:
```
import asyncio
import websockets
import gzip
async def echo(websocket):
async for message in websocket:
data = gzip.decompress(message).decode('utf-8')
f = open("output.txt", "a")
f.write(data)
f.close()
async def main():
async with websockets.serve(echo, "0.0.0.0", 5000, ping_interval=None):
await asyncio.Future()
asyncio.run(main())
```
- Note that incoming messages are decompressed since zlogs compresses any messages using gzip.
- After a message is decompressed, it is then appended to `output.txt`.
### Streaming logs
- Zlogs streams anything written to stdout of the zinit process on a vm.
- So, simply running ```echo "to be streamed" 1>/proc/1/fd/1``` on vm1 should successfully stream this message to vm2, and we should be able to see it in `output.txt`.
- Also, if we want to stream a service's logs, a service definition file should be created in ```/etc/zinit/example.yaml``` on vm1 and should look like this:
```
exec: sh -c "echo 'to be streamed'"
log: stdout
```
View File
@@ -1,30 +0,0 @@
- [**Home**](@threefold:threefold_home)
- [**Manual 3 Home**](@manual3_home_new)
---
**Terraform**
- [Read Me First](@terraform_readme)
- [Install](@terraform_install)
- [Basics](@terraform_basics)
- [Tutorial](@terraform_get_started)
- [Delete](@terraform_delete)
---
**Resources**
- [using scheduler](@terraform_scheduler)
- [Virtual Machine](@terraform_vm)
- [Web Gateway](@terraform_vm_gateway)
- [Kubernetes cluster](@terraform_k8s)
- [ZDB](@terraform_zdb)
- [Quantum Filesystem](@terraform_qsfs)
- [CapRover](@terraform_caprover)
**Advanced**
- [Terraform Provider](@terraform_provider)
- [Terraform Provisioners](@terraform_provisioners)
- [Mounts](@terraform_mounts)
- [Capacity planning](@terraform_capacity_planning)
- [Updates](@terraform_updates)
View File
@@ -1,44 +0,0 @@
```terraform
resource "grid_network" "net" {
nodes = [2]
ip_range = "10.1.0.0/16"
name = "network"
description = "newer network"
}
resource "grid_deployment" "d1" {
node = 2
network_name = grid_network.net.name
ip_range = lookup(grid_network.net.nodes_ip_range, 2, "")
disks {
name = "mydisk1"
size = 2
description = "this is my disk description1"
}
disks {
name = "mydisk2"
    size = 2
description = "this is my disk2"
}
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 1
memory = 1024
entrypoint = "/sbin/zinit init"
mounts {
disk_name = "mydisk1"
mount_point = "/opt"
}
mounts {
disk_name = "mydisk2"
mount_point = "/test"
}
env_vars = {
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP"
}
}
}
```
View File
@@ -1,187 +0,0 @@
<h1>Terraform Basics</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Requirements](#requirements)
- [Basic Commands](#basic-commands)
- [Find A Node](#find-a-node)
- [Preparation](#preparation)
- [Main File Details](#main-file-details)
- [Initializing the Provider](#initializing-the-provider)
- [Export Environment Variables](#export-environment-variables)
- [Output Section](#output-section)
- [Start a Deployment](#start-a-deployment)
- [Delete a Deployment](#delete-a-deployment)
- [Available Flists](#available-flists)
- [Full and Micro Virtual Machines](#full-and-micro-virtual-machines)
- [Tips on Managing Resources](#tips-on-managing-resources)
- [Conclusion](#conclusion)
***
## Introduction
We cover some important aspects of Terraform deployments on the ThreeFold Grid.
For a complete guide on deploying a full VM on the TFGrid, read [this documentation](./terraform_full_vm.md).
## Requirements
Here are the requirements to use Terraform on the TFGrid:
- [Set your TFGrid account](../getstarted/tfgrid3_getstarted.md)
- [Install Terraform](../terraform/terraform_install.md)
## Basic Commands
Here are some very useful commands to use with Terraform:
- Initialize the repo `terraform init`
- Execute a terraform file `terraform apply`
- See the output `terraform output`
- This is useful when you want to output variables such as public ip, planetary network ip, wireguard configurations, etc.
- See the state `terraform show`
- Destroy `terraform destroy`
## Find A Node
There are two options when it comes to finding a node to deploy on. You can use the scheduler or search for a node with the Nodes Explorer.
- Use the [scheduler](resources/terraform_scheduler.md)
- Scheduler will help you find a node that matches your criteria
- Use the Nodes Explorer
  - You can check the [Node Finder](../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria.
- Make sure you choose a node which has enough capacity and is available (up and running).
## Preparation
We cover the basic preparations before explaining the main file.
- Make a directory for your project
- ```
mkdir myfirstproject
```
- Change directory
- ```
cd myfirstproject
```
- Create a main file and insert content
- ```
nano main.tf
```
## Main File Details
Here is a concrete example of a Terraform main file.
### Initializing the Provider
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
version = "1.8.1"
}
}
}
```
- You can always provide a version to choose a specific version of the provider, like `1.8.1-dev` to use version `1.8.1` on devnet
- If `version = "1.8.1"` is omitted, the provider will fetch the latest version, but for environments other than main you have to specify the version explicitly
- For devnet, qanet and testnet use version = `"<VERSION>-dev", "<VERSION>-qa" and "<VERSION>-rcx"` respectively
Providers can take different arguments, e.g. which identity to use when deploying and which Substrate network to create contracts on. This can be done in the provider section, as shown below:
```terraform
provider "grid" {
mnemonics = "FROM THE CREATE TWIN STEP"
network = "dev" # or test to use testnet
}
```
### Export Environment Variables
When writing the main file, you can decide to leave some provider parameters empty. In this case, you can export their contents as environment variables.
* Export your mnemonics
* ```
export MNEMONICS="..."
```
* Export the network
* ```
export NETWORK="..."
```
For more info, consult the [Provider Manual](./advanced/terraform_provider.md).
### Output Section
The output section is useful to find information such as:
- the overlay wireguard network configurations
- the private IPs of the VMs
- the public IP of the VM `exposed under computedip`
The output section will look something like this:
```terraform
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node1_vm2_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "public_ip" {
value = grid_deployment.d1.vms[0].computedip
}
```
## Start a Deployment
To start a deployment, run the following command `terraform init && terraform apply`.
## Delete a Deployment
To delete a deployment, run the following command:
```
terraform destroy
```
## Available Flists
You can consult the [list of Flists](../../developers/flist/flist.md) to learn more about the available flists to use with a virtual machine.
## Full and Micro Virtual Machines
There are some key distinctions to take into account when it comes to deploying full or micro VMs on the TFGrid:
* Only the flist determines if we get a full or a micro VM
* Full VMs ignore the **rootfs** field and use the first mount as their root filesystem (rootfs), as illustrated in the sketch after this list
* We can upgrade a full VM by tearing it down, leaving the disk in detached state, and then reattaching the disk to a new VM
* For more information on this, read [this documentation](https://forum.threefold.io/t/full-vm-recovery-tool/4152).
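A short sketch of a full VM illustrating the rootfs point above, assuming node `2`, the official Ubuntu 22.04 flist and a 15 GB disk (values are illustrative):
```terraform
resource "grid_network" "fullvm_net" {
  name        = "fullvmnet"
  nodes       = [2]
  ip_range    = "10.1.0.0/16"
  description = "network for the full VM sketch"
}

resource "grid_deployment" "fullvm" {
  name         = "fullvm"
  node         = 2
  network_name = grid_network.fullvm_net.name

  disks {
    name = "rootdisk"
    size = 15
  }

  vms {
    name       = "vm1"
    flist      = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
    cpu        = 1
    memory     = 512
    entrypoint = "/sbin/zinit init"
    # For a full VM flist, this first mount backs the root filesystem and the
    # rootfs field is ignored.
    mounts {
      disk_name   = "rootdisk"
      mount_point = "/disk1"
    }
    env_vars = {
      SSH_KEY = "PUT YOUR SSH KEY HERE"
    }
    planetary = true
  }
}
```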
## Tips on Managing Resources
As general advice, you can use multiple accounts on TFChain and group your resources per account (one possible setup is sketched after the list below).
This gives you the following benefits:
- More control over TFT spending
- Easier to delete all your contracts
- Less chance to make mistakes
- Can use an account to share access with multiple people
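As a sketch of the idea, provider aliases are one way to keep two accounts side by side in a single project (the mnemonics placeholders and resource names are illustrative); alternatively, simply keep one project directory per account:
```terraform
provider "grid" {
  alias     = "personal"
  mnemonics = "MNEMONICS OF THE PERSONAL ACCOUNT"
  network   = "main"
}

provider "grid" {
  alias     = "work"
  mnemonics = "MNEMONICS OF THE WORK ACCOUNT"
  network   = "main"
}

# Contracts created by this resource are billed to the work account.
resource "grid_network" "work_net" {
  provider    = grid.work
  name        = "worknet"
  nodes       = [2]
  ip_range    = "10.1.0.0/16"
  description = "network grouped under the work account"
}
```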
## Conclusion
This was a quick introduction to Terraform, for a complete guide, please read [this documentation](./terraform_full_vm.md). For advanced tutorials and deployments, read [this section](./advanced/terraform_advanced_readme.md). To learn more about the different resources to deploy with Terraform on the TFGrid, read [this section](./resources/terraform_resources_readme.md).
View File
@@ -1,280 +0,0 @@
<h1>Terraform Complete Full VM Deployment</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Main Process](#main-process)
- [Prerequisites](#prerequisites)
- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
- [Using the Grid Scheduler](#using-the-grid-scheduler)
- [Using the Grid Explorer](#using-the-grid-explorer)
- [Create the Terraform Files](#create-the-terraform-files)
- [Deploy the Full VM with Terraform](#deploy-the-full-vm-with-terraform)
- [SSH into the 3Node](#ssh-into-the-3node)
- [Delete the Deployment](#delete-the-deployment)
- [Conclusion](#conclusion)
***
## Introduction
This short ThreeFold Guide will teach you how to deploy a Full VM on the TFGrid using Terraform. For this guide, we will be deploying Ubuntu 22.04.
The steps are very simple. You first need to create the Terraform files, the variables file and the deployment file, and then deploy the full VM. After the deployment is done, you can SSH into the full VM.
The main goal of this guide is to show you all the necessary steps to deploy a Full VM on the TFGrid using Terraform. Once you get acquainted with this first basic deployment, you should be able to explore on your own the possibilities that the TFGrid and Terraform combined provide.
## Main Process
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workload.
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` file as is.
On your local computer, create a new folder named `terraform` and a subfolder called `deployments`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
Modify the variable file to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on.
Once this is done, initialize and apply Terraform to deploy your workload, then SSH into the Full VM. That's it! Now let's go through all these steps in further detail.
## Prerequisites
- [Install Terraform](./terraform_install.md)
You need to properly download and install Terraform. Simply follow the documentation for your operating system (Linux, MAC or Windows).
## Find a 3Node with the ThreeFold Explorer
We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address.
We present two options to find a suitable node: the scheduler and the TFGrid Explorer.
### Using the Grid Scheduler
Using the TFGrid scheduler can be very efficient depending on what you are trying to achieve. To learn more about the scheduler, please refer to this [Scheduler Guide](resources/terraform_scheduler.md).
### Using the Grid Explorer
We show here how to find a suitable 3Node using the ThreeFold Explorer.
- Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
- Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
- For proper understanding, we give further information on some relevant columns:
- `ID` refers to the node ID
- `Free Public IPs` refers to available IPv4 public IP addresses
- `HRU` refers to HDD storage
- `SRU` refers to SSD storage
- `MRU` refers to RAM (memory)
- `CRU` refers to virtual cores (vcores)
- To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
- At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
- For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
- `Free SRU (GB)`: 15
- `Free MRU (GB)`: 1
- `Total CRU (Cores)`: 1
- `Free Public IP`: 2
- Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
## Create the Terraform Files
Open the terminal.
- Go to the home folder
- ```
cd ~
```
- Create the folder `terraform` and the subfolder `deployment-full-vm`:
- ```
mkdir -p terraform/deployment-full-vm
```
- ```
cd terraform/deployment-full-vm
```
- Create the `main.tf` file:
- ```
nano main.tf
```
- Copy the `main.tf` content and save the file.
```
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
variable "mnemonics" {
type = string
}
variable "SSH_KEY" {
type = string
}
variable "tfnodeid1" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
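# Provider configured for mainnet; the mnemonics come from credentials.auto.tfvars.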
provider "grid" {
mnemonics = var.mnemonics
network = "main"
}
locals {
name = "tfvm"
}
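# Overlay network on the chosen node, with WireGuard access enabled.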
resource "grid_network" "net1" {
name = local.name
nodes = [var.tfnodeid1]
ip_range = "10.1.0.0/16"
description = "newer network"
add_wg_access = true
}
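# The workload: one disk of var.size GB plus a full Ubuntu 22.04 VM with a
# public IPv4 address and Planetary Network access.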
resource "grid_deployment" "d1" {
disks {
name = "disk1"
size = var.size
}
name = local.name
node = var.tfnodeid1
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
cpu = var.cpu
mounts {
disk_name = "disk1"
mount_point = "/disk1"
}
memory = var.memory
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip = true
planetary = true
}
}
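# Outputs used later to reach the VM: WireGuard config, private IP, Yggdrasil IP
# and public IPv4.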
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "ygg_ip1" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "ipv4_vm1" {
value = grid_deployment.d1.vms[0].computedip
}
```
In this file, we name the VM as `vm1`.
- Create the `credentials.auto.tfvars` file:
- ```
nano credentials.auto.tfvars
```
- Copy the `credentials.auto.tfvars` content and save the file.
```
mnemonics = "..."
SSH_KEY = "..."
tfnodeid1 = "..."
size = "15"
cpu = "1"
memory = "512"
```
Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the server used. Simply replace the three dots by the content.
We set here the minimum specs for a full VM, but you can adjust these parameters.
## Deploy the Full VM with Terraform
We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-full-vm` containing the main and variables files.
- Initialize Terraform:
- ```
terraform init
```
- Apply Terraform to deploy the full VM:
- ```
terraform apply
```
After deployment, take note of the VM's IPv4 address. You will need this address to SSH into the 3Node.
## SSH into the 3Node
- To [SSH into the 3Node](../getstarted/ssh_guide/ssh_guide.md), write the following:
- ```
ssh root@VM_IPv4_Address
```
## Delete the Deployment
To stop the Terraform deployment, you simply need to write the following line in the terminal:
```
terraform destroy
```
Make sure that you are in the Terraform directory you created for this deployment.
## Conclusion
You now have the basic knowledge and know-how to deploy on the TFGrid using Terraform.
As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
View File
@@ -1,87 +0,0 @@
![ ](./advanced/img//terraform_.png)
## Using Terraform
- make a directory for your project `mkdir myfirstproject`
- `cd myfirstproject`
- create `main.tf` <- creates the terraform main file
## Create
to start the deployment `terraform init && terraform apply`
## Destroying
can be done using `terraform destroy`
And that's it! You managed to deploy 2 VMs on the ThreeFold Grid v3.
## How to use a Terraform File
### Initializing the provider
In terraform's global section
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
version = "1.8.1"
}
}
}
```
- You can always provide a version to choose a specific version of the provider, like `1.8.1-dev` to use version `1.8.1` for devnet
- If `version = "1.8.1"` is omitted, the provider will fetch the latest version, but for environments other than main you have to specify the version explicitly
- For devnet, qanet and testnet use version = `"<VERSION>-dev", "<VERSION>-qa" and "<VERSION>-rcx"` respectively
Providers can take different arguments, e.g. which identity to use when deploying, which Substrate network to create contracts on, etc. This can be done in the provider section:
```terraform
provider "grid" {
mnemonics = "FROM THE CREATE TWIN STEP"
network = "dev" # or test to use testnet
}
```
Please note you can leave its content empty and export everything as environment variables
```
export MNEMONICS="....."
export NETWORK="....."
```
For more info see [Provider Manual](./advanced/terraform_provider.md)
### output section
```terraform
output "wg_config" {
value = grid_network.net1.access_wg_config
}
output "node1_vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "node1_vm2_ip" {
value = grid_deployment.d1.vms[1].ip
}
output "public_ip" {
value = grid_deployment.d1.vms[0].computedip
}
```
Output parameters show what has been done:
- the overlay wireguard network configurations
- the private IPs of the VMs
- the public IP of the VM `exposed under computedip`
### Which flists to use in VM
see [list of flists](../manual3_iac/grid3_supported_flists.md)
View File
@@ -1,55 +0,0 @@
<h1> GPU Support and Terraform </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
The TFGrid now supports GPUs. We present here a quick example. This section will be expanded as new information comes in.
## Example
```terraform
terraform {
required_providers {
grid = {
source = "threefoldtechdev.com/providers/grid"
}
}
}
provider "grid" {
}
locals {
name = "testvm"
}
resource "grid_network" "net1" {
name = local.name
nodes = [93]
ip_range = "10.1.0.0/16"
description = "newer network"
}
resource "grid_deployment" "d1" {
name = local.name
node = 93
network_name = grid_network.net1.name
vms {
name = "vm1"
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
cpu = 2
memory = 1024
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = file("~/.ssh/id_rsa.pub")
}
planetary = true
gpus = [
"0000:0e:00.0/1002/744c"
]
}
}
```
View File
@@ -1,53 +0,0 @@
<h1> Installing Terraform</h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Install Terraform](#install-terraform)
- [Install Terraform on Linux](#install-terraform-on-linux)
- [Install Terraform on MAC](#install-terraform-on-mac)
- [Install Terraform on Windows](#install-terraform-on-windows)
- [ThreeFold Terraform Plugin](#threefold-terraform-plugin)
- [Questions and Feedback](#questions-and-feedback)
***
## Introduction
There are many ways to install Terraform depending on your operating system. Terraform is available for Linux, MAC and Windows.
## Install Terraform
You can get Terraform from the Terraform website [download page](https://www.terraform.io/downloads.html). You can also install it using your system package manager. The Terraform [installation manual](https://learn.hashicorp.com/tutorials/terraform/install-cli) contains the essential information for a proper installation.
We cover here the basic steps for Linux, MAC and Windows for convenience. Refer to the official Terraform documentation if needed.
### Install Terraform on Linux
To install Terraform on Linux, we follow the official [Terraform documentation](https://developer.hashicorp.com/terraform/downloads).
* [Install Terraform on Linux](../computer_it_basics/cli_scripts_basics.md#install-terraform)
### Install Terraform on MAC
To install Terraform on MAC, install Brew and then install Terraform.
* [Install Brew](../computer_it_basics/cli_scripts_basics.md#install-brew)
* [Install Terraform with Brew](../computer_it_basics/cli_scripts_basics.md#install-terraform-with-brew)
### Install Terraform on Windows
To install Terraform on Windows, a quick way is to first install Chocolatey and then install Terraform.
* [Install Chocolatey](../computer_it_basics/cli_scripts_basics.md#install-chocolatey)
* [Install Terraform with Chocolatey](../computer_it_basics/cli_scripts_basics.md#install-terraform-with-chocolatey)
## ThreeFold Terraform Plugin
The ThreeFold [Terraform plugin](https://github.com/threefoldtech/terraform-provider-grid) is supported on Linux, MAC and Windows.
There's no need to specifically install the ThreeFold Terraform plugin. Terraform will automatically load it from an online registry according to the instructions within the deployment file.
## Questions and Feedback
If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/) or by reaching out to the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
View File
@@ -1,45 +0,0 @@
<h1> Terraform </h1>
Welcome to the *Terraform* section of the ThreeFold Manual!
In this section, we'll embark on a journey to explore the powerful capabilities of Terraform within the ThreeFold Grid ecosystem. Terraform, a cutting-edge infrastructure as code (IaC) tool, empowers you to define and provision your infrastructure efficiently and consistently.
<h2>Table of Contents</h2>
- [What is Terraform?](#what-is-terraform)
- [Terraform on ThreeFold Grid: Unleashing Power and Simplicity](#terraform-on-threefold-grid-unleashing-power-and-simplicity)
- [Get Started](#get-started)
- [Features](#features)
- [What is Not Supported](#what-is-not-supported)
***
## What is Terraform?
Terraform is an open-source tool that enables you to describe and deploy infrastructure using a declarative configuration language. With Terraform, you can define your infrastructure components, such as virtual machines, networks, and storage, in a human-readable configuration file. This file, often referred to as the Terraform script, becomes a blueprint for your entire infrastructure.
The beauty of Terraform lies in its ability to automate the provisioning and management of infrastructure across various cloud providers, ensuring that your deployments are reproducible and scalable. It promotes collaboration, version control, and the ability to treat your infrastructure as code, providing a unified and seamless approach to managing complex environments.
## Terraform on ThreeFold Grid: Unleashing Power and Simplicity
Within the ThreeFold Grid ecosystem, Terraform plays a pivotal role in streamlining the deployment and orchestration of decentralized, peer-to-peer infrastructure. Leveraging the unique capabilities of the ThreeFold Grid, you can use Terraform to define and deploy your workloads, tapping into the TFGrid decentralized architecture for unparalleled scalability, reliability, and sustainability.
This manual will guide you through the process of setting up, configuring, and managing your infrastructure on the ThreeFold Grid using Terraform. Whether you're a seasoned developer, a DevOps professional, or someone exploring the world of decentralized computing for the first time, this guide is designed to provide clear and concise instructions to help you get started.
## Get Started
![ ](../terraform/img//terraform_works.png)
Threefold loves Open Source! In v3.0 we are integrating one of the most popular 'Infrastructure as Code' (IaC) tools of the cloud industry, [Terraform](https://terraform.io). Utilizing the ThreeFold Grid v3 through Terraform gives a consistent workflow and a familiar experience to everyone coming from a different background. Terraform describes the desired state of the deployment instead of imperatively describing the low-level details and the mechanics of how things should be glued together.
## Features
- All basic primitives from ThreeFold grid can be deployed, which is a lot.
- Terraform can destroy a deployment
- Terraform shows all the outputs
## What is Not Supported
- We don't support updates/upgrades: if you want to change the properties of the currently running instances or move to another node, you need to destroy the deployment and re-create it. However, adding a VM to an existing deployment shouldn't affect the other running VMs, and likewise, decommissioning a VM from a deployment shouldn't affect the others.
View File
@@ -1,34 +0,0 @@
<h1> Terraform</h1>
<h2>Table of Contents</h2>
- [Overview](./terraform_readme.md)
- [Installing Terraform](./terraform_install.md)
- [Terraform Basics](./terraform_basics.md)
- [Full VM Deployment](./terraform_full_vm.md)
- [GPU Support](./terraform_gpu_support.md)
- [Resources](./resources/terraform_resources_readme.md)
- [Using Scheduler](./resources/terraform_scheduler.md)
- [Virtual Machine](./resources/terraform_vm.md)
- [Web Gateway](./resources/terraform_vm_gateway.md)
- [Kubernetes Cluster](./resources/terraform_k8s.md)
- [ZDB](./resources/terraform_zdb.md)
- [Quantum Safe Filesystem](./resources/terraform_qsfs.md)
- [QSFS on Micro VM](./resources/terraform_qsfs_on_microvm.md)
- [QSFS on Full VM](./resources/terraform_qsfs_on_full_vm.md)
- [CapRover](./resources/terraform_caprover.md)
- [Advanced](./advanced/terraform_advanced_readme.md)
- [Terraform Provider](./advanced/terraform_provider.md)
- [Terraform Provisioners](./advanced/terraform_provisioners.md)
- [Mounts](./advanced/terraform_mounts.md)
- [Capacity Planning](./advanced/terraform_capacity_planning.md)
- [Updates](./advanced/terraform_updates.md)
- [SSH Connection with Wireguard](./advanced/terraform_wireguard_ssh.md)
- [Set a Wireguard VPN](./advanced/terraform_wireguard_vpn.md)
- [Synced MariaDB Databases](./advanced/terraform_mariadb_synced_databases.md)
- [Nomad](./advanced/terraform_nomad.md)
- [Nextcloud Deployments](./advanced/terraform_nextcloud_toc.md)
- [Nextcloud All-in-One Deployment](./advanced/terraform_nextcloud_aio.md)
- [Nextcloud Single Deployment](./advanced/terraform_nextcloud_single.md)
- [Nextcloud Redundant Deployment](./advanced/terraform_nextcloud_redundant.md)
- [Nextcloud 2-Node VPN Deployment](./advanced/terraform_nextcloud_vpn.md)