updated smaller collections for manual
@@ -0,0 +1,18 @@
|
||||
<h1> Terraform Advanced </h1>
|
||||
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [Terraform Provider](./terraform_provider.html)
|
||||
- [Terraform Provisioners](./terraform_provisioners.html)
|
||||
- [Mounts](./terraform_mounts.html)
|
||||
- [Capacity Planning](./terraform_capacity_planning.html)
|
||||
- [Updates](./terraform_updates.html)
|
||||
- [SSH Connection with Wireguard](./terraform_wireguard_ssh.md)
|
||||
- [Set a Wireguard VPN](./terraform_wireguard_vpn.md)
|
||||
- [Synced MariaDB Databases](./terraform_mariadb_synced_databases.md)
|
||||
- [Nomad](./terraform_nomad.md)
|
||||
- [Nextcloud Deployments](./terraform_nextcloud_toc.md)
|
||||
- [Nextcloud All-in-One Deployment](./terraform_nextcloud_aio.md)
|
||||
- [Nextcloud Single Deployment](./terraform_nextcloud_single.md)
|
||||
- [Nextcloud Redundant Deployment](./terraform_nextcloud_redundant.md)
|
||||
- [Nextcloud 2-Node VPN Deployment](./terraform_nextcloud_vpn.md)
|
@@ -0,0 +1,159 @@
|
||||
<h1> Capacity Planning </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Example](#example)
|
||||
- [Preparing the Requests](#preparing-the-requests)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/simple-dynamic/main.tf) we will discuss capacity planning on top of the TFGrid.
|
||||
|
||||
## Example
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
locals {
|
||||
name = "testvm"
|
||||
}
|
||||
|
||||
resource "grid_scheduler" "sched" {
|
||||
requests {
|
||||
name = "node1"
|
||||
cru = 3
|
||||
sru = 1024
|
||||
mru = 2048
|
||||
node_exclude = [33] # exclude node 33 from your search
|
||||
public_ips_count = 0 # this deployment needs 0 public ips
|
||||
public_config = false # this node does not need to have public config
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = [grid_scheduler.sched.nodes["node1"]]
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "newer network"
|
||||
}
|
||||
resource "grid_deployment" "d1" {
|
||||
name = local.name
|
||||
node = grid_scheduler.sched.nodes["node1"]
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 2
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = "PUT YOUR SSH KEY HERE"
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "anothervm"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 1
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = "PUT YOUR SSH KEY HERE"
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
output "vm1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "vm1_ygg_ip" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
|
||||
output "vm2_ip" {
|
||||
value = grid_deployment.d1.vms[1].ip
|
||||
}
|
||||
output "vm2_ygg_ip" {
|
||||
value = grid_deployment.d1.vms[1].ygg_ip
|
||||
}
|
||||
```
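
If you want to confirm which node the scheduler actually selected, you can add an optional output to the example above (a small sketch; `scheduled_node` is just an illustrative name):

```terraform
output "scheduled_node" {
  value = grid_scheduler.sched.nodes["node1"]
}
```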
|
||||
|
||||
## Preparing the Requests
|
||||
|
||||
```terraform
|
||||
resource "grid_scheduler" "sched" {
|
||||
# a machine for the first server instance
|
||||
requests {
|
||||
name = "server1"
|
||||
cru = 1
|
||||
sru = 256
|
||||
mru = 256
|
||||
}
|
||||
# a machine for the second server instance
|
||||
requests {
|
||||
name = "server2"
|
||||
cru = 1
|
||||
sru = 256
|
||||
mru = 256
|
||||
}
|
||||
# a name workload
|
||||
requests {
|
||||
name = "gateway"
|
||||
public_config = true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Here we define a list of requests. Each request has a name and filter options, e.g. `cru`, `sru`, `mru` and `hru`, whether the node should have a `public_config`, the `public_ips_count` for this deployment, whether the node should be `dedicated`, whether it should be `distinct` from the other nodes in this planner, the `farm_id` to search in, nodes to exclude from the search (`node_exclude`), and whether the node should be `certified`.
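
For illustration, a single request could combine several of these filters. This is only a sketch based on the options listed above; check the exact attribute names and units against the scheduler documentation linked below:

```terraform
resource "grid_scheduler" "sched" {
  requests {
    name             = "node1"
    cru              = 2      # requested virtual cores
    sru              = 1024   # requested SSD storage
    mru              = 2048   # requested memory
    hru              = 10240  # requested HDD storage
    farm_id          = 1      # restrict the search to a given farm
    node_exclude     = [33]   # nodes to exclude from the search
    public_ips_count = 1      # public IPs needed by this deployment
    public_config    = true   # the node must have a public config
    dedicated        = false  # whether the node must be dedicated
    certified        = false  # whether the node must be certified
  }
}
```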
|
||||
|
||||
The full docs for the capacity planner `scheduler` are found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/scheduler.md).
|
||||
|
||||
After that, we can reference the `grid_scheduler` object in our code using the request name instead of a hardcoded node ID.
|
||||
|
||||
For example:
|
||||
|
||||
```terraform
|
||||
resource "grid_deployment" "server1" {
|
||||
node = grid_scheduler.sched.nodes["server1"]
|
||||
network_name = grid_network.net1.name
|
||||
ip_range = lookup(grid_network.net1.nodes_ip_range, grid_scheduler.sched.nodes["server1"], "")
|
||||
vms {
|
||||
name = "firstserver"
|
||||
flist = "https://hub.grid.tf/omar0.3bot/omarelawady-simple-http-server-latest.flist"
|
||||
cpu = 1
|
||||
memory = 256
|
||||
rootfs_size = 256
|
||||
entrypoint = "/main.sh"
|
||||
env_vars = {
|
||||
SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
|
||||
}
|
||||
env_vars = {
|
||||
PATH = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
|
||||
}
|
||||
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
> Note: you need to call `distinct` when specifying the nodes in the network, because the scheduler may assign `server1` and `server2` to the same node. Example:
|
||||
|
||||
```terraform
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = distinct(values(grid_scheduler.sched.nodes))
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "newer network"
|
||||
}
|
||||
```
|
@@ -0,0 +1,585 @@
|
||||
<h1>MariaDB Synced Databases Between Two VMs</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Main Steps](#main-steps)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Find Nodes with the ThreeFold Explorer](#find-nodes-with-the-threefold-explorer)
|
||||
- [Set the VMs](#set-the-vms)
|
||||
- [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform)
|
||||
- [Create the Terraform Files](#create-the-terraform-files)
|
||||
- [Deploy the 3Nodes with Terraform](#deploy-the-3nodes-with-terraform)
|
||||
- [SSH into the 3Nodes](#ssh-into-the-3nodes)
|
||||
- [Preparing the VMs for the Deployment](#preparing-the-vms-for-the-deployment)
|
||||
- [Test the Wireguard Connection](#test-the-wireguard-connection)
|
||||
- [Configure the MariaDB Database](#configure-the-mariadb-database)
|
||||
- [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database)
|
||||
- [Create User with Replication Grant](#create-user-with-replication-grant)
|
||||
- [Verify the Access of the User](#verify-the-access-of-the-user)
|
||||
- [Set the VMs to accept the MariaDB Connection](#set-the-vms-to-accept-the-mariadb-connection)
|
||||
- [TF Template Worker Server Data](#tf-template-worker-server-data)
|
||||
- [TF Template Master Server Data](#tf-template-master-server-data)
|
||||
- [Set the MariaDB Databases on Both 3Nodes](#set-the-mariadb-databases-on-both-3nodes)
|
||||
- [Install and Set GlusterFS](#install-and-set-glusterfs)
|
||||
- [Conclusion](#conclusion)
|
||||
|
||||
***
|
||||
|
||||
# Introduction
|
||||
|
||||
In this ThreeFold Guide, we show how to deploy a VPN with Wireguard and create a synced MariaDB database between the two servers using GlusterFS, a scalable network filesystem. Any change in one VM's database will be echoed in the other VM's database. This kind of deployment can lead to useful server architectures.
|
||||
|
||||
|
||||
|
||||
# Main Steps
|
||||
|
||||
This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out!
|
||||
|
||||
To get an overview of the whole process, we present the main steps:
|
||||
|
||||
* Download the dependencies
|
||||
* Find two 3Nodes on the TFGrid
|
||||
* Deploy and set the VMs with Terraform
|
||||
* Create a MariaDB database
|
||||
* Set GlusterFS
|
||||
|
||||
|
||||
|
||||
# Prerequisites
|
||||
|
||||
* [Install Terraform](https://developer.hashicorp.com/terraform/downloads)
|
||||
* [Install Wireguard](https://www.wireguard.com/install/)
|
||||
|
||||
You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the documentation for your operating system (Linux, macOS or Windows).
|
||||
|
||||
|
||||
|
||||
# Find Nodes with the ThreeFold Explorer
|
||||
|
||||
We first need to decide on which 3Nodes we will be deploying our workload.
|
||||
|
||||
We thus start by finding two 3Nodes with sufficient resources. For this current MariaDB guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for a 3Node with a public IPv4 address.
|
||||
|
||||
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
|
||||
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
|
||||
* For proper understanding, we give further information on some relevant columns:
|
||||
* `ID` refers to the node ID
|
||||
* `Free Public IPs` refers to available IPv4 public IP addresses
|
||||
* `HRU` refers to HDD storage
|
||||
* `SRU` refers to SSD storage
|
||||
* `MRU` refers to RAM (memory)
|
||||
* `CRU` refers to virtual cores (vcores)
|
||||
* To quicken the process of finding proper 3Nodes, you can narrow down the search by adding filters:
|
||||
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
|
||||
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
|
||||
* `Free SRU (GB)`: 50
|
||||
* `Free MRU (GB)`: 2
|
||||
* `Total CRU (Cores)`: 1
|
||||
* `Free Public IP`: 2
|
||||
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
|
||||
|
||||
Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
|
||||
|
||||
|
||||
|
||||
# Set the VMs
|
||||
## Create a Two Servers Wireguard VPN with Terraform
|
||||
|
||||
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
|
||||
|
||||
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file.
|
||||
Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.
|
||||
|
||||
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-synced-db`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
|
||||
|
||||
Modify the variable files to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3Nodes you will be deploying on.
|
||||
|
||||
|
||||
|
||||
### Create the Terraform Files
|
||||
|
||||
Open the terminal.
|
||||
|
||||
* Go to the home folder
|
||||
* ```
|
||||
cd ~
|
||||
```
|
||||
|
||||
* Create the folder `terraform` and the subfolder `deployment-synced-db`:
|
||||
* ```
|
||||
mkdir -p terraform/deployment-synced-db
|
||||
```
|
||||
* ```
|
||||
cd terraform/deployment-synced-db
|
||||
```
|
||||
* Create the `main.tf` file:
|
||||
* ```
|
||||
nano main.tf
|
||||
```
|
||||
|
||||
* Copy the `main.tf` content and save the file.
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
variable "mnemonics" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "SSH_KEY" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "tfnodeid1" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "tfnodeid2" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "size" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "cpu" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "memory" {
|
||||
type = string
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = var.mnemonics
|
||||
network = "main"
|
||||
}
|
||||
|
||||
locals {
|
||||
name = "tfvm"
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = [var.tfnodeid1, var.tfnodeid2]
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "newer network"
|
||||
add_wg_access = true
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d1" {
|
||||
disks {
|
||||
name = "disk1"
|
||||
size = var.size
|
||||
}
|
||||
name = local.name
|
||||
node = var.tfnodeid1
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
|
||||
cpu = var.cpu
|
||||
mounts {
|
||||
disk_name = "disk1"
|
||||
mount_point = "/disk1"
|
||||
}
|
||||
memory = var.memory
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
publicip = true
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d2" {
|
||||
disks {
|
||||
name = "disk2"
|
||||
size = var.size
|
||||
}
|
||||
name = local.name
|
||||
node = var.tfnodeid2
|
||||
network_name = grid_network.net1.name
|
||||
|
||||
vms {
|
||||
name = "vm2"
|
||||
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
|
||||
cpu = var.cpu
|
||||
mounts {
|
||||
disk_name = "disk2"
|
||||
mount_point = "/disk2"
|
||||
}
|
||||
memory = var.memory
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
publicip = true
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net1.access_wg_config
|
||||
}
|
||||
output "node1_zmachine1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "node1_zmachine2_ip" {
|
||||
value = grid_deployment.d2.vms[0].ip
|
||||
}
|
||||
|
||||
output "ygg_ip1" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
output "ygg_ip2" {
|
||||
value = grid_deployment.d2.vms[0].ygg_ip
|
||||
}
|
||||
|
||||
output "ipv4_vm1" {
|
||||
value = grid_deployment.d1.vms[0].computedip
|
||||
}
|
||||
|
||||
output "ipv4_vm2" {
|
||||
value = grid_deployment.d2.vms[0].computedip
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
In this file, we name the first VM `vm1` and the second VM `vm2`. For ease of communication, in this guide we refer to `vm1` as the master VM and `vm2` as the worker VM.
|
||||
|
||||
In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. These might be different in your own deployment. If so, adjust the commands in this guide accordingly.
|
||||
|
||||
* Create the `credentials.auto.tfvars` file:
|
||||
* ```
|
||||
nano credentials.auto.tfvars
|
||||
```
|
||||
|
||||
* Copy the `credentials.auto.tfvars` content and save the file.
|
||||
* ```
|
||||
mnemonics = "..."
|
||||
SSH_KEY = "..."
|
||||
|
||||
tfnodeid1 = "..."
|
||||
tfnodeid2 = "..."
|
||||
|
||||
size = "50"
|
||||
cpu = "1"
|
||||
memory = "2048"
|
||||
```
|
||||
|
||||
Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots by the content. Obviously, you can decide to increase or modify the quantity in the variables `size`, `cpu` and `memory`.
|
||||
|
||||
|
||||
|
||||
### Deploy the 3Nodes with Terraform
|
||||
|
||||
We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-synced-db` with the main and variables files.
|
||||
|
||||
* Initialize Terraform:
|
||||
* ```
|
||||
terraform init
|
||||
```
|
||||
|
||||
* Apply Terraform to deploy the VPN:
|
||||
* ```
|
||||
terraform apply
|
||||
```
|
||||
|
||||
After the deployment, take note of the 3Nodes' IPv4 addresses. You will need those addresses to SSH into the 3Nodes.
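
You can also query a specific output directly, for example the public IPv4 addresses defined in `main.tf`:

* ```
terraform output ipv4_vm1
```
* ```
terraform output ipv4_vm2
```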
|
||||
|
||||
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
|
||||
* ```
|
||||
terraform show
|
||||
```
|
||||
|
||||
|
||||
|
||||
### SSH into the 3Nodes
|
||||
|
||||
* To [SSH into the 3Nodes](../../getstarted/ssh_guide/ssh_guide.md), write the following while making sure to set the proper IP address for each VM:
|
||||
* ```
|
||||
ssh root@3node_IPv4_Address
|
||||
```
|
||||
|
||||
|
||||
|
||||
### Preparing the VMs for the Deployment
|
||||
|
||||
* Update and upgrade the system
|
||||
* ```
|
||||
apt update && apt upgrade -y && apt-get install apache2 -y
|
||||
```
|
||||
* After the upgrade, you might need to reboot the system for the changes to be fully taken into account
|
||||
* ```
|
||||
reboot
|
||||
```
|
||||
* Reconnect to the VMs
|
||||
|
||||
|
||||
|
||||
### Test the Wireguard Connection
|
||||
|
||||
We now want to ping the VMs using Wireguard. This will ensure the connection is properly established.
|
||||
|
||||
First, we set Wireguard with the Terraform output.
|
||||
|
||||
* On your local computer, take Terraform's `wg_config` output and create a `wg.conf` file at `/usr/local/etc/wireguard/wg.conf`.
|
||||
* ```
|
||||
nano /usr/local/etc/wireguard/wg.conf
|
||||
```
|
||||
|
||||
* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The WireGuard configuration is the text between the `EOT` markers.
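
* For reference, the generated WireGuard configuration has roughly the following shape (the keys, IPs and port below are placeholders, not values from a real deployment):
* ```
[Interface]
Address = <your_wireguard_ip>
PrivateKey = <your_private_key>

[Peer]
PublicKey = <node_public_key>
AllowedIPs = 10.1.3.0/24, 10.1.4.0/24
PersistentKeepalive = 25
Endpoint = <node_public_ip>:<port>
```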
|
||||
|
||||
* Start the WireGuard on your local computer:
|
||||
* ```
|
||||
wg-quick up wg
|
||||
```
|
||||
|
||||
* To stop the wireguard service:
|
||||
* ```
|
||||
wg-quick down wg
|
||||
```
|
||||
|
||||
> Note: If it doesn't work and you already did a WireGuard connection with the same file from Terraform (from a previous deployment perhaps), do `wg-quick down wg`, then `wg-quick up wg`.
|
||||
This should set everything properly.
|
||||
|
||||
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct:
|
||||
* ```
|
||||
ping 10.1.3.2
|
||||
```
|
||||
* ```
|
||||
ping 10.1.4.2
|
||||
```
|
||||
|
||||
If you correctly receive the packets from the two VMs, you know that the VPN is properly set.
|
||||
|
||||
For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
|
||||
|
||||
|
||||
|
||||
# Configure the MariaDB Database
|
||||
|
||||
## Download MariaDB and Configure the Database
|
||||
|
||||
* Download the MariaDB server and client on both the master VM and the worker VM
|
||||
* ```
|
||||
apt install mariadb-server mariadb-client -y
|
||||
```
|
||||
* Configure the MariaDB database
|
||||
* ```
|
||||
nano /etc/mysql/mariadb.conf.d/50-server.cnf
|
||||
```
|
||||
* Do the following changes
|
||||
* Add `#` in front of
|
||||
* `bind-address = 127.0.0.1`
|
||||
* Remove `#` in front of the following lines and replace `X` by `1` for the master VM and by `2` for the worker VM
|
||||
```
|
||||
#server-id = X
|
||||
#log_bin = /var/log/mysql/mysql-bin.log
|
||||
```
|
||||
* Below the lines shown above add the following line:
|
||||
```
|
||||
binlog_do_db = tfdatabase
|
||||
```
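
* After these changes, the relevant lines of `50-server.cnf` should look roughly like this (shown here for the master VM with `server-id = 1`; use `2` on the worker VM):
```
#bind-address            = 127.0.0.1
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
binlog_do_db             = tfdatabase
```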
|
||||
|
||||
* Restart MariaDB
|
||||
* ```
|
||||
systemctl restart mysql
|
||||
```
|
||||
|
||||
* Launch MariaDB
|
||||
* ```
|
||||
mysql
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Create User with Replication Grant
|
||||
|
||||
* Do the following on both the master and the worker
|
||||
* ```
|
||||
CREATE USER 'repuser'@'%' IDENTIFIED BY 'password';
|
||||
GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ;
|
||||
FLUSH PRIVILEGES;
|
||||
show master status\G;
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Verify the Access of the User
|
||||
* Verify the access of the `repuser` user
|
||||
```
|
||||
SELECT host FROM mysql.user WHERE User = 'repuser';
|
||||
```
|
||||
* You want to see `%` in Host
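* The output should look roughly like this:
```
+------+
| host |
+------+
| %    |
+------+
```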
|
||||
|
||||
|
||||
|
||||
## Set the VMs to accept the MariaDB Connection
|
||||
|
||||
### TF Template Worker Server Data
|
||||
|
||||
* Write the following in the worker VM. The `MASTER_LOG_FILE` and `MASTER_LOG_POS` values come from the `File` and `Position` columns of `show master status\G` run on the master VM (here `mysql-bin.000001` and `328`); adjust them to match your own output:
|
||||
* ```
|
||||
CHANGE MASTER TO MASTER_HOST='10.1.3.2',
|
||||
MASTER_USER='repuser',
|
||||
MASTER_PASSWORD='password',
|
||||
MASTER_LOG_FILE='mysql-bin.000001',
|
||||
MASTER_LOG_POS=328;
|
||||
```
|
||||
* ```
|
||||
start slave;
|
||||
```
|
||||
* ```
|
||||
show slave status\G;
|
||||
```
|
||||
|
||||
|
||||
|
||||
### TF Template Master Server Data
|
||||
|
||||
* Write the following in the master VM, using the worker VM's virtual IP and the `File` and `Position` values from `show master status\G` on the worker VM:
|
||||
* ```
|
||||
CHANGE MASTER TO MASTER_HOST='10.1.4.2',
|
||||
MASTER_USER='repuser',
|
||||
MASTER_PASSWORD='password',
|
||||
MASTER_LOG_FILE='mysql-bin.000001',
|
||||
MASTER_LOG_POS=328;
|
||||
```
|
||||
* ```
|
||||
start slave;
|
||||
```
|
||||
* ```
|
||||
show slave status\G;
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Set the MariaDB Databases on Both 3Nodes
|
||||
|
||||
We now set the MariaDB database. You should choose your own username and password. The password should be the same for the master and worker VMs.
|
||||
|
||||
* On the master VM, write:
|
||||
```
|
||||
CREATE DATABASE tfdatabase;
|
||||
CREATE USER 'ncuser'@'%';
|
||||
GRANT ALL PRIVILEGES ON tfdatabase.* TO ncuser@'%' IDENTIFIED BY 'password1234';
|
||||
FLUSH PRIVILEGES;
|
||||
```
|
||||
|
||||
* On the worker VM, write:
|
||||
```
|
||||
CREATE USER 'ncuser'@'%';
|
||||
GRANT ALL PRIVILEGES ON tfdatabase.* TO ncuser@'%' IDENTIFIED BY 'password1234';
|
||||
FLUSH PRIVILEGES;
|
||||
```
|
||||
|
||||
* To see a database, write the following:
|
||||
```
|
||||
show databases;
|
||||
```
|
||||
* To see users on MariaDB:
|
||||
```
|
||||
select user from mysql.user;
|
||||
```
|
||||
* To exit MariaDB:
|
||||
```
|
||||
exit;
|
||||
```
|
||||
|
||||
|
||||
|
||||
# Install and Set GlusterFS
|
||||
|
||||
We will now install and set [GlusterFS](https://www.gluster.org/), a free and open-source software scalable network filesystem.
|
||||
|
||||
* Install GlusterFS on both the master and worker VMs
|
||||
* ```
|
||||
add-apt-repository ppa:gluster/glusterfs-7 -y && apt install glusterfs-server -y
|
||||
```
|
||||
* Start the GlusterFS service on both VMs
|
||||
* ```
|
||||
systemctl start glusterd.service && systemctl enable glusterd.service
|
||||
```
|
||||
* Set the master to worker probe IP on the master VM:
|
||||
* ```
|
||||
gluster peer probe 10.1.4.2
|
||||
```
|
||||
|
||||
* See the peer status on the worker VM:
|
||||
* ```
|
||||
gluster peer status
|
||||
```
|
||||
|
||||
* Set the master and worker IP address on the master VM:
|
||||
* ```
|
||||
gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force
|
||||
```
|
||||
|
||||
* Start Gluster:
|
||||
* ```
|
||||
gluster volume start vol1
|
||||
```
|
||||
|
||||
* Check the status on the worker VM:
|
||||
* ```
|
||||
gluster volume status
|
||||
```
|
||||
|
||||
* Mount the server with the master IP on the master VM:
|
||||
* ```
|
||||
mount -t glusterfs 10.1.3.2:/vol1 /var/www
|
||||
```
|
||||
|
||||
* See if the mount is there on the master VM:
|
||||
* ```
|
||||
df -h
|
||||
```
|
||||
|
||||
* Mount the Server with the worker IP on the worker VM:
|
||||
* ```
|
||||
mount -t glusterfs 10.1.4.2:/vol1 /var/www
|
||||
```
|
||||
|
||||
* See if the mount is there on the worker VM:
|
||||
* ```
|
||||
df -h
|
||||
```
|
||||
|
||||
We now make the mount persistent with the fstab file on both the master and the worker.
|
||||
|
||||
* To prevent the mount from being aborted if the server reboots, write the following on both servers:
|
||||
* ```
|
||||
nano /etc/fstab
|
||||
```
|
||||
* Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2):
|
||||
* ```
|
||||
10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
|
||||
```
|
||||
|
||||
* Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2):
|
||||
* ```
|
||||
10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
|
||||
```
|
||||
|
||||
The databases of both VMs are accessible in `/var/www`. This means that any change in either folder `/var/www` of each VM will be reflected in the same folder of the other VM. In other words, the databases are now synced in real time.
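
As a quick check, you can create a test file on one VM and confirm it appears on the other (the file name below is just an example):

* On the master VM:
* ```
touch /var/www/sync-test.txt
```
* On the worker VM:
* ```
ls /var/www
```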
|
||||
|
||||
|
||||
|
||||
# Conclusion
|
||||
|
||||
You now have two VMs syncing their MariaDB databases. This can be very useful for a plethora of projects requiring redundancy in storage.
|
||||
|
||||
You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Wireguard, Terraform, MariaDB and GlusterFS.
|
||||
|
||||
As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
|
||||
|
@@ -0,0 +1,86 @@
|
||||
<h1> Deploying a VM with Mounts Using Terraform </h1>
|
||||
|
||||
<h2> Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Example](#example)
|
||||
- [More Info](#more-info)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/mounts/main.tf), we will see how to deploy a VM and mount disks on it on the TFGrid.
|
||||
|
||||
## Example
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
nodes = [2, 4]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "newer network"
|
||||
}
|
||||
resource "grid_deployment" "d1" {
|
||||
node = 2
|
||||
network_name = grid_network.net1.name
|
||||
ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "")
|
||||
disks {
|
||||
name = "data"
|
||||
size = 10
|
||||
description = "volume holding app data"
|
||||
}
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 1
|
||||
publicip = true
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
mounts {
|
||||
disk_name = "data"
|
||||
mount_point = "/app"
|
||||
}
|
||||
env_vars = {
|
||||
SSH_KEY = "PUT YOUR SSH KEY HERE"
|
||||
}
|
||||
}
|
||||
vms {
|
||||
name = "anothervm"
|
||||
flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
|
||||
cpu = 1
|
||||
memory = 1024
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = "PUT YOUR SSH KEY HERE"
|
||||
}
|
||||
}
|
||||
}
|
||||
output "wg_config" {
|
||||
value = grid_network.net1.access_wg_config
|
||||
}
|
||||
output "node1_zmachine1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "node1_zmachine2_ip" {
|
||||
value = grid_deployment.d1.vms[1].ip
|
||||
}
|
||||
output "public_ip" {
|
||||
value = grid_deployment.d1.vms[0].computedip
|
||||
}
|
||||
```
|
||||
|
||||
## More Info
|
||||
|
||||
A complete list of Mount workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vmsmounts).
|
@@ -0,0 +1,140 @@
|
||||
<h1> Nextcloud All-in-One Deployment </h1>
|
||||
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Deploy a Full VM](#deploy-a-full-vm)
|
||||
- [Set a Firewall](#set-a-firewall)
|
||||
- [Set the DNS Record for Your Domain](#set-the-dns-record-for-your-domain)
|
||||
- [Install Nextcloud All-in-One](#install-nextcloud-all-in-one)
|
||||
- [Set BorgBackup](#set-borgbackup)
|
||||
- [Conclusion](#conclusion)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
We present a quick way to install Nextcloud All-in-One on the TFGrid. This guide is based heavily on the Nextcloud documentation available [here](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/). It's mostly a simple adaptation to the TFGrid with some additional information on how to correctly set the firewall and the DNS record for your domain.
|
||||
|
||||
|
||||
|
||||
## Deploy a Full VM
|
||||
|
||||
* Deploy a Full VM with the [TF Dashboard](../../getstarted/ssh_guide/ssh_openssh.md) or [Terraform](../terraform_full_vm.md)
|
||||
* Minimum specs:
|
||||
* IPv4 Address
|
||||
* 2 vcores
|
||||
* 4096 MB of RAM
|
||||
* 50 GB of Storage
|
||||
* Take note of the VM IP address
|
||||
* SSH into the Full VM
|
||||
|
||||
|
||||
|
||||
## Set a Firewall
|
||||
|
||||
We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).
|
||||
|
||||
It should already be installed on your system. If it is not, install it with the following command:
|
||||
|
||||
```
|
||||
apt install ufw
|
||||
```
|
||||
|
||||
For our security rules, we want to allow SSH, HTTP and HTTPS (443 and 8443).
|
||||
|
||||
We thus add the following rules:
|
||||
|
||||
* Allow SSH (port 22)
|
||||
* ```
|
||||
ufw allow ssh
|
||||
```
|
||||
* Allow HTTP (port 80)
|
||||
* ```
|
||||
ufw allow http
|
||||
```
|
||||
* Allow HTTPS (port 443)
|
||||
* ```
|
||||
ufw allow https
|
||||
```
|
||||
* Allow port 8443
|
||||
* ```
|
||||
ufw allow 8443
|
||||
```
|
||||
* Allow port 3478 for Nextcloud Talk
|
||||
* ```
|
||||
ufw allow 3478
|
||||
```
|
||||
|
||||
* To enable the firewall, write the following:
|
||||
* ```
|
||||
ufw enable
|
||||
```
|
||||
|
||||
* To see the current security rules, write the following:
|
||||
* ```
|
||||
ufw status verbose
|
||||
```
|
||||
|
||||
You now have enabled the firewall with proper security rules for your Nextcloud deployment.
|
||||
|
||||
|
||||
|
||||
## Set the DNS Record for Your Domain
|
||||
|
||||
* Go to your domain name registrar (e.g. Namecheap)
|
||||
* In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on:
|
||||
* Type: A Record
|
||||
* Host: @
|
||||
* Value: <VM_IP_Address>
|
||||
* TTL: Automatic
|
||||
* It might take up to 30 minutes to set the DNS properly.
|
||||
* To check if the A record has been registered, you can use a common DNS checker:
|
||||
* ```
|
||||
https://dnschecker.org/#A/<domain-name>
|
||||
```
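* You can also check from the command line with `dig`, a standard DNS lookup tool (if it is installed on your machine):
* ```
dig +short <domain-name>
```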
|
||||
|
||||
|
||||
|
||||
## Install Nextcloud All-in-One
|
||||
|
||||
For the rest of the guide, we follow the steps available on the Nextcloud website's tutorial [How to Install the Nextcloud All-in-One on Linux](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/).
|
||||
|
||||
* Install Docker
|
||||
* ```
|
||||
curl -fsSL get.docker.com | sudo sh
|
||||
```
|
||||
* Install Nextcloud AIO
|
||||
* ```
|
||||
sudo docker run \
|
||||
--sig-proxy=false \
|
||||
--name nextcloud-aio-mastercontainer \
|
||||
--restart always \
|
||||
--publish 80:80 \
|
||||
--publish 8080:8080 \
|
||||
--publish 8443:8443 \
|
||||
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
|
||||
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
|
||||
nextcloud/all-in-one:latest
|
||||
```
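* Optionally, verify that the master container is running (`docker ps` is a standard Docker command, not part of the Nextcloud instructions):
* ```
sudo docker ps
```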
|
||||
* Reach the AIO interface on your browser:
|
||||
* ```
|
||||
https://<domain_name>:8443
|
||||
```
|
||||
* Example: `https://nextcloudwebsite.com:8443`
|
||||
* Take note of the Nextcloud password
|
||||
* Log in with the given password
|
||||
* Add your domain name and click `Submit`
|
||||
* Click `Start containers`
|
||||
* Click `Open your Nextcloud`
|
||||
|
||||
You can now easily access Nextcloud AIO with your domain URL!
|
||||
|
||||
|
||||
## Set BorgBackup
|
||||
|
||||
On the AIO interface, you can easily set BorgBackup. Since we are using Linux, we use the mounting directory `/mnt/backup`. Make sure to take note of the backup password.
|
||||
|
||||
## Conclusion
|
||||
|
||||
Most of the information in this guide can be found on the Nextcloud official website. We presented this guide to show another way to deploy Nextcloud on the TFGrid.
|
@@ -0,0 +1,908 @@
|
||||
<h1>Nextcloud Redundant Deployment</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Main Steps](#main-steps)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Find Nodes with the ThreeFold Explorer](#find-nodes-with-the-threefold-explorer)
|
||||
- [Set the VMs](#set-the-vms)
|
||||
- [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform)
|
||||
- [Create the Terraform Files](#create-the-terraform-files)
|
||||
- [Deploy the 3nodes with Terraform](#deploy-the-3nodes-with-terraform)
|
||||
- [SSH into the 3nodes](#ssh-into-the-3nodes)
|
||||
- [Preparing the VMs for the Deployment](#preparing-the-vms-for-the-deployment)
|
||||
- [Test the Wireguard Connection](#test-the-wireguard-connection)
|
||||
- [Create the MariaDB Database](#create-the-mariadb-database)
|
||||
- [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database)
|
||||
- [Create User with Replication Grant](#create-user-with-replication-grant)
|
||||
- [Verify the Access of the User](#verify-the-access-of-the-user)
|
||||
- [Set the VMs to Accept the MariaDB Connection](#set-the-vms-to-accept-the-mariadb-connection)
|
||||
- [TF Template Worker Server Data](#tf-template-worker-server-data)
|
||||
- [TF Template Master Server Data](#tf-template-master-server-data)
|
||||
- [Set the Nextcloud User and Database](#set-the-nextcloud-user-and-database)
|
||||
- [Install and Set GlusterFS](#install-and-set-glusterfs)
|
||||
- [Install PHP and Nextcloud](#install-php-and-nextcloud)
|
||||
- [Create a Subdomain with DuckDNS](#create-a-subdomain-with-duckdns)
|
||||
- [Worker File for DuckDNS](#worker-file-for-duckdns)
|
||||
- [Set Apache](#set-apache)
|
||||
- [Access Nextcloud on a Web Browser with the Subdomain](#access-nextcloud-on-a-web-browser-with-the-subdomain)
|
||||
- [Enable HTTPS](#enable-https)
|
||||
- [Install Certbot](#install-certbot)
|
||||
- [Set the Certbot with the DNS Domain](#set-the-certbot-with-the-dns-domain)
|
||||
- [Verify HTTPS Automatic Renewal](#verify-https-automatic-renewal)
|
||||
- [Set a Firewall](#set-a-firewall)
|
||||
- [Conclusion](#conclusion)
|
||||
- [Acknowledgements and References](#acknowledgements-and-references)
|
||||
|
||||
***
|
||||
|
||||
# Introduction
|
||||
|
||||
In this Threefold Guide, we deploy a redundant [Nextcloud](https://nextcloud.com/) instance that is continually synced on two different 3node servers running on the [Threefold Grid](https://threefold.io/).
|
||||
|
||||
We will learn how to deploy two full virtual machines (Ubuntu 22.04) with [Terraform](https://www.terraform.io/). The Terraform deployment will be composed of a virtual private network (VPN) using [Wireguard](https://www.wireguard.com/). The two VMs will thus be connected in a private and secure network. Once this is done, we will link the two VMs together by setting up a [MariaDB](https://mariadb.org/) database and using [GlusterFS](https://www.gluster.org/). Then, we will install and deploy Nextcloud. We will add a DDNS (dynamic DNS) domain to the Nextcloud deployment. It will then be possible to connect to the Nextcloud instance over the public internet. Nextcloud will be available on your computer and even your smartphone! We will also set HTTPS for the DDNS domain in order to make the Nextcloud instance as secure as possible. You are free to explore different DDNS options. In this guide, we will be using [DuckDNS](https://www.duckdns.org/) for simplicity.
|
||||
|
||||
The advantage of this redundant Nextcloud deployment is obvious: if one of the two VMs goes down, the Nextcloud instance will still be accessible, as the other VM will take the lead. Also, the two VMs will be continually synced in real-time. If the master node goes down, the data will be synced to the worker node, and the worker node will become the master node. Once the master VM goes back online, the data will be synced to the master node and the master node will retake the lead as the master node.
|
||||
|
||||
This kind of real-time backup of the database is not only limited to Nextcloud. You can use the same architecture to deploy different workloads while having the redundancy over two 3node servers. This architecture could be deployed over more than two 3nodes. Feel free to explore and let us know in the [Threefold Forum](http://forum.threefold.io/) if you come up with exciting and different variations of this kind of deployment.
|
||||
|
||||
As always, if you have questions concerning this guide, you can write a post on the [Threefold Forum](http://forum.threefold.io/).
|
||||
|
||||
Let's go!
|
||||
|
||||
|
||||
|
||||
# Main Steps
|
||||
|
||||
This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out!
|
||||
|
||||
To get an overview of the whole process, we present the main steps:
|
||||
|
||||
* Download the dependencies
|
||||
* Find two 3nodes on the TF Grid
|
||||
* Deploy and set the VMs with Terraform
|
||||
* Create a MariaDB database
|
||||
* Download and set GlusterFS
|
||||
* Install PHP and Nextcloud
|
||||
* Create a subdomain with DuckDNS
|
||||
* Set Apache
|
||||
* Access Nextcloud
|
||||
* Add HTTPS protection
|
||||
* Set a firewall
|
||||
|
||||
|
||||
|
||||
# Prerequisites
|
||||
|
||||
* [Install Terraform](../terraform_install.md)
|
||||
* [Install Wireguard](https://www.wireguard.com/install/)
|
||||
|
||||
You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the documentation for your operating system (Linux, macOS or Windows).
|
||||
|
||||
|
||||
|
||||
# Find Nodes with the ThreeFold Explorer
|
||||
|
||||
We first need to decide on which 3Nodes we will be deploying our workload.
|
||||
|
||||
We thus start by finding two 3Nodes with sufficient resources. For this current Nextcloud guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for 3Nodes with each a public IPv4 address.
|
||||
|
||||
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
|
||||
* Find two 3Nodes with suitable resources for the deployment and take note of their node IDs on the leftmost column `ID`
|
||||
* For proper understanding, we give further information on some relevant columns:
|
||||
* `ID` refers to the node ID
|
||||
* `Free Public IPs` refers to available IPv4 public IP addresses
|
||||
* `HRU` refers to HDD storage
|
||||
* `SRU` refers to SSD storage
|
||||
* `MRU` refers to RAM (memory)
|
||||
* `CRU` refers to virtual cores (vcores)
|
||||
* To quicken the process of finding proper 3Nodes, you can narrow down the search by adding filters:
|
||||
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
|
||||
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
|
||||
* `Free SRU (GB)`: 50
|
||||
* `Free MRU (GB)`: 2
|
||||
* `Total CRU (Cores)`: 1
|
||||
* `Free Public IP`: 2
|
||||
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
|
||||
|
||||
Once you've found two 3Nodes, take note of their node IDs. You will need to use those IDs when creating the Terraform files.
|
||||
|
||||
|
||||
|
||||
# Set the VMs
|
||||
## Create a Two Servers Wireguard VPN with Terraform
|
||||
|
||||
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
|
||||
|
||||
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.
|
||||
|
||||
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-nextcloud`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
|
||||
|
||||
Modify the variable files to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3nodes you will be deploying on.
|
||||
|
||||
### Create the Terraform Files
|
||||
|
||||
Open the terminal.
|
||||
|
||||
* Go to the home folder
|
||||
* ```
|
||||
cd ~
|
||||
```
|
||||
|
||||
* Create the folder `terraform` and the subfolder `deployment-nextcloud`:
|
||||
* ```
|
||||
mkdir -p terraform/deployment-nextcloud
|
||||
```
|
||||
* ```
|
||||
cd terraform/deployment-nextcloud
|
||||
```
|
||||
* Create the `main.tf` file:
|
||||
* ```
|
||||
nano main.tf
|
||||
```
|
||||
|
||||
* Copy the `main.tf` content and save the file.
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
variable "mnemonics" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "SSH_KEY" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "tfnodeid1" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "tfnodeid2" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "size" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "cpu" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "memory" {
|
||||
type = string
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = var.mnemonics
|
||||
network = "main"
|
||||
}
|
||||
|
||||
locals {
|
||||
name = "tfvm"
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = [var.tfnodeid1, var.tfnodeid2]
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "newer network"
|
||||
add_wg_access = true
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d1" {
|
||||
disks {
|
||||
name = "disk1"
|
||||
size = var.size
|
||||
}
|
||||
name = local.name
|
||||
node = var.tfnodeid1
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
|
||||
cpu = var.cpu
|
||||
mounts {
|
||||
disk_name = "disk1"
|
||||
mount_point = "/disk1"
|
||||
}
|
||||
memory = var.memory
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
publicip = true
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d2" {
|
||||
disks {
|
||||
name = "disk2"
|
||||
size = var.size
|
||||
}
|
||||
name = local.name
|
||||
node = var.tfnodeid2
|
||||
network_name = grid_network.net1.name
|
||||
|
||||
vms {
|
||||
name = "vm2"
|
||||
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
|
||||
cpu = var.cpu
|
||||
mounts {
|
||||
disk_name = "disk2"
|
||||
mount_point = "/disk2"
|
||||
}
|
||||
memory = var.memory
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
publicip = true
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net1.access_wg_config
|
||||
}
|
||||
output "node1_zmachine1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "node1_zmachine2_ip" {
|
||||
value = grid_deployment.d2.vms[0].ip
|
||||
}
|
||||
|
||||
output "ygg_ip1" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
output "ygg_ip2" {
|
||||
value = grid_deployment.d2.vms[0].ygg_ip
|
||||
}
|
||||
|
||||
output "ipv4_vm1" {
|
||||
value = grid_deployment.d1.vms[0].computedip
|
||||
}
|
||||
|
||||
output "ipv4_vm2" {
|
||||
value = grid_deployment.d2.vms[0].computedip
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
In this file, we name the first VM `vm1` and the second VM `vm2`. In this guide, we refer to `vm1` as the master VM and `vm2` as the worker VM.
|
||||
|
||||
In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. These might be different in your own deployment. If so, adjust the commands in this guide accordingly.
|
||||
|
||||
* Create the `credentials.auto.tfvars` file:
|
||||
* ```
|
||||
nano credentials.auto.tfvars
|
||||
```
|
||||
|
||||
* Copy the `credentials.auto.tfvars` content and save the file.
|
||||
* ```
|
||||
mnemonics = "..."
|
||||
SSH_KEY = "..."
|
||||
|
||||
tfnodeid1 = "..."
|
||||
tfnodeid2 = "..."
|
||||
|
||||
size = "50"
|
||||
cpu = "1"
|
||||
memory = "2048"
|
||||
```
|
||||
|
||||
Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots by the content. Obviously, you can decide to set more storage (size). The memory and CPU should be sufficient for the Nextcloud deployment with the above numbers.
|
||||
|
||||
### Deploy the 3nodes with Terraform
|
||||
|
||||
We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-nextcloud` with the main and variables files.
|
||||
|
||||
* Initialize Terraform:
|
||||
* ```
|
||||
terraform init
|
||||
```
|
||||
|
||||
* Apply Terraform to deploy the VPN:
|
||||
* ```
|
||||
terraform apply
|
||||
```
|
||||
|
||||
After the deployment, take note of the 3nodes' IPv4 addresses. You will need those addresses to SSH into the 3nodes.
|
||||
|
||||
### SSH into the 3nodes
|
||||
|
||||
* To [SSH into the 3nodes](../../getstarted/ssh_guide/ssh_guide.md), write the following:
|
||||
* ```
|
||||
ssh root@VM_IPv4_Address
|
||||
```
|
||||
|
||||
### Preparing the VMs for the Deployment
|
||||
|
||||
* Update and upgrade the system
|
||||
* ```
|
||||
apt update && apt upgrade -y && apt-get install apache2 -y
|
||||
```
|
||||
* After the upgrade, reboot the system
|
||||
* ```
|
||||
reboot
|
||||
```
|
||||
* Reconnect to the VMs
|
||||
|
||||
|
||||
|
||||
### Test the Wireguard Connection
|
||||
|
||||
We now want to ping the VMs using Wireguard. This will ensure the connection is properly established.
|
||||
|
||||
For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
|
||||
|
||||
First, we set Wireguard with the Terraform output.
|
||||
|
||||
* On your local computer, take Terraform's `wg_config` output and create a `wg.conf` file at `/etc/wireguard/wg.conf`.
|
||||
* ```
|
||||
nano /etc/wireguard/wg.conf
|
||||
```
|
||||
|
||||
* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The WireGuard configuration is the text between the `EOT` markers.
|
||||
|
||||
* Start Wireguard on your local computer:
|
||||
* ```
|
||||
wg-quick up wg
|
||||
```
|
||||
|
||||
* To stop the wireguard service:
|
||||
* ```
|
||||
wg-quick down wg
|
||||
```
|
||||
|
||||
If it doesn't work and you already did a wireguard connection with the same file from Terraform (from a previous deployment perhaps), do `wg-quick down wg`, then `wg-quick up wg`.
|
||||
This should set everything properly.
|
||||
|
||||
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct:
|
||||
* ```
|
||||
ping 10.1.3.2
|
||||
```
|
||||
* ```
|
||||
ping 10.1.4.2
|
||||
```
|
||||
|
||||
If you correctly receive the packets from the two VMs, you know that the VPN is properly set.
|
||||
|
||||
|
||||
|
||||
# Create the MariaDB Database
|
||||
|
||||
## Download MariaDB and Configure the Database
|
||||
|
||||
* Download MariaDB's server and client on both VMs
|
||||
* ```
|
||||
apt install mariadb-server mariadb-client -y
|
||||
```
|
||||
* Configure the MariaDB database
|
||||
* ```
|
||||
nano /etc/mysql/mariadb.conf.d/50-server.cnf
|
||||
```
|
||||
* Do the following changes
|
||||
* Add `#` in front of
|
||||
* `bind-address = 127.0.0.1`
|
||||
* Remove `#` in front of the following lines and replace `X` by `1` on the master VM and by `2` on the worker VM
|
||||
```
|
||||
#server-id = X
|
||||
#log_bin = /var/log/mysql/mysql-bin.log
|
||||
```
|
||||
* Below the lines shown above add the following line:
|
||||
```
|
||||
binlog_do_db = nextcloud
|
||||
```
|
||||
|
||||
* Restart MariaDB
|
||||
* ```
|
||||
systemctl restart mysql
|
||||
```
|
||||
|
||||
* Launch MariaDB
|
||||
* ```
|
||||
mysql
|
||||
```
|
||||
|
||||
## Create User with Replication Grant
|
||||
|
||||
* Do the following on both VMs
|
||||
* ```
|
||||
CREATE USER 'repuser'@'%' IDENTIFIED BY 'password';
|
||||
GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ;
|
||||
FLUSH PRIVILEGES;
|
||||
show master status\G;
|
||||
```
|
||||
|
||||
## Verify the Access of the User
|
||||
* Verify the access of the user
|
||||
```
|
||||
SELECT host FROM mysql.user WHERE User = 'repuser';
|
||||
```
|
||||
* You want to see `%` in Host
|
||||
|
||||
## Set the VMs to Accept the MariaDB Connection
|
||||
|
||||
### TF Template Worker Server Data
|
||||
|
||||
* Write the following in the worker VM
|
||||
* ```
|
||||
CHANGE MASTER TO MASTER_HOST='10.1.3.2',
|
||||
MASTER_USER='repuser',
|
||||
MASTER_PASSWORD='password',
|
||||
MASTER_LOG_FILE='mysql-bin.000001',
|
||||
MASTER_LOG_POS=328;
|
||||
```
|
||||
* ```
|
||||
start slave;
|
||||
```
|
||||
* ```
|
||||
show slave status\G;
|
||||
```
|
||||
### TF Template Master Server Data
|
||||
|
||||
* Write the following in the master VM
|
||||
* ```
|
||||
CHANGE MASTER TO MASTER_HOST='10.1.4.2',
|
||||
MASTER_USER='repuser',
|
||||
MASTER_PASSWORD='password',
|
||||
MASTER_LOG_FILE='mysql-bin.000001',
|
||||
MASTER_LOG_POS=328;
|
||||
```
|
||||
* ```
|
||||
start slave;
|
||||
```
|
||||
* ```
|
||||
show slave status\G;
|
||||
```
|
||||
|
||||
## Set the Nextcloud User and Database
|
||||
|
||||
We now set the Nextcloud database. You should choose your own username and password. The password should be the same for the master and worker VMs.
|
||||
|
||||
* On the master VM, write:
|
||||
```
|
||||
CREATE DATABASE nextcloud;
|
||||
CREATE USER 'ncuser'@'%';
|
||||
GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234';
|
||||
FLUSH PRIVILEGES;
|
||||
```
|
||||
|
||||
* On the worker VM, write:
|
||||
```
|
||||
CREATE USER 'ncuser'@'%';
|
||||
GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234';
|
||||
FLUSH PRIVILEGES;
|
||||
```
|
||||
|
||||
* To see the databases, write:
|
||||
```
|
||||
show databases;
|
||||
```
|
||||
* To see users, write:
|
||||
```
|
||||
select user from mysql.user;
|
||||
```
|
||||
* To exit MariaDB, write:
|
||||
```
|
||||
exit;
|
||||
```
|
||||
|
||||
|
||||
|
||||
# Install and Set GlusterFS
|
||||
|
||||
We will now install and set [GlusterFS](https://www.gluster.org/), a free and open source software scalable network filesystem.
|
||||
|
||||
* Install GlusterFS on both the master and worker VMs
|
||||
* ```
|
||||
echo | add-apt-repository ppa:gluster/glusterfs-7 && apt install glusterfs-server -y
|
||||
```
|
||||
* Start the GlusterFS service on both VMs
|
||||
* ```
|
||||
systemctl start glusterd.service && systemctl enable glusterd.service
|
||||
```
|
||||
* Set the master to worker probe IP on the master VM:
|
||||
* ```
|
||||
gluster peer probe 10.1.4.2
|
||||
```
|
||||
|
||||
* See the peer status on the worker VM:
|
||||
* ```
|
||||
gluster peer status
|
||||
```
|
||||
|
||||
* Set the master and worker IP address on the master VM:
|
||||
* ```
|
||||
gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force
|
||||
```
|
||||
|
||||
* Start GlusterFS on the master VM:
|
||||
* ```
|
||||
gluster volume start vol1
|
||||
```
|
||||
|
||||
* Check the status on the worker VM:
|
||||
* ```
|
||||
gluster volume status
|
||||
```
|
||||
|
||||
* Mount the server with the master IP on the master VM:
|
||||
* ```
|
||||
mount -t glusterfs 10.1.3.2:/vol1 /var/www
|
||||
```
|
||||
|
||||
* See if the mount is there on the master VM:
|
||||
* ```
|
||||
df -h
|
||||
```
|
||||
|
||||
* Mount the server with the worker IP on the worker VM:
|
||||
* ```
|
||||
mount -t glusterfs 10.1.4.2:/vol1 /var/www
|
||||
```
|
||||
|
||||
* See if the mount is there on the worker VM:
|
||||
* ```
|
||||
df -h
|
||||
```
|
||||
|
||||
We now make the mount persistent with the fstab file on both VMs.
|
||||
|
||||
* To prevent the mount from being aborted if the server reboots, write the following on both servers:
|
||||
* ```
|
||||
nano /etc/fstab
|
||||
```
|
||||
|
||||
* Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2):
|
||||
* ```
|
||||
10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
|
||||
```
|
||||
|
||||
* Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2):
|
||||
* ```
|
||||
10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
|
||||
```
|
||||
|
||||
|
||||
|
||||
# Install PHP and Nextcloud
|
||||
|
||||
* Install PHP and the PHP modules for Nextcloud on both the master and the worker:
|
||||
* ```
|
||||
apt install php libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp zip -y
|
||||
```
|
||||
|
||||
We will now install Nextcloud. This is done only on the master VM.
|
||||
|
||||
* On both the master and worker VMs, go to the folder `/var/www`:
|
||||
* ```
|
||||
cd /var/www
|
||||
```
|
||||
|
||||
* To install the latest Nextcloud version, go to the Nextcloud homepage:
|
||||
* See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/).
|
||||
|
||||
* We now download Nextcloud on the master VM.
|
||||
* ```
|
||||
wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip
|
||||
```
|
||||
|
||||
You only need to download Nextcloud on the master VM: since the two VMs share the GlusterFS volume mounted at `/var/www`, the files will also be accessible on the worker VM.
|
||||
|
||||
* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress:
|
||||
* ```
|
||||
apt install p7zip-full -y
|
||||
```
|
||||
* ```
|
||||
7z x nextcloud-27.0.1.zip -o/var/www/
|
||||
```
|
||||
|
||||
* After the extraction, check that the `nextcloud` folder is also there on the worker VM:
|
||||
* ```
|
||||
ls
|
||||
```
|
||||
|
||||
* Then, we grant permissions to the folder. Do this on both the master VM and the worker VM.
|
||||
* ```
|
||||
chown www-data:www-data /var/www/nextcloud/ -R
|
||||
```
|
||||
|
||||
|
||||
|
||||
# Create a Subdomain with DuckDNS
|
||||
|
||||
We want to create a subdomain to access Nextcloud over the public internet.
|
||||
|
||||
For this guide, we use DuckDNS to create a subdomain for our Nextcloud deployment. Note that this can be done with other services. We use DuckDNS for simplicity. We invite users to explore other methods as they see fit.
|
||||
|
||||
We create a public subdomain with DuckDNS. To set DuckDNS, you simply need to follow the steps on their website. Make sure to do this for both VMs.
|
||||
|
||||
* First, sign in on the website: [https://www.duckdns.org/](https://www.duckdns.org/).
|
||||
* Then go to [https://www.duckdns.org/install.jsp](https://www.duckdns.org/install.jsp) and follow the steps. For this guide, we use `linux cron` as the operating system.
|
||||
|
||||
Hint: make sure to save the DuckDNS folder in the home directory. Write `cd ~` before creating the folder to be sure.
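For reference, after you follow the DuckDNS `linux cron` steps, the `duck.sh` file on the master VM typically contains a single update line, and a crontab entry runs it every 5 minutes. The sketch below is illustrative only: `exampledomain` and `your-duckdns-token` are placeholders for the values provided by your DuckDNS account.

```
# ~/duckdns/duck.sh on the master VM (illustrative)
echo url="https://www.duckdns.org/update?domains=exampledomain&token=your-duckdns-token&ip=" | curl -k -o ~/duckdns/duck.log -K -
```

```
# crontab entry (added with crontab -e) that runs the script every 5 minutes
*/5 * * * * ~/duckdns/duck.sh >/dev/null 2>&1
```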
|
||||
|
||||
## Worker File for DuckDNS
|
||||
|
||||
In our current scenario, we want to make sure the master VM remains the main IP address for the DuckDNS subdomain as long as the master VM is online. To do so, we add an `if` statement to the worker VM's `duck.sh` file. The process is as follows: the worker VM pings the master VM and, if it sees that the master VM is offline, it runs the command to update the DuckDNS subdomain with the worker VM's IP address. When the master VM comes back online, it runs its own `duck.sh` file within 5 minutes and the DuckDNS subdomain is updated with the master VM's IP address again.
|
||||
|
||||
The content of the `duck.sh` file for the worker VM is the following. Make sure to replace the line `echo ...` with the line provided by DuckDNS and to replace `mastervm_IPv4_address` with the master VM's IP address.
|
||||
|
||||
```
|
||||
ping -c 2 mastervm_IPv4_address
|
||||
|
||||
if [ $? != 0 ]
|
||||
then
|
||||
|
||||
echo url="https://www.duckdns.org/update?domains=exampledomain&token=a7c4d0ad-114e-40ef-ba1d-d217904a50f2&ip=" | curl -k -o ~/duckdns/duck.log -K -
|
||||
|
||||
fi
|
||||
|
||||
```
|
||||
|
||||
Note: When the master VM goes offline, DuckDNS will change the IP address from the master's to the worker's within at most 5 minutes. Without clearing the DNS cache, your browser might have difficulties connecting to the updated IP address when reaching the URL `subdomain.duckdns.org`, so you might need to [clear your DNS cache](https://blog.hubspot.com/website/flush-dns). You can also use the [Tor browser](https://www.torproject.org/) to connect to Nextcloud: if the IP address changes, you can simply close the browser and open a new session, as the browser will automatically clear the DNS cache.
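As an illustration, on a Linux machine running systemd-resolved, the local DNS cache can usually be flushed with the command below; other operating systems and browsers have their own procedures, as described in the link above.

```
resolvectl flush-caches
```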
|
||||
|
||||
|
||||
|
||||
# Set Apache
|
||||
|
||||
We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`.
|
||||
|
||||
* On both the master and worker VMs, write the following:
|
||||
* ```
|
||||
nano /etc/apache2/sites-available/nextcloud.conf
|
||||
```
|
||||
|
||||
The file should look like this, with your own subdomain instead of `subdomain`:
|
||||
|
||||
```
|
||||
<VirtualHost *:80>
|
||||
DocumentRoot "/var/www/nextcloud"
|
||||
ServerName subdomain.duckdns.org
|
||||
ServerAlias www.subdomain.duckdns.org
|
||||
|
||||
ErrorLog ${APACHE_LOG_DIR}/nextcloud.error
|
||||
CustomLog ${APACHE_LOG_DIR}/nextcloud.access combined
|
||||
|
||||
<Directory /var/www/nextcloud/>
|
||||
Require all granted
|
||||
Options FollowSymlinks MultiViews
|
||||
AllowOverride All
|
||||
|
||||
<IfModule mod_dav.c>
|
||||
Dav off
|
||||
</IfModule>
|
||||
|
||||
SetEnv HOME /var/www/nextcloud
|
||||
SetEnv HTTP_HOME /var/www/nextcloud
|
||||
Satisfy Any
|
||||
|
||||
</Directory>
|
||||
|
||||
</VirtualHost>
|
||||
```
|
||||
|
||||
* On both the master VM and the worker VM, write the following to enable the new virtual host file and the required Apache modules:
|
||||
* ```
|
||||
a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl
|
||||
```
|
||||
|
||||
* Then, reload and restart Apache:
|
||||
* ```
|
||||
systemctl reload apache2 && systemctl restart apache2
|
||||
```
|
||||
|
||||
|
||||
|
||||
# Access Nextcloud on a Web Browser with the Subdomain
|
||||
|
||||
We now access Nextcloud over the public Internet.
|
||||
|
||||
* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain):
|
||||
* ```
|
||||
subdomain.duckdns.org
|
||||
```
|
||||
|
||||
Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser.
|
||||
|
||||
* Choose a name and a password. For this guide, we use the following:
|
||||
* ```
|
||||
ncadmin
|
||||
password1234
|
||||
```
|
||||
|
||||
* Enter the Nextcloud Database information created with MariaDB and click install:
|
||||
* ```
|
||||
Database user: ncuser
|
||||
Database password: password1234
|
||||
Database name: nextcloud
|
||||
Database location: localhost
|
||||
```
|
||||
|
||||
Nextcloud will then proceed to complete the installation.
|
||||
|
||||
We use `localhost` as the database location. You do not need to specify MariaDB's port (`3306`), as it is already configured within the database.
|
||||
|
||||
After the installation, you can now access Nextcloud. To provide further security, we want to enable HTTPS for the subdomain.
|
||||
|
||||
|
||||
|
||||
# Enable HTTPS
|
||||
|
||||
## Install Certbot
|
||||
|
||||
We will now enable HTTPS. This needs to be done on the master VM as well as the worker VM. This section can be done simultaneously on the two VMs, but make sure to do the next section, on setting the Certbot, with only one VM at a time.
|
||||
|
||||
To enable HTTPS, first install `letsencrypt` with `certbot`:
|
||||
|
||||
Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/)
|
||||
|
||||
* See if you have the latest version of snap:
|
||||
* ```
|
||||
snap install core; snap refresh core
|
||||
```
|
||||
|
||||
* Remove certbot-auto:
|
||||
* ```
|
||||
apt-get remove certbot
|
||||
```
|
||||
|
||||
* Install certbot:
|
||||
* ```
|
||||
snap install --classic certbot
|
||||
```
|
||||
|
||||
* Ensure that certbot can be run:
|
||||
* ```
|
||||
ln -s /snap/bin/certbot /usr/bin/certbot
|
||||
```
|
||||
|
||||
* Then, install certbot-apache:
|
||||
* ```
|
||||
apt install python3-certbot-apache -y
|
||||
```
|
||||
|
||||
## Set the Certbot with the DNS Domain
|
||||
|
||||
To avoid errors, set HTTPS with the master VM and power off the worker VM.
|
||||
|
||||
* To do so with a 3Node, you can simply comment out the `vms` section of the worker VM in the Terraform `main.tf` file and run `terraform apply` in the terminal.
|
||||
* Put `/*` one line above the section, and `*/` one line below the section `vms`:
|
||||
```
|
||||
/*
|
||||
vms {
|
||||
name = "vm2"
|
||||
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
|
||||
cpu = var.cpu
|
||||
mounts {
|
||||
disk_name = "disk2"
|
||||
mount_point = "/disk2"
|
||||
}
|
||||
memory = var.memory
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
publicip = true
|
||||
planetary = true
|
||||
}
|
||||
*/
|
||||
```
|
||||
* Put `#` in front of the appropriate lines, as shown below:
|
||||
```
|
||||
output "node1_zmachine1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
#output "node1_zmachine2_ip" {
|
||||
# value = grid_deployment.d2.vms[0].ip
|
||||
#}
|
||||
|
||||
output "ygg_ip1" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
#output "ygg_ip2" {
|
||||
# value = grid_deployment.d2.vms[0].ygg_ip
|
||||
#}
|
||||
|
||||
output "ipv4_vm1" {
|
||||
value = grid_deployment.d1.vms[0].computedip
|
||||
}
|
||||
|
||||
#output "ipv4_vm2" {
|
||||
# value = grid_deployment.d2.vms[0].computedip
|
||||
#}
|
||||
```
|
||||
|
||||
* To add the HTTPS protection, write the following line on the master VM with your own subdomain:
|
||||
* ```
|
||||
certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org
|
||||
```
|
||||
|
||||
* Once the HTTPS is set, you can reset the worker VM:
|
||||
* To reset the worker VM, simply remove `/*`, `*/` and `#` in the main file and run `terraform apply` again in the terminal.
|
||||
|
||||
Note: You then need to redo the same process with the worker VM. This time, make sure to set the master VM offline to avoid errors. This means that you should comment out the `vms` section of `vm1` instead of `vm2`.
|
||||
|
||||
## Verify HTTPS Automatic Renewal
|
||||
|
||||
* Make a dry run of the certbot renewal to verify that it is correctly set up.
|
||||
* ```
|
||||
certbot renew --dry-run
|
||||
```
|
||||
|
||||
You now have HTTPS security on your Nextcloud instance.
|
||||
|
||||
# Set a Firewall
|
||||
|
||||
Finally, we want to set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).
|
||||
|
||||
It should already be installed on your system. If it is not, install it with the following command:
|
||||
|
||||
```
|
||||
apt install ufw
|
||||
```
|
||||
|
||||
For our security rules, we want to allow SSH, HTTP and HTTPS.
|
||||
|
||||
We thus add the following rules:
|
||||
|
||||
|
||||
* Allow SSH (port 22)
|
||||
* ```
|
||||
ufw allow ssh
|
||||
```
|
||||
* Allow HTTP (port 80)
|
||||
* ```
|
||||
ufw allow http
|
||||
```
|
||||
* Allow HTTPS (port 443)
|
||||
* ```
|
||||
ufw allow https
|
||||
```
|
||||
|
||||
* To enable the firewall, write the following:
|
||||
* ```
|
||||
ufw enable
|
||||
```
|
||||
|
||||
* To see the current security rules, write the following:
|
||||
* ```
|
||||
ufw status verbose
|
||||
```
|
||||
|
||||
You now have enabled the firewall with proper security rules for your Nextcloud deployment.
|
||||
|
||||
|
||||
|
||||
# Conclusion
|
||||
|
||||
If everything went smoothly, you should now be able to access Nextcloud over the Internet with HTTPS security from any computer or smartphone!
|
||||
|
||||
The Nextcloud database is synced in real-time on two different 3nodes. When one 3node goes offline, the database is still synchronized on the other 3node. Once the powered-off 3node goes back online, the database is synced automatically with the node that was powered off.
|
||||
|
||||
You can now [install Nextcloud](https://nextcloud.com/install/) on your local computer. You will then be able to "use the desktop clients to keep your files synchronized between your Nextcloud server and your desktop". You can also do regular backups with Nextcloud to ensure maximum resilience of your data. Check Nextcloud's [documentation](https://docs.nextcloud.com/server/latest/admin_manual/maintenance/backup.html) for more information on this.
|
||||
|
||||
You should now have a basic understanding of the ThreeFold Grid, the ThreeFold Explorer, WireGuard, Terraform, MariaDB, GlusterFS, PHP and Nextcloud. You also know how to deploy workloads on the ThreeFold Grid with an architecture designed for redundancy. This is just the beginning. The ThreeFold Grid has nearly infinite potential when it comes to deployments, workloads, architectures and server projects. Let's see where it goes from here!
|
||||
|
||||
This Nextcloud deployment could be improved in many ways and other guides might be published in the future with enhanced functionalities. Stay tuned for more Threefold Guides. If you have ideas on how to improve this guide, please let us know. We learn best when sharing knowledge.
|
||||
|
||||
|
||||
|
||||
# Acknowledgements and References
|
||||
|
||||
A big thank you to [Scott Yeager](https://github.com/scottyeager) for his help on brainstorming, troubleshooting and creating this tutorial. This guide wouldn't have been properly done without his time and dedication. This really is a team effort!
|
||||
|
||||
The main reference for this guide is this [amazing video](https://youtu.be/ARsqxUw1ONc) by NETVN82. Many steps were modified or added to make this suitable with Wireguard and the Threefold Grid. Other configurations are possible. We invite you to explore the possibilities offered by the Threefold Grid!
|
||||
|
||||
This guide has been inspired by Weynand Kuijpers' [great tutorial](https://youtu.be/DIhfSRKAKHw) on how to deploy Nextcloud with Terraform.
|
@@ -0,0 +1,594 @@
|
||||
<h1>Nextcloud Single Deployment </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Main Steps](#main-steps)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
|
||||
- [Set the Full VM](#set-the-full-vm)
|
||||
- [Overview](#overview)
|
||||
- [Create the Terraform Files](#create-the-terraform-files)
|
||||
- [Deploy the Full VM with Terraform](#deploy-the-full-vm-with-terraform)
|
||||
- [SSH into the 3Node](#ssh-into-the-3node)
|
||||
- [Prepare the Full VM](#prepare-the-full-vm)
|
||||
- [Create the MariaDB Database](#create-the-mariadb-database)
|
||||
- [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database)
|
||||
- [Set the Nextcloud User and Database](#set-the-nextcloud-user-and-database)
|
||||
- [Install PHP and Nextcloud](#install-php-and-nextcloud)
|
||||
- [Create a Subdomain with DuckDNS](#create-a-subdomain-with-duckdns)
|
||||
- [Set Apache](#set-apache)
|
||||
- [Access Nextcloud on a Web Browser](#access-nextcloud-on-a-web-browser)
|
||||
- [Enable HTTPS](#enable-https)
|
||||
- [Install Certbot](#install-certbot)
|
||||
- [Set the Certbot with the DNS Domain](#set-the-certbot-with-the-dns-domain)
|
||||
- [Verify HTTPS Automatic Renewal](#verify-https-automatic-renewal)
|
||||
- [Set a Firewall](#set-a-firewall)
|
||||
- [Conclusion](#conclusion)
|
||||
- [Acknowledgements and References](#acknowledgements-and-references)
|
||||
|
||||
***
|
||||
|
||||
# Introduction
|
||||
|
||||
In this Threefold Guide, we deploy a [Nextcloud](https://nextcloud.com/) instance on a full VM running on the [Threefold Grid](https://threefold.io/).
|
||||
|
||||
We will learn how to deploy a full virtual machine (Ubuntu 22.04) with [Terraform](https://www.terraform.io/). We will install and deploy Nextcloud. We will add a DDNS (dynamic DNS) domain to the Nextcloud deployment. It will then be possible to connect to the Nextcloud instance over public internet. Nextcloud will be available over your computer and even your smart phone! We will also set HTTPS for the DDNS domain in order to make the Nextcloud instance as secure as possible. You are free to explore different DDNS options. In this guide, we will be using [DuckDNS](https://www.duckdns.org/) for simplicity.
|
||||
|
||||
As always, if you have questions concerning this guide, you can write a post on the [Threefold Forum](http://forum.threefold.io/).
|
||||
|
||||
Let's go!
|
||||
|
||||
|
||||
|
||||
# Main Steps
|
||||
|
||||
This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out!
|
||||
|
||||
To get an overview of the whole process, we present the main steps:
|
||||
|
||||
* Download the dependencies
|
||||
* Find a 3Node on the TF Grid
|
||||
* Deploy and set the VM with Terraform
|
||||
* Install PHP and Nextcloud
|
||||
* Create a subdomain with DuckDNS
|
||||
* Set Apache
|
||||
* Access Nextcloud
|
||||
* Add HTTPS protection
|
||||
* Set a firewall
|
||||
|
||||
|
||||
|
||||
# Prerequisites
|
||||
|
||||
- [Install Terraform](../terraform_install.md)
|
||||
|
||||
You need to properly download and install Terraform on your local computer. Simply follow the documentation for your operating system (Linux, MAC or Windows).
|
||||
|
||||
|
||||
|
||||
# Find a 3Node with the ThreeFold Explorer
|
||||
|
||||
We first need to decide on which 3Node we will be deploying our workload.
|
||||
|
||||
We thus start by finding a 3Node with sufficient resources. For this Nextcloud guide, we will be using 1 vcore, 2 GB of RAM and 50 GB of SSD storage. We are also looking for a 3Node with a public IPv4 address.
|
||||
|
||||
* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
|
||||
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
|
||||
* For proper understanding, we give further information on some relevant columns:
|
||||
* `ID` refers to the node ID
|
||||
* `Free Public IPs` refers to available IPv4 public IP addresses
|
||||
* `HRU` refers to HDD storage
|
||||
* `SRU` refers to SSD storage
|
||||
* `MRU` refers to RAM (memory)
|
||||
* `CRU` refers to virtual cores (vcores)
|
||||
* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
|
||||
* At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
|
||||
* For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Node.
|
||||
* `Free SRU (GB)`: 50
|
||||
* `Free MRU (GB)`: 2
|
||||
* `Total CRU (Cores)`: 1
|
||||
* `Free Public IP`: 2
|
||||
* Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
|
||||
|
||||
Once you've found a 3Node, take note of its node ID. You will need to use this ID when creating the Terraform files.
|
||||
|
||||
|
||||
|
||||
# Set the Full VM
|
||||
|
||||
## Overview
|
||||
|
||||
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workload.
|
||||
|
||||
To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file uses the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` file as is.
|
||||
|
||||
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-single-nextcloud`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
|
||||
|
||||
Modify the variable files to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on.
|
||||
|
||||
## Create the Terraform Files
|
||||
|
||||
Open the terminal and follow those steps.
|
||||
|
||||
* Go to the home folder
|
||||
* ```
|
||||
cd ~
|
||||
```
|
||||
|
||||
* Create the folder `terraform` and the subfolder `deployment-single-nextcloud`:
|
||||
* ```
|
||||
mkdir -p terraform/deployment-single-nextcloud
|
||||
```
|
||||
* ```
|
||||
cd terraform/deployment-single-nextcloud
|
||||
```
|
||||
* Create the `main.tf` file:
|
||||
* ```
|
||||
nano main.tf
|
||||
```
|
||||
|
||||
* Copy the `main.tf` content and save the file.
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
variable "mnemonics" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "SSH_KEY" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "tfnodeid1" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "size" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "cpu" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "memory" {
|
||||
type = string
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = var.mnemonics
|
||||
network = "main"
|
||||
}
|
||||
|
||||
locals {
|
||||
name = "tfvm"
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = [var.tfnodeid1]
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "newer network"
|
||||
add_wg_access = true
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d1" {
|
||||
disks {
|
||||
name = "disk1"
|
||||
size = var.size
|
||||
}
|
||||
name = local.name
|
||||
node = var.tfnodeid1
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
|
||||
cpu = var.cpu
|
||||
mounts {
|
||||
disk_name = "disk1"
|
||||
mount_point = "/disk1"
|
||||
}
|
||||
memory = var.memory
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
publicip = true
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net1.access_wg_config
|
||||
}
|
||||
output "node1_zmachine1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
|
||||
output "ygg_ip1" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
|
||||
output "ipv4_vm1" {
|
||||
value = grid_deployment.d1.vms[0].computedip
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
In this file, we name the full VM `vm1`.
|
||||
|
||||
* Create the `credentials.auto.tfvars` file:
|
||||
* ```
|
||||
nano credentials.auto.tfvars
|
||||
```
|
||||
|
||||
* Copy the `credentials.auto.tfvars` content and save the file.
|
||||
* ```
|
||||
mnemonics = "..."
|
||||
SSH_KEY = "..."
|
||||
|
||||
tfnodeid1 = "..."
|
||||
|
||||
size = "50"
|
||||
cpu = "1"
|
||||
memory = "2048"
|
||||
```
|
||||
|
||||
Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node. Simply replace the three dots by the appropriate content. Obviously, you can decide to set more storage (size). The memory and CPU should be sufficient for the Nextcloud deployment with the above numbers.
|
||||
|
||||
## Deploy the Full VM with Terraform
|
||||
|
||||
We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-single-nextcloud` with the main and variables files.
|
||||
|
||||
* Initialize Terraform:
|
||||
* ```
|
||||
terraform init
|
||||
```
|
||||
|
||||
* Apply Terraform to deploy the full VM:
|
||||
* ```
|
||||
terraform apply
|
||||
```
|
||||
|
||||
After deployment, take note of the 3Node's IPv4 address. You will need this address to SSH into the 3Node.
|
||||
|
||||
## SSH into the 3Node
|
||||
|
||||
* To [SSH into the 3Node](../../getstarted/ssh_guide/ssh_guide.md), write the following:
|
||||
* ```
|
||||
ssh root@VM_IPv4_Address
|
||||
```
|
||||
|
||||
## Prepare the Full VM
|
||||
|
||||
* Update and upgrade the system, and install Apache
|
||||
* ```
|
||||
apt update && apt upgrade && apt-get install apache2
|
||||
```
|
||||
* After the installation, reboot the system
|
||||
* ```
|
||||
reboot
|
||||
```
|
||||
* Reconnect to the VM
|
||||
|
||||
|
||||
|
||||
# Create the MariaDB Database
|
||||
|
||||
## Download MariaDB and Configure the Database
|
||||
|
||||
* Download MariaDB's server and client
|
||||
* ```
|
||||
apt install mariadb-server mariadb-client
|
||||
```
|
||||
* Configure the MariaDB database
|
||||
* ```
|
||||
nano /etc/mysql/mariadb.conf.d/50-server.cnf
|
||||
```
|
||||
* Make the following changes (a consolidated sketch of the resulting lines is shown at the end of this subsection):
|
||||
* Add `#` in front of
|
||||
* `bind-address = 127.0.0.1`
|
||||
* Remove `#` in front of the following lines and make sure the variable `server-id` is set to `1`
|
||||
```
|
||||
#server-id = 1
|
||||
#log_bin = /var/log/mysql/mysql-bin.log
|
||||
```
|
||||
* Below the lines shown above, add the following line:
|
||||
```
|
||||
binlog_do_db = nextcloud
|
||||
```
|
||||
|
||||
* Restart MariaDB
|
||||
* ```
|
||||
systemctl restart mysql
|
||||
```
|
||||
|
||||
* Launch MariaDB
|
||||
* ```
|
||||
mysql
|
||||
```
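For reference, here is a consolidated sketch of how the relevant lines of `/etc/mysql/mariadb.conf.d/50-server.cnf` might look after the edits above, assuming the default layout of the file (spacing and surrounding lines can differ):

```
#bind-address           = 127.0.0.1
server-id               = 1
log_bin                 = /var/log/mysql/mysql-bin.log
binlog_do_db            = nextcloud
```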
|
||||
|
||||
## Set the Nextcloud User and Database
|
||||
|
||||
We now set the Nextcloud database. You should choose your own username and password.
|
||||
|
||||
* On the full VM, write:
|
||||
```
|
||||
CREATE DATABASE nextcloud;
|
||||
CREATE USER 'ncuser'@'%';
|
||||
GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234';
|
||||
FLUSH PRIVILEGES;
|
||||
```
|
||||
|
||||
* To see the databases, write:
|
||||
```
|
||||
show databases;
|
||||
```
|
||||
* To see users, write:
|
||||
```
|
||||
select user from mysql.user;
|
||||
```
|
||||
* To exit MariaDB, write:
|
||||
```
|
||||
exit;
|
||||
```
|
||||
|
||||
|
||||
# Install PHP and Nextcloud
|
||||
|
||||
* Install PHP and the PHP modules for Nextcloud on the full VM:
|
||||
* ```
|
||||
apt install php && apt-get install libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp zip
|
||||
```
|
||||
|
||||
We will now install Nextcloud.
|
||||
|
||||
* On the full VM, go to the folder `/var/www`:
|
||||
* ```
|
||||
cd /var/www
|
||||
```
|
||||
|
||||
* To install the latest Nextcloud version, go to the Nextcloud homepage:
|
||||
* See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/).
|
||||
|
||||
* We now download Nextcloud on the full VM.
|
||||
* ```
|
||||
wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip
|
||||
```
|
||||
|
||||
* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress:
|
||||
* ```
|
||||
apt install p7zip-full
|
||||
```
|
||||
* ```
|
||||
7z x nextcloud-27.0.1.zip -o/var/www/
|
||||
```
|
||||
* Then, we grant permissions to the folder.
|
||||
* ```
|
||||
chown www-data:www-data /var/www/nextcloud/ -R
|
||||
```
|
||||
|
||||
|
||||
|
||||
# Create a Subdomain with DuckDNS
|
||||
|
||||
We want to create a subdomain to access Nextcloud over the public internet.
|
||||
|
||||
For this guide, we use DuckDNS to create a subdomain for our Nextcloud deployment. Note that this can be done with other services. We use DuckDNS for simplicity. We invite users to explore other methods as they see fit.
|
||||
|
||||
We create a public subdomain with DuckDNS. To set DuckDNS, you simply need to follow the steps on their website.
|
||||
|
||||
* First, sign in on the website: [https://www.duckdns.org/](https://www.duckdns.org/).
|
||||
* Then go to [https://www.duckdns.org/install.jsp](https://www.duckdns.org/install.jsp) and follow the steps. For this guide, we use `linux cron` as the operating system.
|
||||
|
||||
Hint: make sure to save the DuckDNS folder in the home directory. Write `cd ~` before creating the folder to be sure.
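As an illustration, the resulting `duck.sh` script and crontab entry usually look like the following sketch, where the domain and token are placeholders to replace with the values from your DuckDNS account:

```
# ~/duckdns/duck.sh (illustrative)
echo url="https://www.duckdns.org/update?domains=exampledomain&token=your-duckdns-token&ip=" | curl -k -o ~/duckdns/duck.log -K -

# crontab entry added with crontab -e
*/5 * * * * ~/duckdns/duck.sh >/dev/null 2>&1
```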
|
||||
|
||||
|
||||
|
||||
# Set Apache
|
||||
|
||||
We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`.
|
||||
|
||||
* On the full VM, write the following:
|
||||
* ```
|
||||
nano /etc/apache2/sites-available/nextcloud.conf
|
||||
```
|
||||
|
||||
The file should look like this, with your own subdomain instead of `subdomain`:
|
||||
|
||||
```
|
||||
<VirtualHost *:80>
|
||||
DocumentRoot "/var/www/nextcloud"
|
||||
ServerName subdomain.duckdns.org
|
||||
ServerAlias www.subdomain.duckdns.org
|
||||
|
||||
ErrorLog ${APACHE_LOG_DIR}/nextcloud.error
|
||||
CustomLog ${APACHE_LOG_DIR}/nextcloud.access combined
|
||||
|
||||
<Directory /var/www/nextcloud/>
|
||||
Require all granted
|
||||
Options FollowSymlinks MultiViews
|
||||
AllowOverride All
|
||||
|
||||
<IfModule mod_dav.c>
|
||||
Dav off
|
||||
</IfModule>
|
||||
|
||||
SetEnv HOME /var/www/nextcloud
|
||||
SetEnv HTTP_HOME /var/www/nextcloud
|
||||
Satisfy Any
|
||||
|
||||
</Directory>
|
||||
|
||||
</VirtualHost>
|
||||
```
|
||||
|
||||
* On the full VM, write the following to enable the new virtual host file and the required Apache modules:
|
||||
* ```
|
||||
a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl
|
||||
```
|
||||
|
||||
* Then, reload and restart Apache:
|
||||
* ```
|
||||
systemctl reload apache2 && systemctl restart apache2
|
||||
```
|
||||
|
||||
|
||||
|
||||
# Access Nextcloud on a Web Browser
|
||||
|
||||
We now access Nextcloud over the public Internet.
|
||||
|
||||
* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain):
|
||||
* ```
|
||||
subdomain.duckdns.org
|
||||
```
|
||||
|
||||
Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser.
|
||||
|
||||
* Choose a name and a password. For this guide, we use the following:
|
||||
* ```
|
||||
ncadmin
|
||||
password1234
|
||||
```
|
||||
|
||||
* Enter the Nextcloud Database information created with MariaDB and click install:
|
||||
* ```
|
||||
Database user: ncuser
|
||||
Database password: password1234
|
||||
Database name: nextcloud
|
||||
Database location: localhost
|
||||
```
|
||||
|
||||
Nextcloud will then proceed to complete the installation.
|
||||
|
||||
We use `localhost` as the database location. You do not need to specify MariaDB's port (`3306`), as it is already configured within the database.
|
||||
|
||||
After the installation, you can now access Nextcloud. To provide further security, we want to enable HTTPS for the subdomain.
|
||||
|
||||
|
||||
|
||||
# Enable HTTPS
|
||||
|
||||
## Install Certbot
|
||||
|
||||
We will now enable HTTPS on the full VM.
|
||||
|
||||
To enable HTTPS, first install `letsencrypt` with `certbot`:
|
||||
|
||||
Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/)
|
||||
|
||||
* See if you have the latest version of snap:
|
||||
* ```
|
||||
snap install core; snap refresh core
|
||||
```
|
||||
|
||||
* Remove certbot-auto:
|
||||
* ```
|
||||
apt-get remove certbot
|
||||
```
|
||||
|
||||
* Install certbot:
|
||||
* ```
|
||||
snap install --classic certbot
|
||||
```
|
||||
|
||||
* Ensure that certbot can be run:
|
||||
* ```
|
||||
ln -s /snap/bin/certbot /usr/bin/certbot
|
||||
```
|
||||
|
||||
* Then, install certbot-apache:
|
||||
* ```
|
||||
apt install python3-certbot-apache
|
||||
```
|
||||
|
||||
## Set the Certbot with the DNS Domain
|
||||
|
||||
We now set the certbot with the DNS domain.
|
||||
|
||||
* To add the HTTPS protection, write the following line on the full VM with your own subdomain:
|
||||
* ```
|
||||
certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org
|
||||
```
|
||||
|
||||
## Verify HTTPS Automatic Renewal
|
||||
|
||||
* Make a dry run of the certbot renewal to verify that it is correctly set up.
|
||||
* ```
|
||||
certbot renew --dry-run
|
||||
```
|
||||
|
||||
You now have HTTPS security on your Nextcloud instance.
|
||||
|
||||
# Set a Firewall
|
||||
|
||||
Finally, we want to set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).
|
||||
|
||||
It should already be installed on your system. If it is not, install it with the following command:
|
||||
|
||||
```
|
||||
apt install ufw
|
||||
```
|
||||
|
||||
For our security rules, we want to allow SSH, HTTP and HTTPS.
|
||||
|
||||
We thus add the following rules:
|
||||
|
||||
|
||||
* Allow SSH (port 22)
|
||||
* ```
|
||||
ufw allow ssh
|
||||
```
|
||||
* Allow HTTP (port 80)
|
||||
* ```
|
||||
ufw allow http
|
||||
```
|
||||
* Allow HTTPS (port 443)
|
||||
* ```
|
||||
ufw allow https
|
||||
```
|
||||
|
||||
* To enable the firewall, write the following:
|
||||
* ```
|
||||
ufw enable
|
||||
```
|
||||
|
||||
* To see the current security rules, write the following:
|
||||
* ```
|
||||
ufw status verbose
|
||||
```
|
||||
|
||||
You now have enabled the firewall with proper security rules for your Nextcloud deployment.
|
||||
|
||||
|
||||
|
||||
# Conclusion
|
||||
|
||||
If everything went smoothly, you should now be able to access Nextcloud over the Internet with HTTPS security from any computer or smartphone!
|
||||
|
||||
You can now [install Nextcloud](https://nextcloud.com/install/) on your local computer. You will then be able to "use the desktop clients to keep your files synchronized between your Nextcloud server and your desktop". You can also do regular backups with Nextcloud to ensure maximum resilience of your data. Check Nextcloud's [documentation](https://docs.nextcloud.com/server/latest/admin_manual/maintenance/backup.html) for more information on this.
|
||||
|
||||
You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Terraform, MariaDB, PHP and Nextcloud.
|
||||
|
||||
This Nextcloud deployment could be improved in many ways and other guides might be published in the future with enhanced functionalities. Stay tuned for more Threefold Guides. If you have ideas on how to improve this guide, please let us know. We learn best when sharing knowledge.
|
||||
|
||||
|
||||
|
||||
# Acknowledgements and References
|
||||
|
||||
A big thank you to [Scott Yeager](https://github.com/scottyeager) for his help on brainstorming, troubleshooting and creating this tutorial. This guide wouldn't have been properly done without his time and dedication. This really is a team effort!
|
||||
|
||||
This guide has been inspired by Weynand Kuijpers' [great tutorial](https://youtu.be/DIhfSRKAKHw) on how to deploy Nextcloud with Terraform.
|
||||
|
||||
This single Nextcloud instance guide is an adaptation from the [Nextcloud Redundant Deployment guide](terraform_nextcloud_redundant.md). The inspiration to make a single instance deployment guide comes from [RobertL](https://forum.threefold.io/t/threefold-guide-nextcloud-redundant-deployment-on-two-3node-servers/3915/3) on the ThreeFold Forum.
|
||||
|
||||
Thanks to everyone who helped shape this guide.
|
@@ -0,0 +1,10 @@
|
||||
<h1> Nextcloud Deployments </h1>
|
||||
|
||||
We present here different Nextcloud deployments. While this section is focused on Nextcloud, those deployment architectures can be used as templates for other kind of deployments on the TFGrid.
|
||||
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [Nextcloud All-in-One Deployment](./terraform_nextcloud_aio.md)
|
||||
- [Nextcloud Single Deployment](./terraform_nextcloud_single.md)
|
||||
- [Nextcloud Redundant Deployment](./terraform_nextcloud_redundant.md)
|
||||
- [Nextcloud 2-Node VPN Deployment](./terraform_nextcloud_vpn.md)
|
@@ -0,0 +1,343 @@
|
||||
<h1>Nextcloud 2-Node VPN Deployment</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [2-Node Terraform Deployment](#2-node-terraform-deployment)
|
||||
- [Create the Terraform Files](#create-the-terraform-files)
|
||||
- [Variables File](#variables-file)
|
||||
- [Main File](#main-file)
|
||||
- [Deploy the 2-Node VPN](#deploy-the-2-node-vpn)
|
||||
- [Nextcloud Setup](#nextcloud-setup)
|
||||
- [Nextcloud VM Prerequisites](#nextcloud-vm-prerequisites)
|
||||
- [Prepare the VMs for the Rsync Daily Backup](#prepare-the-vms-for-the-rsync-daily-backup)
|
||||
- [Create a Cron Job for the Rsync Daily Backup](#create-a-cron-job-for-the-rsync-daily-backup)
|
||||
- [Future Projects](#future-projects)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
|
||||
***
|
||||
|
||||
# Introduction
|
||||
|
||||
This guide is a proof-of-concept showing that, with two VMs connected by a WireGuard VPN, it is possible to run a Nextcloud AIO instance on the TFGrid on the first VM, set a daily backup and update on it with Borgbackup, and keep a second daily backup of that backup on the second VM. In other words, we have two virtual machines: one VM with the Nextcloud instance and the Nextcloud backup, and another VM with a backup of the Nextcloud backup.
|
||||
|
||||
This architecture leads to a higher redundancy level, since we can afford to lose one of the two VMs and still be able to retrieve the Nextcloud database. Note that to achieve this, we are creating a virtual private network (VPN) with WireGuard. This will connect the two VMs and allow for file transfers. While there are many ways to proceed, for this guide we will be using [ssh-keygen](https://linux.die.net/man/1/ssh-keygen), [Rsync](https://linux.die.net/man/1/rsync) and [Cron](https://linux.die.net/man/1/crontab).
|
||||
|
||||
Note that, in order to reduce the deployment cost, we set the minimum CPU and memory requirements for the Backup VM. We do not need high CPU and memory for this VM since it is only used for storage.
|
||||
|
||||
Note that this guide also makes use of the ThreeFold gateway. For this reason, this deployment can be set on any two 3Nodes on the TFGrid, i.e. there is no need for IPv4 on the two nodes we are deploying on, as long as we set a gateway on a gateway node.
|
||||
|
||||
For now, let's see how to achieve this redundant deployment with Rsync!
|
||||
|
||||
# 2-Node Terraform Deployment
|
||||
|
||||
For this guide, we are deploying a Nextcloud AIO instance along with a Backup VM, enabling daily backups of both VMs. The two VMs are connected by a WireGuard VPN. The deployment will be using the [Nextcloud FList](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud) available in the **tf-images** ThreeFold Tech repository.
|
||||
|
||||
## Create the Terraform Files
|
||||
|
||||
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
|
||||
|
||||
To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file uses the environment variables (e.g. **var.size** for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy with the **main.tf** file as is.
|
||||
|
||||
For this example, we will be deploying the Nextcloud instance with a ThreeFold gateway and a gateway domain. Other configurations are possible.
|
||||
|
||||
### Variables File
|
||||
|
||||
* Copy the following content and save the file under the name `credentials.auto.tfvars`:
|
||||
|
||||
```
|
||||
mnemonics = "..."
|
||||
SSH_KEY = "..."
|
||||
network = "main"
|
||||
|
||||
size_vm1 = "50"
|
||||
cpu_vm1 = "2"
|
||||
memory_vm1 = "4096"
|
||||
|
||||
size_vm2 = "50"
|
||||
cpu_vm2 = "1"
|
||||
memory_vm2 = "512"
|
||||
|
||||
gateway_id = "50"
|
||||
vm1_id = "5453"
|
||||
vm2_id = "12"
|
||||
|
||||
deployment_name = "nextcloudgatewayvpn"
|
||||
nextcloud_flist = "https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist"
|
||||
```
|
||||
|
||||
Make sure to add your own seed phrase and SSH public key. Simply replace the three dots by the appropriate content. Note that you can deploy on a different node than node 5453 for the **vm1** node. If you want to deploy on another node than node 50 for the **gateway** node, make sure that you choose a gateway node. To find a gateway node, go to the Nodes section of the Explorer on the [ThreeFold Dashboard](https://dashboard.grid.tf/) and select **Gateways (Only)**.
|
||||
|
||||
Obviously, you can decide to increase or modify the quantity for the CPU, memory and size variables. Note that we set the minimum CPU and memory parameters for the Backup VM (**vm2**). This will reduce the cost of the deployment. Since the Backup VM is only used for storage, we don't need to set the CPU and memory higher.
|
||||
|
||||
### Main File
|
||||
|
||||
* Copy the following content and save the file under the name `main.tf`:
|
||||
|
||||
```
|
||||
variable "mnemonics" {
|
||||
type = string
|
||||
default = "your mnemonics"
|
||||
}
|
||||
|
||||
variable "network" {
|
||||
type = string
|
||||
default = "main"
|
||||
}
|
||||
|
||||
variable "SSH_KEY" {
|
||||
type = string
|
||||
default = "your SSH pub key"
|
||||
}
|
||||
|
||||
variable "deployment_name" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "size_vm1" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "cpu_vm1" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "memory_vm1" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "size_vm2" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "cpu_vm2" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "memory_vm2" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "nextcloud_flist" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "gateway_id" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "vm1_id" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "vm2_id" {
|
||||
type = string
|
||||
}
|
||||
|
||||
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = var.mnemonics
|
||||
network = var.network
|
||||
}
|
||||
|
||||
data "grid_gateway_domain" "domain" {
|
||||
node = var.gateway_id
|
||||
name = var.deployment_name
|
||||
}
|
||||
|
||||
resource "grid_network" "net" {
|
||||
nodes = [var.gateway_id, var.vm1_id, var.vm2_id]
|
||||
ip_range = "10.1.0.0/16"
|
||||
name = "network"
|
||||
description = "My network"
|
||||
add_wg_access = true
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d1" {
|
||||
node = var.vm1_id
|
||||
network_name = grid_network.net.name
|
||||
|
||||
disks {
|
||||
name = "data"
|
||||
size = var.size_vm1
|
||||
}
|
||||
|
||||
vms {
|
||||
name = "vm1"
|
||||
flist = var.nextcloud_flist
|
||||
cpu = var.cpu_vm1
|
||||
memory = var.memory_vm1
|
||||
rootfs_size = 15000
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
GATEWAY = "true"
|
||||
IPV4 = "false"
|
||||
NEXTCLOUD_DOMAIN = data.grid_gateway_domain.domain.fqdn
|
||||
}
|
||||
mounts {
|
||||
disk_name = "data"
|
||||
mount_point = "/mnt/data"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_deployment" "d2" {
|
||||
disks {
|
||||
name = "disk2"
|
||||
size = var.size_vm2
|
||||
}
|
||||
node = var.vm2_id
|
||||
network_name = grid_network.net.name
|
||||
|
||||
vms {
|
||||
name = "vm2"
|
||||
flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
|
||||
cpu = var.cpu_vm2
|
||||
mounts {
|
||||
disk_name = "disk2"
|
||||
mount_point = "/disk2"
|
||||
}
|
||||
memory = var.memory_vm2
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
|
||||
resource "grid_name_proxy" "p1" {
|
||||
node = var.gateway_id
|
||||
name = data.grid_gateway_domain.domain.name
|
||||
backends = [format("http://%s:80", grid_deployment.d1.vms[0].ip)]
|
||||
network = grid_network.net.name
|
||||
tls_passthrough = false
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net.access_wg_config
|
||||
}
|
||||
|
||||
output "vm1_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
|
||||
output "vm2_ip" {
|
||||
value = grid_deployment.d2.vms[0].ip
|
||||
}
|
||||
|
||||
|
||||
output "fqdn" {
|
||||
value = data.grid_gateway_domain.domain.fqdn
|
||||
}
|
||||
```
|
||||
|
||||
## Deploy the 2-Node VPN
|
||||
|
||||
We now deploy the 2-node VPN with Terraform. Make sure that you are in the correct folder containing the main and variables files.
|
||||
|
||||
* Initialize Terraform:
|
||||
* ```
|
||||
terraform init
|
||||
```
|
||||
|
||||
* Apply Terraform to deploy Nextcloud:
|
||||
* ```
|
||||
terraform apply
|
||||
```
|
||||
|
||||
Note that, at any moment, if you want to see the information on your Terraform deployment, write the following:
|
||||
* ```
|
||||
terraform show
|
||||
```
|
||||
|
||||
# Nextcloud Setup
|
||||
|
||||
* Access Nextcloud Setup
|
||||
* Once you've deployed Nextcloud, you can access the Nextcloud Setup page by pasting on a browser the URL displayed on the line `fqdn = "..."` of the `terraform show` output. For more information on this, [read this documentation](../../../dashboard/solutions/nextcloud.md#nextcloud-setup).
|
||||
* Create a backup and set a daily backup and update
|
||||
* Make sure to create a backup with `/mnt/backup` as the mount point, and set a daily update and backup for your Nextcloud VM. For more information, [read this documentation](../../../dashboard/solutions/nextcloud.md#backups-and-updates).
|
||||
|
||||
> Note: By default, the daily Borgbackup is set at 4:00 UTC. If you change this parameter, make sure to adjust the time at which the [Rsync backup](#create-a-cron-job-for-the-rsync-daily-backup) is done.
|
||||
|
||||
# Nextcloud VM Prerequisites
|
||||
|
||||
We need to install a few things on the Nextcloud VM before going further.
|
||||
|
||||
* Update the Nextcloud VM
|
||||
* ```
|
||||
apt update
|
||||
```
|
||||
* Install ping on the Nextcloud VM if you want to test the VPN connection (Optional)
|
||||
* ```
|
||||
apt install iputils-ping -y
|
||||
```
|
||||
* Install Rsync on the Nextcloud VM
|
||||
* ```
|
||||
apt install rsync
|
||||
```
|
||||
* Install nano on the Nextcloud VM
|
||||
* ```
|
||||
apt install nano
|
||||
```
|
||||
* Install Cron on the Nextcloud VM
|
||||
* `apt install cron`
|
||||
|
||||
# Prepare the VMs for the Rsync Daily Backup
|
||||
|
||||
* Test the VPN (Optional) with [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping)
|
||||
* ```
|
||||
ping <WireGuard_VM_IP_Address>
|
||||
```
|
||||
* Generate an SSH key pair on the Backup VM
|
||||
* ```
|
||||
ssh-keygen
|
||||
```
|
||||
* Take note of the public key in the Backup VM
|
||||
* ```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Add the public key of the Backup VM in the Nextcloud VM
|
||||
* ```
|
||||
nano ~/.ssh/authorized_keys
|
||||
```
|
||||
|
||||
> Make sure to put the Backup VM SSH public key before the public key already present in the file **authorized_keys** of the Nextcloud VM.
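For illustration, with placeholder keys, the resulting **authorized_keys** file on the Nextcloud VM would look like the sketch below, with the Backup VM key on the first line:

```
ssh-rsa AAAA...backup-vm-public-key... root@backupvm
ssh-rsa AAAA...your-local-machine-public-key... you@yourlocalcomputer
```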
|
||||
|
||||
# Create a Cron Job for the Rsync Daily Backup
|
||||
|
||||
We now set a daily cron job that will make a backup between the Nextcloud VM and the Backup VM using Rsync.
|
||||
|
||||
* Open the crontab on the Backup VM
|
||||
* ```
|
||||
crontab -e
|
||||
```
|
||||
* Add the cron job at the end of the file
|
||||
* ```
|
||||
0 8 * * * rsync -avz --no-perms -O --progress --delete --log-file=/root/rsync_storage.log root@10.1.3.2:/mnt/backup/ /mnt/backup/
|
||||
```
|
||||
|
||||
> Note: By default, the Nextcloud automatic backup is set at 4:00 UTC. For this reason, we set the Rsync daily backup at 8:00 UTC.
|
||||
|
||||
> Note: To set Rsync with a script, [read this documentation](../../computer_it_basics/file_transfer.md#automate-backup-with-rsync).
|
||||
|
||||
# Future Projects
|
||||
|
||||
This concept can be expanded in many directions. We can generate a script to facilitate the process, we can set a script directly in an FList for minimal user configuration, and we can also explore MariaDB and GlusterFS instead of Rsync.
|
||||
|
||||
As a generic deployment, we can develop a weblet that makes a daily backup of any other ThreeFold Playground weblet.
|
||||
|
||||
# Questions and Feedback
|
||||
|
||||
We invite others to propose ideas and codes if they feel inspired!
|
||||
|
||||
If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.
|
@@ -0,0 +1,359 @@
|
||||
<h1>Deploy a Nomad Cluster</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [What is Nomad?](#what-is-nomad)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Create the Terraform Files](#create-the-terraform-files)
|
||||
- [Main File](#main-file)
|
||||
- [Credentials File](#credentials-file)
|
||||
- [Deploy the Nomad Cluster](#deploy-the-nomad-cluster)
|
||||
- [SSH into the Client and Server Nodes](#ssh-into-the-client-and-server-nodes)
|
||||
- [SSH with the Planetary Network](#ssh-with-the-planetary-network)
|
||||
- [SSH with WireGuard](#ssh-with-wireguard)
|
||||
- [Destroy the Nomad Deployment](#destroy-the-nomad-deployment)
|
||||
- [Conclusion](#conclusion)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
In this ThreeFold Guide, we will learn how to deploy a Nomad cluster on the TFGrid with Terraform. We cover a basic Nomad cluster with three server nodes and two client nodes. After completing this guide, you will have sufficient knowledge to build your own personalized Nomad cluster.
|
||||
|
||||
|
||||
|
||||
## What is Nomad?
|
||||
|
||||
[Nomad](https://www.nomadproject.io/) is a simple and flexible scheduler and orchestrator to deploy and manage containers and non-containerized applications across on-premises and clouds at scale.
|
||||
|
||||
In the dynamic world of cloud computing, managing and orchestrating workloads across diverse environments can be a daunting task. Nomad emerges as a powerful solution, simplifying and streamlining the deployment, scheduling, and management of applications.
|
||||
|
||||
Nomad's elegance lies in its lightweight architecture and ease of use. It operates as a single binary, minimizing resource consumption and complexity. Its intuitive user interface and straightforward configuration make it accessible to a wide range of users, from novices to experienced DevOps.
|
||||
|
||||
Nomad's versatility extends beyond its user-friendliness. It seamlessly handles a wide array of workloads, including legacy applications, microservices, and batch jobs. Its adaptability extends to diverse environments, effortlessly orchestrating workloads across on-premises infrastructure and public clouds. In short, it's Kubernetes for humans!
|
||||
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* [Install Terraform](https://developer.hashicorp.com/terraform/downloads)
|
||||
* [Install WireGuard](https://www.wireguard.com/install/)
|
||||
|
||||
You need to properly download and install Terraform and WireGuard on your local computer. Simply follow the documentation for your operating system (Linux, MAC or Windows).
|
||||
|
||||
If you are new to Terraform, feel free to read this basic [Terraform Full VM guide](../terraform_full_vm.md) to get you started.
|
||||
|
||||
|
||||
|
||||
## Create the Terraform Files
|
||||
|
||||
For this guide, we use two files to deploy with Terraform: a main file and a variables file. The variables file contains the environment variables and the main file contains the necessary information to deploy your workload.
|
||||
|
||||
To facilitate the deployment, only the environment variables file needs to be adjusted. The file `main.tf` will be using the environment variables from the variables files (e.g. `var.cpu` for the CPU parameter) and thus you do not need to change this file.
|
||||
|
||||
Of course, you can adjust the two files based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the main file as is.
|
||||
|
||||
Also note that this deployment uses both the Planetary network and WireGuard.
|
||||
|
||||
### Main File
|
||||
|
||||
We start by creating the main file for our Nomad cluster.
|
||||
|
||||
* Create a directory for your Terraform Nomad cluster
|
||||
* ```
|
||||
mkdir nomad
|
||||
```
|
||||
* ```
|
||||
cd nomad
|
||||
```
|
||||
* Create the `main.tf` file
|
||||
* ```
|
||||
nano main.tf
|
||||
```
|
||||
|
||||
* Copy the following `main.tf` template and save the file
|
||||
|
||||
```
|
||||
terraform {
|
||||
required_providers {
|
||||
grid = {
|
||||
source = "threefoldtech/grid"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
variable "mnemonics" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "SSH_KEY" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "tfnodeid" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "size" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "cpu" {
|
||||
type = string
|
||||
}
|
||||
|
||||
variable "memory" {
|
||||
type = string
|
||||
}
|
||||
|
||||
provider "grid" {
|
||||
mnemonics = var.mnemonics
|
||||
network = "main"
|
||||
}
|
||||
|
||||
locals {
|
||||
name = "nomadcluster"
|
||||
}
|
||||
|
||||
resource "grid_network" "net1" {
|
||||
name = local.name
|
||||
nodes = [var.tfnodeid]
|
||||
ip_range = "10.1.0.0/16"
|
||||
description = "nomad network"
|
||||
add_wg_access = true
|
||||
}
|
||||
resource "grid_deployment" "d1" {
|
||||
disks {
|
||||
name = "disk1"
|
||||
size = var.size
|
||||
}
|
||||
name = local.name
|
||||
node = var.tfnodeid
|
||||
network_name = grid_network.net1.name
|
||||
vms {
|
||||
name = "server1"
|
||||
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist"
|
||||
cpu = var.cpu
|
||||
memory = var.memory
|
||||
mounts {
|
||||
disk_name = "disk1"
|
||||
mount_point = "/disk1"
|
||||
}
|
||||
entrypoint = "/sbin/zinit init"
|
||||
ip = "10.1.3.2"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "server2"
|
||||
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist"
|
||||
cpu = var.cpu
|
||||
memory = var.memory
|
||||
mounts {
|
||||
disk_name = "disk1"
|
||||
mount_point = "/disk1"
|
||||
}
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
FIRST_SERVER_IP = "10.1.3.2"
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "server3"
|
||||
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist"
|
||||
cpu = var.cpu
|
||||
memory = var.memory
|
||||
mounts {
|
||||
disk_name = "disk1"
|
||||
mount_point = "/disk1"
|
||||
}
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
FIRST_SERVER_IP = "10.1.3.2"
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "client1"
|
||||
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-client-latest.flist"
|
||||
cpu = var.cpu
|
||||
memory = var.memory
|
||||
mounts {
|
||||
disk_name = "disk1"
|
||||
mount_point = "/disk1"
|
||||
}
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
FIRST_SERVER_IP = "10.1.3.2"
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
vms {
|
||||
name = "client2"
|
||||
flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-client-latest.flist"
|
||||
cpu = var.cpu
|
||||
memory = var.memory
|
||||
mounts {
|
||||
disk_name = "disk1"
|
||||
mount_point = "/disk1"
|
||||
}
|
||||
entrypoint = "/sbin/zinit init"
|
||||
env_vars = {
|
||||
SSH_KEY = var.SSH_KEY
|
||||
FIRST_SERVER_IP = "10.1.3.2"
|
||||
}
|
||||
planetary = true
|
||||
}
|
||||
}
|
||||
|
||||
output "wg_config" {
|
||||
value = grid_network.net1.access_wg_config
|
||||
}
|
||||
|
||||
output "server1_wg_ip" {
|
||||
value = grid_deployment.d1.vms[0].ip
|
||||
}
|
||||
output "server2_wg_ip" {
|
||||
value = grid_deployment.d1.vms[1].ip
|
||||
}
|
||||
output "server3_wg_ip" {
|
||||
value = grid_deployment.d1.vms[2].ip
|
||||
}
|
||||
output "client1_wg_ip" {
|
||||
value = grid_deployment.d1.vms[3].ip
|
||||
}
|
||||
output "client2_wg_ip" {
|
||||
value = grid_deployment.d1.vms[4].ip
|
||||
}
|
||||
|
||||
output "server1_planetary_ip" {
|
||||
value = grid_deployment.d1.vms[0].ygg_ip
|
||||
}
|
||||
output "server2_planetary_ip" {
|
||||
value = grid_deployment.d1.vms[1].ygg_ip
|
||||
}
|
||||
output "server3_planetary_ip" {
|
||||
value = grid_deployment.d1.vms[2].ygg_ip
|
||||
}
|
||||
output "client1_planetary_ip" {
|
||||
value = grid_deployment.d1.vms[3].ygg_ip
|
||||
}
|
||||
output "client2_planetary_ip" {
|
||||
value = grid_deployment.d1.vms[4].ygg_ip
|
||||
}
|
||||
```
|
||||
|
||||
### Credentials File
|
||||
|
||||
We create a credentials file that will contain the environment variables. This file should be in the same directory as the main file.
|
||||
|
||||
* Create the `credentials.auto.tfvars` file
|
||||
* ```
|
||||
nano credentials.auto.tfvars
|
||||
```
|
||||
|
||||
* Copy the `credentials.auto.tfvars` content and save the file
|
||||
* ```
|
||||
mnemonics = "..."
|
||||
SSH_KEY = "..."
|
||||
|
||||
tfnodeid = "..."
|
||||
|
||||
size = "50"
|
||||
cpu = "2"
|
||||
memory = "1024"
|
||||
```
|
||||
|
||||
Make sure to replace the three dots by your own information for `mnemonics` and `SSH_KEY`. You will also need to find a suitable node for your deployment and set its node ID (`tfnodeid`). Feel free to adjust the parameters `size`, `cpu` and `memory` if needed.
|
||||
|
||||
|
||||
|
||||
## Deploy the Nomad Cluster
|
||||
|
||||
We now deploy the Nomad Cluster with Terraform. Make sure that you are in the directory containing the `main.tf` file.
|
||||
|
||||
* Initialize Terraform
|
||||
* ```
|
||||
terraform init
|
||||
```
|
||||
|
||||
* Apply Terraform to deploy the Nomad cluster
|
||||
* ```
|
||||
terraform apply
|
||||
```
|
||||
|
||||
|
||||
|
||||
## SSH into the Client and Server Nodes

You can now SSH into the client and server nodes using both the Planetary network and WireGuard.

Note that the IP addresses will be shown under `Outputs` after running the command `terraform apply`, with `planetary_ip` for the Planetary network and `wg_ip` for WireGuard.

### SSH with the Planetary Network

* To [SSH with the Planetary network](../../getstarted/ssh_guide/ssh_openssh.md), write the following with the proper IP address
* ```
  ssh root@planetary_ip
  ```

You now have SSH access over the Planetary network to the client and server nodes of your Nomad cluster.

### SSH with WireGuard

To SSH with WireGuard, we first need to set the proper WireGuard configurations.

* Create a file named `wg.conf` in the directory `/etc/wireguard`
* ```
  nano /etc/wireguard/wg.conf
  ```

* Paste the content provided by the Terraform deployment in the file `wg.conf` and save it.
* Note that you can use `terraform show` to see the Terraform output. The WireGuard configuration (`wg_config`) stands between the two `EOT` markers. You can also let Terraform write the file for you, as shown in the sketch after this list.

* Start WireGuard on your local computer
* ```
  wg-quick up wg
  ```
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the WireGuard IP of a node to make sure the connection is correct
* ```
  ping wg_ip
  ```

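If you prefer not to copy the configuration by hand, the snippet below is a minimal sketch that writes it to a file at apply time. It assumes you are willing to use the `hashicorp/local` provider (the `local_file` resource) in this deployment and that storing the configuration on disk is acceptable for your setup.

```terraform
# Sketch: persist the WireGuard configuration generated by the network resource.
# Assumes the hashicorp/local provider; adjust the filename to your system.
resource "local_file" "wg_config" {
  content  = grid_network.net1.access_wg_config
  filename = "${path.module}/wg.conf"
}
```

You would then copy the generated `wg.conf` into `/etc/wireguard/` before running `wg-quick up wg`.
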
We are now ready to SSH into the client and server nodes with WireGuard.

* To SSH with WireGuard, write the following with the proper IP address:
* ```
  ssh root@wg_ip
  ```

You now have SSH access over WireGuard to the client and server nodes of your Nomad cluster. For more information on connecting with WireGuard, read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).



## Destroy the Nomad Deployment

If you want to destroy the Nomad deployment, write the following in the terminal:

* ```
  terraform destroy
  ```
* Then write `yes` to confirm.

Make sure that you are in the corresponding Terraform folder when writing this command.



## Conclusion

You now have the basic knowledge to deploy a Nomad cluster on the TFGrid. Feel free to explore the many possibilities that come with Nomad.

You can now use a Nomad cluster to deploy your workloads. For more information on this, read this documentation on [how to deploy a Redis workload on the Nomad cluster](https://developer.hashicorp.com/nomad/tutorials/get-started/gs-deploy-job).

If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
@@ -0,0 +1,53 @@
<h1> Terraform Provider </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Example](#example)
- [Environment Variables](#environment-variables)
- [Remarks](#remarks)

***

## Introduction

We present the basics of the Terraform Provider.

## Example

``` terraform
terraform {
  required_providers {
    grid = {
      source = "threefoldtech/grid"
    }
  }
}
provider "grid" {
  mnemonics     = "FROM THE CREATE TWIN STEP"
  network       = "dev"                          # grid network, one of: dev, qa, test, main
  key_type      = "sr25519"                      # key type registered on substrate (ed25519 or sr25519)
  relay_url     = "wss://relay.dev.grid.tf"      # example relay URL
  rmb_timeout   = 120                            # timeout duration in seconds for rmb calls
  substrate_url = "wss://tfchain.dev.grid.tf/ws" # example substrate URL
}
```

## Environment Variables

The provider settings are also recognized as environment variables:

- `MNEMONICS`
- `NETWORK`
- `SUBSTRATE_URL`
- `KEY_TYPE`
- `RELAY_URL`
- `RMB_TIMEOUT`

The `*_URL` variables can be used to override the default URLs associated with the specified network.
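For instance, if those variables are exported in your shell, the provider block itself can stay empty. A minimal sketch (the values are then read from `MNEMONICS`, `NETWORK`, and the other variables listed above):

```terraform
# Sketch: all settings supplied through environment variables,
# e.g. MNEMONICS, NETWORK, RELAY_URL, SUBSTRATE_URL, KEY_TYPE, RMB_TIMEOUT.
provider "grid" {
}
```
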
## Remarks

- The Grid Terraform provider is hosted on the Terraform Registry [here](https://registry.terraform.io/providers/threefoldtech/grid/latest/docs?pollNotifications=true)
- All provider input variables and their descriptions can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/index.md)
- Capitalized environment variables can be used instead of writing them in the provider (e.g. MNEMONICS)
@@ -0,0 +1,119 @@
<h1> Terraform and Provisioner </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Example](#example)
- [Params docs](#params-docs)
  - [Requirements](#requirements)
  - [Connection Block](#connection-block)
  - [Provisioner Block](#provisioner-block)
  - [More Info](#more-info)

***

## Introduction

In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/external_provisioner/remote-exec_hello-world/main.tf), we will see how to deploy a VM on the TFGrid and apply provisioner commands on it.

## Example

```terraform
terraform {
  required_providers {
    grid = {
      source = "threefoldtech/grid"
    }
  }
}

provider "grid" {
}

locals {
  name = "myvm"
}

resource "grid_network" "net1" {
  nodes         = [1]
  ip_range      = "10.1.0.0/24"
  name          = local.name
  description   = "newer network"
  add_wg_access = true
}

resource "grid_deployment" "d1" {
  name         = local.name
  node         = 1
  network_name = grid_network.net1.name
  vms {
    name       = "vm1"
    flist      = "https://hub.grid.tf/tf-official-apps/grid3_ubuntu20.04-latest.flist"
    entrypoint = "/init.sh"
    cpu        = 2
    memory     = 1024
    env_vars = {
      SSH_KEY = file(pathexpand("~/.ssh/id_rsa.pub"))
    }
    planetary = true
  }
  connection {
    type  = "ssh"
    user  = "root"
    agent = true
    host  = grid_deployment.d1.vms[0].ygg_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo 'Hello world!' > /root/readme.txt"
    ]
  }
}
```

## Params docs

### Requirements

- the machine should have `ssh server` running
- the machine should have `scp` installed

### Connection Block

- defines how we will connect to the deployed machine

``` terraform
connection {
  type  = "ssh"
  user  = "root"
  agent = true
  host  = grid_deployment.d1.vms[0].ygg_ip
}
```

- `type`: defines the service used to connect to the machine
- `user`: the user to connect as
- `agent`: if set, the provisioner will use the default key to connect to the remote machine
- `host`: the IP/host of the remote machine

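If you do not run an SSH agent locally, the connection can point at a key file instead. A minimal sketch, assuming your private key lives at `~/.ssh/id_rsa`:

```terraform
connection {
  type        = "ssh"
  user        = "root"
  private_key = file(pathexpand("~/.ssh/id_rsa")) # used instead of agent = true
  host        = grid_deployment.d1.vms[0].ygg_ip
}
```
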
### Provisioner Block

- defines the actual provisioner behaviour

``` terraform
provisioner "remote-exec" {
  inline = [
    "echo 'Hello world!' > /root/readme.txt"
  ]
}
```

- `remote-exec`: the provisioner type we want to use; it can be remote, local or another type
- `inline`: a list of command strings. They are executed in the order they are provided. This cannot be combined with `script` or `scripts`.
- `script`: a path (relative or absolute) to a local script that will be copied to the remote resource and then executed (see the sketch after this list). This cannot be combined with `inline` or `scripts`.
- `scripts`: a list of paths (relative or absolute) to local scripts that will be copied to the remote resource and then executed. They are executed in the order they are provided. This cannot be combined with `inline` or `script`.

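As a quick illustration of the `script` parameter, here is a sketch of the same provisioner running a local file instead of inline commands. The file name `setup.sh` is only an assumption; it must exist next to your `main.tf`:

```terraform
provisioner "remote-exec" {
  # copies ./setup.sh to the VM and executes it there
  script = "./setup.sh"
}
```
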
### More Info

A complete list of provisioner parameters can be found [here](https://www.terraform.io/language/resources/provisioners/remote-exec).
@@ -0,0 +1,55 @@
<h1> Updating </h1>

<h2> Table of Contents </h2>

- [Introduction](#introduction)
- [Updating with Terraform](#updating-with-terraform)
- [Adjustments](#adjustments)

***

## Introduction

We present ways to update deployments using Terraform. Note that this is not fully supported.

Some of the updates work, but the code is not finished; use at your own risk.

## Updating with Terraform

Updates are triggered by changing the deployment's fields.
So, for example, if you have the following network resource:

```terraform
resource "grid_network" "net" {
  nodes       = [2]
  ip_range    = "10.1.0.0/16"
  name        = "network"
  description = "newer network"
}
```

And you then decide to add a node:

```terraform
resource "grid_network" "net" {
  nodes       = [2, 4]
  ip_range    = "10.1.0.0/16"
  name        = "network"
  description = "newer network"
}
```

After calling `terraform apply`, the provider does the following:

- Add node 4 to the network.
- Update the version of the workload.
- Update the version of the deployment.
- Update the hash in the contract (the contract ID will stay the same).

## Adjustments

There are workloads that don't support in-place updates (e.g. Zmachines). To change them, there are a couple of options (all of them perform a destroy/create, so data can be lost):

1. `terraform taint grid_deployment.d1` (the next apply will destroy ALL workloads within grid_deployment.d1 and create a new deployment)
2. `terraform destroy --target grid_deployment.d1 && terraform apply --target grid_deployment.d1` (same as above)
3. Remove the vm, execute a `terraform apply`, then add the vm back with the new config (this performs two updates but keeps neighboring workloads inside the same deployment intact). CAUTION: this only works if the vm is the last one in the list of vms; otherwise undesired behavior will occur. A sketch of this option follows below.
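
To make option 3 concrete, here is a hypothetical two-VM deployment (the names, flists and sizes are illustrative, not taken from a real setup):

```terraform
resource "grid_deployment" "d1" {
  name         = "example"
  node         = 2
  network_name = grid_network.net.name

  vms {
    name       = "vm1"
    flist      = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
    cpu        = 1
    memory     = 1024
    entrypoint = "/sbin/zinit init"
  }

  # "vm2" is the LAST vms block, so it can be changed with option 3:
  # delete this block, run `terraform apply`, then add it back with the
  # new configuration and apply again.
  vms {
    name       = "vm2"
    flist      = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
    cpu        = 2
    memory     = 2048
    entrypoint = "/sbin/zinit init"
  }
}
```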
@@ -0,0 +1,280 @@
<h1>SSH Into a 3Node with Wireguard</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
- [Create the Terraform Files](#create-the-terraform-files)
- [Deploy the Micro VM with Terraform](#deploy-the-micro-vm-with-terraform)
- [Set the Wireguard Connection](#set-the-wireguard-connection)
- [SSH into the 3Node with Wireguard](#ssh-into-the-3node-with-wireguard)
- [Destroy the Terraform Deployment](#destroy-the-terraform-deployment)
- [Conclusion](#conclusion)

***

## Introduction

In this ThreeFold Guide, we show how simple it is to deploy a micro VM on the ThreeFold Grid with Terraform and to make an SSH connection with Wireguard.



## Prerequisites

* [Install Terraform](../terraform_install.md)
* [Install Wireguard](https://www.wireguard.com/install/)

You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the linked documentation depending on your operating system (Linux, macOS or Windows).

## Find a 3Node with the ThreeFold Explorer

We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid.

We show here how to find a suitable 3Node using the ThreeFold Explorer.

* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) to find a 3Node
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
* For proper understanding, we give further information on some relevant columns:
  * `ID` refers to the node ID
  * `Free Public IPs` refers to available IPv4 public IP addresses
  * `HRU` refers to HDD storage
  * `SRU` refers to SSD storage
  * `MRU` refers to RAM (memory)
  * `CRU` refers to virtual cores (vcores)
* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
  * At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
  * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. Here's what would work for our current situation:
    * `Free SRU (GB)`: 15
    * `Free MRU (GB)`: 1
    * `Total CRU (Cores)`: 1

Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.

## Create the Terraform Files

For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.

To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file reads those environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file.

Of course, you can adjust the deployments based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.

On your local computer, create a new folder named `terraform` and a subfolder called `deployment-wg-ssh`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.

Modify the variable file to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on.

Now let's create the Terraform files.

* Open the terminal and go to the home directory
* ```
  cd ~
  ```

* Create the folder `terraform` and the subfolder `deployment-wg-ssh`:
* ```
  mkdir -p terraform/deployment-wg-ssh
  ```
* ```
  cd terraform/deployment-wg-ssh
  ```
* Create the `main.tf` file:
* ```
  nano main.tf
  ```

* Copy the `main.tf` content and save the file.

```
terraform {
  required_providers {
    grid = {
      source = "threefoldtech/grid"
    }
  }
}

variable "mnemonics" {
  type = string
}

variable "SSH_KEY" {
  type = string
}

variable "tfnodeid1" {
  type = string
}

variable "size" {
  type = string
}

variable "cpu" {
  type = string
}

variable "memory" {
  type = string
}

provider "grid" {
  mnemonics = var.mnemonics
  network   = "main"
}

locals {
  name = "tfvm"
}

resource "grid_network" "net1" {
  name          = local.name
  nodes         = [var.tfnodeid1]
  ip_range      = "10.1.0.0/16"
  description   = "newer network"
  add_wg_access = true
}

resource "grid_deployment" "d1" {
  disks {
    name = "disk1"
    size = var.size
  }
  name         = local.name
  node         = var.tfnodeid1
  network_name = grid_network.net1.name
  vms {
    name  = "vm1"
    flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
    cpu   = var.cpu
    mounts {
      disk_name   = "disk1"
      mount_point = "/disk1"
    }
    memory     = var.memory
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = var.SSH_KEY
    }
  }
}

output "wg_config" {
  value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
  value = grid_deployment.d1.vms[0].ip
}
```

* Create the `credentials.auto.tfvars` file:
* ```
  nano credentials.auto.tfvars
  ```

* Copy the `credentials.auto.tfvars` content, set the node ID as well as your mnemonics and SSH public key, then save the file.
* ```
  mnemonics = "..."
  SSH_KEY = "..."

  tfnodeid1 = "..."

  size = "15"
  cpu = "1"
  memory = "512"
  ```

Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node server you wish to deploy on. Simply replace the three dots with the proper content.

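If you deploy often with the same specifications, you can also give these variables defaults directly in `main.tf`, so that `credentials.auto.tfvars` only has to carry the secrets and the node ID. A minimal sketch (the default values simply mirror the minimum specifications used in this guide):

```terraform
variable "size" {
  type    = string
  default = "15" # disk size in GB
}

variable "cpu" {
  type    = string
  default = "1" # number of vcores
}

variable "memory" {
  type    = string
  default = "512" # RAM in MB
}
```
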
## Deploy the Micro VM with Terraform

We now deploy the micro VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-ssh` containing the main and variables files.

* Initialize Terraform:
* ```
  terraform init
  ```

* Apply Terraform to deploy the micro VM:
* ```
  terraform apply
  ```
* Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment.

Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
* ```
  terraform show
  ```



## Set the Wireguard Connection

To set the Wireguard connection, on your local computer, you will need to take the Terraform `wg_config` output and create a `wg.conf` file at `/usr/local/etc/wireguard/wg.conf`. Note that the Terraform output starts and ends with `EOT`.

For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).

* Create a file named `wg.conf` in the directory `/usr/local/etc/wireguard`:
* ```
  nano /usr/local/etc/wireguard/wg.conf
  ```
* Paste the content between the two `EOT` markers displayed after you run `terraform apply`.

* Start WireGuard:
* ```
  wg-quick up wg
  ```

If you want to stop the Wireguard service, write the following on your terminal:

* ```
  wg-quick down wg
  ```

> Note: If it doesn't work and you already did a Wireguard connection with the same file from Terraform (from a previous deployment), write on the terminal `wg-quick down wg`, then `wg-quick up wg`.

As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VM to make sure the Wireguard connection is correct. Make sure to replace `vm_wg_ip` with the proper IP address:
* ```
  ping vm_wg_ip
  ```
* Note that, with this Terraform deployment, the Wireguard IP address of the micro VM is given by the output `node1_zmachine1_ip`


## SSH into the 3Node with Wireguard

To SSH into the 3Node with Wireguard, simply write the following in the terminal with the proper Wireguard IP address:

```
ssh root@vm_wg_ip
```

You now have SSH access to the VM over Wireguard.



## Destroy the Terraform Deployment

If you want to destroy the Terraform deployment, write the following in the terminal:

* ```
  terraform destroy
  ```
* Then write `yes` to confirm.

Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-ssh`.



## Conclusion

In this simple ThreeFold Guide, you learned how to SSH into a 3Node with Wireguard and Terraform. Feel free to explore further Terraform and Wireguard.

As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
@@ -0,0 +1,345 @@
<h1>Deploy Micro VMs and Set a Wireguard VPN</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
- [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform)
- [Deploy the Micro VMs with Terraform](#deploy-the-micro-vms-with-terraform)
- [Set the Wireguard Connection](#set-the-wireguard-connection)
- [SSH into the 3Node](#ssh-into-the-3node)
- [Destroy the Terraform Deployment](#destroy-the-terraform-deployment)
- [Conclusion](#conclusion)

***

## Introduction

In this ThreeFold Guide, we will learn how to deploy two micro virtual machines (Ubuntu 22.04) with Terraform. The Terraform deployment will be composed of a virtual private network (VPN) using Wireguard. The two VMs will thus be connected in a private and secure network.

Note that this concept can be extended with more than two micro VMs. Once you understand this guide, you will be able to adjust and deploy your own personalized Wireguard VPN on the ThreeFold Grid. A sketch of a third VM is given after the `main.tf` example below.


## Prerequisites

* [Install Terraform](../terraform_install.md)
* [Install Wireguard](https://www.wireguard.com/install/)

You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the linked documentation depending on your operating system (Linux, macOS or Windows).

## Find a 3Node with the ThreeFold Explorer

We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address.

We show here how to find a suitable 3Node using the ThreeFold Explorer.

* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
* For proper understanding, we give further information on some relevant columns:
  * `ID` refers to the node ID
  * `Free Public IPs` refers to available IPv4 public IP addresses
  * `HRU` refers to HDD storage
  * `SRU` refers to SSD storage
  * `MRU` refers to RAM (memory)
  * `CRU` refers to virtual cores (vcores)
* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
  * At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
  * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
    * `Free SRU (GB)`: 15
    * `Free MRU (GB)`: 1
    * `Total CRU (Cores)`: 1
    * `Free Public IP`: 2
  * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.

Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.

## Create a Two Servers Wireguard VPN with Terraform

For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.

To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file reads those environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file.

Of course, you can adjust the deployments based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.

On your local computer, create a new folder named `terraform` and a subfolder called `deployment-wg-vpn`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.

Modify the variable file to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3Nodes you will be deploying on.

Now let's create the Terraform files.

* Open the terminal and go to the home directory
* ```
  cd ~
  ```

* Create the folder `terraform` and the subfolder `deployment-wg-vpn`:
* ```
  mkdir -p terraform && cd $_
  ```
* ```
  mkdir deployment-wg-vpn && cd $_
  ```
* Create the `main.tf` file:
* ```
  nano main.tf
  ```

* Copy the `main.tf` content and save the file.

```
terraform {
  required_providers {
    grid = {
      source = "threefoldtech/grid"
    }
  }
}

variable "mnemonics" {
  type = string
}

variable "SSH_KEY" {
  type = string
}

variable "tfnodeid1" {
  type = string
}

variable "tfnodeid2" {
  type = string
}

variable "size" {
  type = string
}

variable "cpu" {
  type = string
}

variable "memory" {
  type = string
}

provider "grid" {
  mnemonics = var.mnemonics
  network   = "main"
}

locals {
  name = "tfvm"
}

resource "grid_network" "net1" {
  name          = local.name
  nodes         = [var.tfnodeid1, var.tfnodeid2]
  ip_range      = "10.1.0.0/16"
  description   = "newer network"
  add_wg_access = true
}

resource "grid_deployment" "d1" {
  disks {
    name = "disk1"
    size = var.size
  }
  name         = local.name
  node         = var.tfnodeid1
  network_name = grid_network.net1.name
  vms {
    name  = "vm1"
    flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
    cpu   = var.cpu
    mounts {
      disk_name   = "disk1"
      mount_point = "/disk1"
    }
    memory     = var.memory
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = var.SSH_KEY
    }
    publicip  = true
    planetary = true
  }
}

resource "grid_deployment" "d2" {
  disks {
    name = "disk2"
    size = var.size
  }
  name         = local.name
  node         = var.tfnodeid2
  network_name = grid_network.net1.name

  vms {
    name  = "vm2"
    flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
    cpu   = var.cpu
    mounts {
      disk_name   = "disk2"
      mount_point = "/disk2"
    }
    memory     = var.memory
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = var.SSH_KEY
    }
    publicip  = true
    planetary = true
  }
}

output "wg_config" {
  value = grid_network.net1.access_wg_config
}
output "node1_zmachine1_ip" {
  value = grid_deployment.d1.vms[0].ip
}
output "node1_zmachine2_ip" {
  value = grid_deployment.d2.vms[0].ip
}

output "ygg_ip1" {
  value = grid_deployment.d1.vms[0].ygg_ip
}
output "ygg_ip2" {
  value = grid_deployment.d2.vms[0].ygg_ip
}

output "ipv4_vm1" {
  value = grid_deployment.d1.vms[0].computedip
}

output "ipv4_vm2" {
  value = grid_deployment.d2.vms[0].computedip
}
```

In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. Change the code in this guide accordingly.

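As mentioned in the introduction, the same pattern extends to more than two machines. Below is a minimal sketch of a third deployment; it assumes you declare an extra `tfnodeid3` variable like the others, add it to `grid_network.net1.nodes`, and add matching outputs:

```terraform
resource "grid_deployment" "d3" {
  name         = local.name
  node         = var.tfnodeid3 # assumed extra variable, set in credentials.auto.tfvars
  network_name = grid_network.net1.name

  vms {
    name       = "vm3"
    flist      = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
    cpu        = var.cpu
    memory     = var.memory
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = var.SSH_KEY
    }
    planetary = true
  }
}
```
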
* Create the `credentials.auto.tfvars` file:
* ```
  nano credentials.auto.tfvars
  ```

* Copy the `credentials.auto.tfvars` content and save the file.
* ```
  mnemonics = "..."
  SSH_KEY = "..."

  tfnodeid1 = "..."
  tfnodeid2 = "..."

  size = "15"
  cpu = "1"
  memory = "512"
  ```

Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots with the proper content.

Set the parameters for your VMs as you wish. The two servers will have the same parameters. For this example, we use the minimum parameters.

## Deploy the Micro VMs with Terraform

We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-vpn` containing the main and variables files.

* Initialize Terraform by writing the following in the terminal:
* ```
  terraform init
  ```
* Apply the Terraform deployment:
* ```
  terraform apply
  ```
* Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment.

Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
* ```
  terraform show
  ```



## Set the Wireguard Connection

To set the Wireguard connection, on your local computer, you will need to take the Terraform `wg_config` output and create a `wg.conf` file at `/usr/local/etc/wireguard/wg.conf`. Note that the Terraform output starts and ends with `EOT`.

For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).

* Create a file named `wg.conf` in the directory `/usr/local/etc/wireguard`:
* ```
  nano /usr/local/etc/wireguard/wg.conf
  ```
* Paste the content between the two `EOT` markers displayed after you run `terraform apply`.

* Start WireGuard:
* ```
  wg-quick up wg
  ```

If you want to stop the Wireguard service, write the following on your terminal:

* ```
  wg-quick down wg
  ```

> Note: If it doesn't work and you already did a Wireguard connection with the same file from Terraform (from a previous deployment), write on the terminal `wg-quick down wg`, then `wg-quick up wg`.

As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VMs to make sure the Wireguard connection is correct. Make sure to replace `wg_vm_ip` with the proper IP address for each VM:

* ```
  ping wg_vm_ip
  ```



## SSH into the 3Node

You can now SSH into the 3Nodes with either Wireguard or IPv4.

To SSH with Wireguard, write the following with the proper IP address for each 3Node:

```
ssh root@vm_wg_ip
```

To SSH with IPv4, write the following for each 3Node:

```
ssh root@vm_IPv4
```

You now have SSH access to the VMs over Wireguard and IPv4.



## Destroy the Terraform Deployment

If you want to destroy the Terraform deployment, write the following in the terminal:

* ```
  terraform destroy
  ```
* Then write `yes` to confirm.

Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-vpn`.



## Conclusion

In this ThreeFold Guide, we learned how easy it is to deploy a VPN with Wireguard and Terraform. You can adjust the parameters how you like and explore different possibilities.

As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.