manual: fixing parsing #70

Merged
mik-tf merged 34 commits from development_manual2 into development 2024-05-14 17:27:45 +00:00
8 changed files with 209 additions and 210 deletions
Showing only changes of commit 9d539a31f1

View File

@ -102,19 +102,19 @@ Modify the variable files to take into account your own seed phrase and SSH keys.
Open the terminal.
* Go to the home folder
```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-synced-db`:
```
mkdir -p terraform/deployment-synced-db
```
```
cd terraform/deployment-synced-db
```
* Create the `main.tf` file:
```
nano main.tf
```
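The folder-creation steps above can be condensed into one idempotent snippet (a sketch; safe to re-run):

```
# Create terraform/ and its subfolder in one go, then enter it.
DIR="$HOME/terraform/deployment-synced-db"
mkdir -p "$DIR"
cd "$DIR"
# nano main.tf also creates the file on save; touch just pre-creates it.
touch main.tf
```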
@ -259,12 +259,12 @@ In this file, we name the first VM as `vm1` and the second VM as `vm2`. For ease
In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. If so, change the code in this guide accordingly.
* Create the `credentials.auto.tfvars` file:
```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
```
mnemonics = "..."
SSH_KEY = "..."
@ -285,19 +285,19 @@ Make sure to add your own seed phrase and SSH public key. You will also need to
We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-synced-db` with the main and variables files.
* Initialize Terraform:
```
terraform init
```
* Apply Terraform to deploy the VPN:
```
terraform apply
```
After deployment, take note of the 3Nodes' IPv4 addresses. You will need those addresses to SSH into the 3Nodes.
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
```
terraform show
```
@ -306,7 +306,7 @@ Note that, at any moment, if you want to see the information on your Terraform d
### SSH into the 3Nodes
* To [SSH into the 3Nodes](ssh_guide.md), write the following while making sure to set the proper IP address for each VM:
```
ssh root@3node_IPv4_Address
```
@ -315,11 +315,11 @@ Note that, at any moment, if you want to see the information on your Terraform d
### Preparing the VMs for the Deployment
* Update and upgrade the system
```
apt update && apt upgrade -y && apt-get install apache2 -y
```
* After the upgrade, you might need to reboot the system for the changes to take full effect
```
reboot
```
* Reconnect to the VMs
@ -333,19 +333,19 @@ We now want to ping the VMs using Wireguard. This will ensure the connection is
First, we set Wireguard with the Terraform output.
* On your local computer, take the Terraform's `wg_config` output and create a `wg.conf` file in the directory `/usr/local/etc/wireguard/wg.conf`.
```
nano /usr/local/etc/wireguard/wg.conf
```
* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The WireGuard output stands between the two `EOT` markers.
* Start the WireGuard on your local computer:
```
wg-quick up wg
```
* To stop the WireGuard service:
```
wg-quick down wg
```
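For reference, the `wg.conf` that Terraform produces follows the standard WireGuard INI layout, roughly like this sketch (all keys, IPs and ports below are placeholders, not values from this deployment):

```
[Interface]
Address = <your_wireguard_ip>/32
PrivateKey = <private_key>

[Peer]
PublicKey = <peer_public_key>
AllowedIPs = 10.1.0.0/16
Endpoint = <node_public_ip>:<port>
PersistentKeepalive = 25
```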
@ -353,10 +353,10 @@ First, we set Wireguard with the Terraform output.
This should set everything properly.
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct:
```
ping 10.1.3.2
```
```
ping 10.1.4.2
```
@ -371,11 +371,11 @@ For more information on WireGuard, notably in relation to Windows, please read [
## Download MariaDB and Configure the Database
* Download the MariaDB server and client on both the master VM and the worker VM
```
apt install mariadb-server mariadb-client -y
```
* Configure the MariaDB database
```
nano /etc/mysql/mariadb.conf.d/50-server.cnf
```
* Do the following changes
@ -392,12 +392,12 @@ For more information on WireGuard, notably in relation to Windows, please read [
```
* Restart MariaDB
```
systemctl restart mysql
```
* Launch MariaDB
```
mysql
```
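The changed lines of `50-server.cnf` are not shown in this diff; for master-worker replication the usual settings are the bind address, a unique server id, and binary logging. A sketch with assumed values, not the guide's literal file:

```
[mysqld]
# Listen beyond localhost so the VPN peer can connect (assumed value).
bind-address = 0.0.0.0
# Must differ between master and worker, e.g. 1 and 2.
server-id = 1
# Enable the binary log that replication reads from.
log_bin = /var/log/mysql/mysql-bin.log
```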
@ -406,7 +406,7 @@ For more information on WireGuard, notably in relation to Windows, please read [
## Create User with Replication Grant
* Do the following on both the master and the worker
```
CREATE USER 'repuser'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ;
FLUSH PRIVILEGES;
@ -429,17 +429,17 @@ For more information on WireGuard, notably in relation to Windows, please read [
### TF Template Worker Server Data
* Write the following in the Worker VM
```
CHANGE MASTER TO MASTER_HOST='10.1.3.2',
MASTER_USER='repuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```
```
start slave;
```
```
show slave status\G
```
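`MASTER_LOG_FILE` and `MASTER_LOG_POS` above are the guide's example coordinates; you can read the actual values on the master before running `CHANGE MASTER TO`:

```
SHOW MASTER STATUS;
```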
@ -448,17 +448,17 @@ For more information on WireGuard, notably in relation to Windows, please read [
### TF Template Master Server Data
* Write the following in the Master VM
```
CHANGE MASTER TO MASTER_HOST='10.1.4.2',
MASTER_USER='repuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```
```
start slave;
```
```
show slave status\G
```
@ -503,71 +503,71 @@ We now set the MariaDB database. You should choose your own username and passwor
We will now install and set [GlusterFS](https://www.gluster.org/), a free and open-source software scalable network filesystem.
* Install GlusterFS on both the master and worker VMs
```
add-apt-repository ppa:gluster/glusterfs-7 -y && apt install glusterfs-server -y
```
* Start the GlusterFS service on both VMs
```
systemctl start glusterd.service && systemctl enable glusterd.service
```
* Set the master to worker probe IP on the master VM:
```
gluster peer probe 10.1.4.2
```
* See the peer status on the worker VM:
```
gluster peer status
```
* Set the master and worker IP address on the master VM:
```
gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force
```
* Start Gluster:
```
gluster volume start vol1
```
* Check the status on the worker VM:
```
gluster volume status
```
* Mount the server with the master IP on the master VM:
```
mount -t glusterfs 10.1.3.2:/vol1 /var/www
```
* See if the mount is there on the master VM:
```
df -h
```
* Mount the Server with the worker IP on the worker VM:
```
mount -t glusterfs 10.1.4.2:/vol1 /var/www
```
* See if the mount is there on the worker VM:
```
df -h
```
We now update the mount with the file `fstab` on both master and worker.
* To prevent the mount from being aborted if the server reboots, write the following on both servers:
```
nano /etc/fstab
```
* Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2):
```
10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
```
* Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2):
```
10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
```
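The append can be made idempotent so that re-running it never duplicates the line. A small sketch, shown against a demo file so it can be rehearsed safely (point `FSTAB` at `/etc/fstab` on the real VM, and use the worker's line on the worker):

```
# Append the Gluster mount line only if it is not already present.
FSTAB=./fstab.demo   # use /etc/fstab on the actual VM
LINE='10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0'
touch "$FSTAB"
grep -qxF "$LINE" "$FSTAB" || echo "$LINE" >> "$FSTAB"
```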

View File

@ -46,33 +46,33 @@ For our security rules, we want to allow SSH, HTTP and HTTPS (443 and 8443).
We thus add the following rules:
* Allow SSH (port 22)
```
ufw allow ssh
```
* Allow HTTP (port 80)
```
ufw allow http
```
* Allow HTTPS (port 443)
```
ufw allow https
```
* Allow port 8443
```
ufw allow 8443
```
* Allow port 3478 for Nextcloud Talk
```
ufw allow 3478
```
* To enable the firewall, write the following:
```
ufw enable
```
* To see the current security rules, write the following:
```
ufw status verbose
```
@ -90,7 +90,7 @@ You now have enabled the firewall with proper security rules for your Nextcloud
* TTL: Automatic
* It might take up to 30 minutes to set the DNS properly.
* To check if the A record has been registered, you can use a common DNS checker:
```
https://dnschecker.org/#A/<domain-name>
```
@ -101,11 +101,11 @@ You now have enabled the firewall with proper security rules for your Nextcloud
For the rest of the guide, we follow the steps available on the Nextcloud website's tutorial [How to Install the Nextcloud All-in-One on Linux](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/).
* Install Docker
```
curl -fsSL get.docker.com | sudo sh
```
* Install Nextcloud AIO
```
sudo docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
@ -118,7 +118,7 @@ For the rest of the guide, we follow the steps availabe on the Nextcloud website
nextcloud/all-in-one:latest
```
* Reach the AIO interface on your browser:
```
https://<domain_name>:8443
```
* Example: `https://nextcloudwebsite.com:8443`

View File

@ -126,19 +126,19 @@ Modify the variable files to take into account your own seed phrase and SSH keys
Open the terminal.
* Go to the home folder
```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-nextcloud`:
```
mkdir -p terraform/deployment-nextcloud
```
```
cd terraform/deployment-nextcloud
```
* Create the `main.tf` file:
```
nano main.tf
```
@ -283,12 +283,12 @@ In this file, we name the first VM as `vm1` and the second VM as `vm2`. In the g
In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. Change the codes in this guide accordingly.
* Create the `credentials.auto.tfvars` file:
```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
```
mnemonics = "..."
SSH_KEY = "..."
@ -307,12 +307,12 @@ Make sure to add your own seed phrase and SSH public key. You will also need to
We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-nextcloud` with the main and variables files.
* Initialize Terraform:
```
terraform init
```
* Apply Terraform to deploy the VPN:
```
terraform apply
```
@ -321,18 +321,18 @@ After deployments, take note of the 3nodes' IPv4 address. You will need those ad
### SSH into the 3nodes
* To [SSH into the 3nodes](ssh_guide.md), write the following:
```
ssh root@VM_IPv4_Address
```
### Preparing the VMs for the Deployment
* Update and upgrade the system
```
apt update && apt upgrade -y && apt-get install apache2 -y
```
* After download, reboot the system
```
reboot
```
* Reconnect to the VMs
@ -348,19 +348,19 @@ For more information on WireGuard, notably in relation to Windows, please read [
First, we set Wireguard with the Terraform output.
* On your local computer, take the Terraform's `wg_config` output and create a `wg.conf` file in the directory `/etc/wireguard/wg.conf`.
```
nano /etc/wireguard/wg.conf
```
* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The WireGuard output stands between the two `EOT` markers.
* Start Wireguard on your local computer:
```
wg-quick up wg
```
* To stop the WireGuard service:
```
wg-quick down wg
```
@ -368,10 +368,10 @@ If it doesn't work and you already did a wireguard connection with the same file
This should set everything properly.
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct:
```
ping 10.1.3.2
```
```
ping 10.1.4.2
```
@ -384,11 +384,11 @@ If you correctly receive the packets from the two VMs, you know that the VPN is
## Download MariaDB and Configure the Database
* Download MariaDB's server and client on both VMs
```
apt install mariadb-server mariadb-client -y
```
* Configure the MariaDB database
```
nano /etc/mysql/mariadb.conf.d/50-server.cnf
```
* Do the following changes
@ -405,19 +405,19 @@ If you correctly receive the packets from the two VMs, you know that the VPN is
```
* Restart MariaDB
```
systemctl restart mysql
```
* Launch MariaDB
```
mysql
```
## Create User with Replication Grant
* Do the following on both VMs
```
CREATE USER 'repuser'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ;
FLUSH PRIVILEGES;
@ -436,33 +436,33 @@ If you correctly receive the packets from the two VMs, you know that the VPN is
### TF Template Worker Server Data
* Write the following in the worker VM
```
CHANGE MASTER TO MASTER_HOST='10.1.3.2',
MASTER_USER='repuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```
```
start slave;
```
```
show slave status\G
```
### TF Template Master Server Data
* Write the following in the master VM
```
CHANGE MASTER TO MASTER_HOST='10.1.4.2',
MASTER_USER='repuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```
```
start slave;
```
```
show slave status\G
```
@ -505,72 +505,72 @@ We now set the Nextcloud database. You should choose your own username and passw
We will now install and set [GlusterFS](https://www.gluster.org/), a free and open source software scalable network filesystem.
* Install GlusterFS on both the master and worker VMs
```
echo | add-apt-repository ppa:gluster/glusterfs-7 && apt install glusterfs-server -y
```
* Start the GlusterFS service on both VMs
```
systemctl start glusterd.service && systemctl enable glusterd.service
```
* Set the master to worker probe IP on the master VM:
```
gluster peer probe 10.1.4.2
```
* See the peer status on the worker VM:
```
gluster peer status
```
* Set the master and worker IP address on the master VM:
```
gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force
```
* Start GlusterFS on the master VM:
```
gluster volume start vol1
```
* Check the status on the worker VM:
```
gluster volume status
```
* Mount the server with the master IP on the master VM:
```
mount -t glusterfs 10.1.3.2:/vol1 /var/www
```
* See if the mount is there on the master VM:
```
df -h
```
* Mount the server with the worker IP on the worker VM:
```
mount -t glusterfs 10.1.4.2:/vol1 /var/www
```
* See if the mount is there on the worker VM:
```
df -h
```
We now update the mount with the file `fstab` on both VMs.
* To prevent the mount from being aborted if the server reboots, write the following on both servers:
```
nano /etc/fstab
```
* Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2):
```
10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
```
* Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2):
```
10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
```
@ -579,14 +579,14 @@ We now update the mount with the file fstab on both VMs.
# Install PHP and Nextcloud
* Install PHP and the PHP modules for Nextcloud on both the master and the worker:
```
apt install php libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp zip -y
```
We will now install Nextcloud. This is done only on the master VM.
* On both the master and worker VMs, go to the folder `/var/www`:
```
cd /var/www
```
@ -594,27 +594,27 @@ We will now install Nextcloud. This is done only on the master VM.
* See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/).
* We now download Nextcloud on the master VM.
```
wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip
```
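Before extracting, you can optionally verify the archive against its published SHA-256 checksum (Nextcloud publishes checksum files alongside each release archive). A minimal sketch, where `verify_sha256` is a helper defined here, not a standard tool:

```
# Compare a file's SHA-256 against an expected value.
verify_sha256() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$actual" = "$expected" ]; then
    echo "OK: $file"
  else
    echo "MISMATCH: $file" >&2
    return 1
  fi
}

# Usage on the master VM (replace with the published checksum):
# verify_sha256 nextcloud-27.0.1.zip "<published_sha256>"
```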
You only need to download it on the master VM: since you set up a peer-to-peer connection, it will also be accessible on the worker VM.
* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress:
```
apt install p7zip-full -y
```
```
7z x nextcloud-27.0.1.zip -o/var/www/
```
* After the download, see if the Nextcloud file is there on the worker VM:
```
ls
```
* Then, we grant permissions to the folder. Do this on both the master VM and the worker VM.
```
chown www-data:www-data /var/www/nextcloud/ -R
```
@ -660,7 +660,7 @@ Note: When the master VM goes offline, after 5 minutes maximum DuckDNS will chan
We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`.
* On both the master and worker VMs, write the following:
```
nano /etc/apache2/sites-available/nextcloud.conf
```
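The full `nextcloud.conf` shown later in the guide is an Apache virtual host; as a rough sketch of its shape (placeholders, not the guide's exact content):

```
<VirtualHost *:80>
    ServerName subdomain.duckdns.org
    DocumentRoot /var/www/nextcloud

    <Directory /var/www/nextcloud/>
        Require all granted
        AllowOverride All
        Options FollowSymLinks MultiViews
    </Directory>
</VirtualHost>
```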
@ -694,12 +694,12 @@ The file should look like this, with your own subdomain instead of `subdomain`:
```
* On both the master VM and the worker VM, write the following to set the Nextcloud database with Apache and to enable the new virtual host file:
```
a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl
```
* Then, reload and restart Apache:
```
systemctl reload apache2 && systemctl restart apache2
```
@ -710,20 +710,20 @@ The file should look like this, with your own subdomain instead of `subdomain`:
We now access Nextcloud over the public Internet.
* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain):
```
subdomain.duckdns.org
```
Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser.
* Choose a name and a password. For this guide, we use the following:
```
ncadmin
password1234
```
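`ncadmin` and `password1234` are placeholders for this guide; for a real deployment, generate a random password instead. A small sketch using only coreutils (`gen_password` is a helper defined here, not a standard tool):

```
# Print an alphanumeric password of the requested length (default 24)
# read from /dev/urandom.
gen_password() {
  tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "${1:-24}"
  echo
}

gen_password 24
```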
* Enter the Nextcloud Database information created with MariaDB and click install:
```
Database user: ncuser
Database password: password1234
Database name: nextcloud
@ -749,27 +749,27 @@ To enable HTTPS, first install `letsencrypt` with `certbot`:
Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/)
* See if you have the latest version of snap:
```
snap install core; snap refresh core
```
* Remove certbot-auto:
```
apt-get remove certbot
```
* Install certbot:
```
snap install --classic certbot
```
* Ensure that certbot can be run:
```
ln -s /snap/bin/certbot /usr/bin/certbot
```
* Then, install certbot-apache:
```
apt install python3-certbot-apache -y
```
@ -825,7 +825,7 @@ output "ipv4_vm1" {
```
* To add the HTTPS protection, write the following line on the master VM with your own subdomain:
```
certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org
```
@ -837,7 +837,7 @@ Note: You then need to redo the same process with the worker VM. This time, make
## Verify HTTPS Automatic Renewal
* Make a dry run of the certbot renewal to verify that it is correctly set up.
```
certbot renew --dry-run
```
@ -859,25 +859,25 @@ We thus add the following rules:
* Allow SSH (port 22)
```
ufw allow ssh
```
* Allow HTTP (port 80)
```
ufw allow http
```
* Allow HTTPS (port 443)
```
ufw allow https
```
* To enable the firewall, write the following:
```
ufw enable
```
* To see the current security rules, write the following:
```
ufw status verbose
```

View File

@ -112,19 +112,19 @@ Modify the variable files to take into account your own seed phrase and SSH keys
Open the terminal and follow those steps.
* Go to the home folder
```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-single-nextcloud`:
```
mkdir -p terraform/deployment-single-nextcloud
```
```
cd terraform/deployment-single-nextcloud
```
* Create the `main.tf` file:
```
nano main.tf
```
@ -226,12 +226,12 @@ output "ipv4_vm1" {
In this file, we name the full VM as `vm1`.
* Create the `credentials.auto.tfvars` file:
```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
```
mnemonics = "..."
SSH_KEY = "..."
@ -249,12 +249,12 @@ Make sure to add your own seed phrase and SSH public key. You will also need to
We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-single-nextcloud` with the main and variables files.
* Initialize Terraform:
```
terraform init
```
* Apply Terraform to deploy the full VM:
```
terraform apply
```
@ -263,18 +263,18 @@ After deployments, take note of the 3Node's IPv4 address. You will need this add
## SSH into the 3Node
* To [SSH into the 3Node](ssh_guide.md), write the following:
```
ssh root@VM_IPv4_Address
```
## Prepare the Full VM
* Update and upgrade the system
```
apt update && apt upgrade && apt-get install apache2
```
* After download, reboot the system
```
reboot
```
* Reconnect to the VM
@ -286,11 +286,11 @@ After deployments, take note of the 3Node's IPv4 address. You will need this add
## Download MariaDB and Configure the Database
* Download MariaDB's server and client
```
apt install mariadb-server mariadb-client
```
* Configure the MariaDB database
```
nano /etc/mysql/mariadb.conf.d/50-server.cnf
```
* Do the following changes
@ -307,12 +307,12 @@ After deployments, take note of the 3Node's IPv4 address. You will need this add
```
* Restart MariaDB
```
systemctl restart mysql
```
* Launch MariaDB
```
mysql
```
@ -345,14 +345,14 @@ We now set the Nextcloud database. You should choose your own username and passw
# Install PHP and Nextcloud
* Install PHP and the PHP modules for Nextcloud on the full VM:
```
apt install php libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp zip
```
We will now install Nextcloud.
* On the full VM, go to the folder `/var/www`:
```
cd /var/www
```
@ -360,19 +360,17 @@ We will now install Nextcloud.
* See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/).
* We now download Nextcloud on the full VM.
```
wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip
```
* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress:
```
apt install p7zip-full
7z x nextcloud-27.0.1.zip -o/var/www/
```
* Then, we grant permissions to the folder.
```
chown www-data:www-data /var/www/nextcloud/ -R
```
@ -398,7 +396,7 @@ Hint: make sure to save the DuckDNS folder in the home menu. Write `cd ~` before
We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`.
* On the full VM, write the following:
```
nano /etc/apache2/sites-available/nextcloud.conf
```
@ -432,12 +430,12 @@ The file should look like this, with your own subdomain instead of `subdomain`:
```
* On the full VM, write the following to set the Nextcloud database with Apache and to enable the new virtual host file:
```
a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl
```
* Then, reload and restart Apache:
```
systemctl reload apache2 && systemctl restart apache2
```
@ -448,20 +446,20 @@ The file should look like this, with your own subdomain instead of `subdomain`:
We now access Nextcloud over the public Internet.
* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain):
```
subdomain.duckdns.org
```
Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser.
* Choose a name and a password. For this guide, we use the following:
```
ncadmin
password1234
```
* Enter the Nextcloud Database information created with MariaDB and click install:
```
Database user: ncuser
Database password: password1234
Database name: nextcloud
@ -487,27 +485,27 @@ To enable HTTPS, first install `letsencrypt` with `certbot`:
Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/)
* See if you have the latest version of snap:
```
snap install core; snap refresh core
```
* Remove certbot-auto:
```
apt-get remove certbot
```
* Install certbot:
```
snap install --classic certbot
```
* Ensure that certbot can be run:
```
ln -s /snap/bin/certbot /usr/bin/certbot
```
* Then, install certbot-apache:
```
apt install python3-certbot-apache
```
@ -516,14 +514,14 @@ Install certbot by following the steps here: [https://certbot.eff.org/](https://
We now set the certbot with the DNS domain.
* To add the HTTPS protection, write the following line on the full VM with your own subdomain:
```
certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org
```
## Verify HTTPS Automatic Renewal
* Make a dry run of the certbot renewal to verify that it is correctly set up.
```
certbot renew --dry-run
```
@ -545,25 +543,25 @@ We thus add the following rules:
* Allow SSH (port 22)
```
ufw allow ssh
```
* Allow HTTP (port 80)
```
ufw allow http
```
* Allow HTTPS (port 443)
```
ufw allow https
```
* To enable the firewall, write the following:
```
ufw enable
```
* To see the current security rules, write the following:
```
ufw status verbose
```

View File

@ -246,17 +246,17 @@ output "fqdn" {
We now deploy the 2-node VPN with Terraform. Make sure that you are in the correct folder containing the main and variables files.
* Initialize Terraform:
```
terraform init
```
* Apply Terraform to deploy Nextcloud:
```
terraform apply
```
Note that, at any moment, if you want to see the information on your Terraform deployment, write the following:
```
terraform show
```
@ -274,19 +274,19 @@ Note that, at any moment, if you want to see the information on your Terraform d
We need to install a few things on the Nextcloud VM before going further.
* Update the Nextcloud VM
```
apt update
```
* Install ping on the Nextcloud VM if you want to test the VPN connection (Optional)
```
apt install iputils-ping -y
```
* Install Rsync on the Nextcloud VM
```
apt install rsync
```
* Install nano on the Nextcloud VM
```
apt install nano
```
* Install Cron on the Nextcloud VM
@ -295,19 +295,19 @@ We need to install a few things on the Nextcloud VM before going further.
# Prepare the VMs for the Rsync Daily Backup
* Test the VPN (Optional) with [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping)
```
ping <WireGuard_VM_IP_Address>
```
* Generate an SSH key pair on the Backup VM
```
ssh-keygen
```
* Take note of the public key in the Backup VM
```
cat ~/.ssh/id_rsa.pub
```
* Add the public key of the Backup VM in the Nextcloud VM
```
nano ~/.ssh/authorized_keys
```
@ -318,11 +318,11 @@ We need to install a few things on the Nextcloud VM before going further.
We now set a daily cron job that will make a backup between the Nextcloud VM and the Backup VM using Rsync.
* Open the crontab on the Backup VM
```
crontab -e
```
* Add the cron job at the end of the file
```
0 8 * * * rsync -avz --no-perms -O --progress --delete --log-file=/root/rsync_storage.log root@10.1.3.2:/mnt/backup/ /mnt/backup/
```
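The five leading fields of the crontab entry are minute, hour, day of month, month, and day of week, so `0 8 * * *` runs the backup every day at 08:00:

```
# minute hour day-of-month month day-of-week command
#   0     8        *         *        *       rsync ...   -> daily at 08:00
```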

View File

@ -61,14 +61,14 @@ Also note that this deployment uses both the Planetary network and WireGuard.
We start by creating the main file for our Nomad cluster.
* Create a directory for your Terraform Nomad cluster
```
mkdir nomad
```
```
cd nomad
```
* Create the `main.tf` file
```
nano main.tf
```
@ -255,12 +255,12 @@ output "client2_planetary_ip" {
We create a credentials file that will contain the environment variables. This file should be in the same directory as the main file.
* Create the `credentials.auto.tfvars` file
```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file
```
mnemonics = "..."
SSH_KEY = "..."
@ -280,12 +280,12 @@ Make sure to replace the three dots by your own information for `mnemonics` and
We now deploy the Nomad Cluster with Terraform. Make sure that you are in the directory containing the `main.tf` file.
* Initialize Terraform
```
terraform init
```
* Apply Terraform to deploy the Nomad cluster
```
terraform apply
```
@ -300,7 +300,7 @@ Note that the IP addresses will be shown under `Outputs` after running the comma
### SSH with the Planetary Network
* To [SSH with the Planetary network](ssh_openssh.md), write the following with the proper IP address
```
ssh root@planetary_ip
```
@ -311,7 +311,7 @@ You now have an SSH connection access over the Planetary network to the client a
To SSH with WireGuard, we first need to set the proper WireGuard configurations.
* Create a file named `wg.conf` in the directory `/etc/wireguard`
```
nano /etc/wireguard/wg.conf
```
@ -319,18 +319,18 @@ To SSH with WireGuard, we first need to set the proper WireGuard configurations.
* Note that you can use `terraform show` to see the Terraform output. The WireGuard configurations (`wg_config`) stand between the two `EOT` markers.
* Start WireGuard on your local computer
```
wg-quick up wg
```
* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the WireGuard IP of a node to make sure the connection is correct
```
ping wg_ip
```
We are now ready to SSH into the client and server nodes with WireGuard.
* To SSH with WireGuard, write the following with the proper IP address:
```
ssh root@wg_ip
```
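If you connect to the nodes often, you can give them aliases in `~/.ssh/config` on your local computer (a sketch; `nomad-server1` is a hypothetical alias and `<wg_ip>` is the node's WireGuard IP):

```
Host nomad-server1
    HostName <wg_ip>
    User root
```

After this, `ssh nomad-server1` connects directly.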

View File

@ -70,20 +70,19 @@ Modify the variable file to take into account your own seed phrase and SSH keys.
Now let's create the Terraform files.
* Open the terminal and go to the home directory
```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-wg-ssh`:
```
mkdir -p terraform/deployment-wg-ssh
```
```
cd terraform/deployment-wg-ssh
```
* Create the `main.tf` file:
```
nano main.tf
```
@ -173,12 +172,12 @@ output "node1_zmachine1_ip" {
```
* Create the `credentials.auto.tfvars` file:
```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content, set the node ID as well as your mnemonics and SSH public key, then save the file.
```
mnemonics = "..."
SSH_KEY = "..."
@ -198,12 +197,12 @@ Make sure to add your own seed phrase and SSH public key. You will also need to
We now deploy the micro VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-ssh` containing the main and variables files.
* Initialize Terraform:
```
terraform init
```
* Apply Terraform to deploy the micro VM:
```
terraform apply
```
* Terraform will then present you with the actions it will perform. Write `yes` to confirm the deployment.
@ -264,10 +263,11 @@ You now have access into the VM over Wireguard SSH connection.
If you want to destroy the Terraform deployment, write the following in the terminal:
```
terraform destroy
```
Then write `yes` to confirm.
Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-ssh`.

View File

@ -74,19 +74,19 @@ Now let's create the Terraform files.
* Open the terminal and go to the home directory
```
cd ~
```
* Create the folder `terraform` and the subfolder `deployment-wg-vpn`:
```
mkdir -p terraform && cd $_
```
```
mkdir deployment-wg-vpn && cd $_
```
* Create the `main.tf` file:
```
nano main.tf
```
@ -229,12 +229,12 @@ output "ipv4_vm2" {
In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. Change the codes in this guide accordingly.
* Create the `credentials.auto.tfvars` file:
```
nano credentials.auto.tfvars
```
* Copy the `credentials.auto.tfvars` content and save the file.
```
mnemonics = "..."
SSH_KEY = "..."
@ -256,17 +256,17 @@ Set the parameters for your VMs as you wish. The two servers will have the same
We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-vpn` containing the main and variables files.
* Initialize Terraform by writing the following in the terminal:
```
terraform init
```
* Apply the Terraform deployment:
```
terraform apply
```
* Terraform will then present you with the actions it will perform. Write `yes` to confirm the deployment.
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
```
terraform show
```
@ -279,19 +279,19 @@ To set the Wireguard connection, on your local computer, you will need to take t
For more information on WireGuard, notably in relation to Windows, please read [this documentation](ssh_wireguard.md).
* Create a file named `wg.conf` in the directory: `/usr/local/etc/wireguard/wg.conf`.
```
nano /usr/local/etc/wireguard/wg.conf
```
* Paste the content between the two `EOT` markers displayed after you run `terraform apply`.
* Start WireGuard:
```
wg-quick up wg
```
If you want to stop the WireGuard service, write the following in your terminal:
```
wg-quick down wg
```
@ -299,7 +299,7 @@ If you want to stop the Wireguard service, write the following on your terminal:
As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VMs to make sure the Wireguard connection is correct. Make sure to replace `wg_vm_ip` with the proper IP address for each VM:
```
ping wg_vm_ip
```
@ -329,10 +329,11 @@ You now have an SSH connection access to the VMs over Wireguard and IPv4.
If you want to destroy the Terraform deployment, write the following in the terminal:
```
terraform destroy
```
Then write `yes` to confirm.
Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-vpn`.