From 6c76ffbfe5e75e3a46db007ab9063a31cb7cabb7 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 10:49:26 -0400 Subject: [PATCH 01/34] manual, test parsing hero mdbook vs mdbook --- .../grid3_javascript_installation.md | 30 ++++++++++++++----- 1 file changed, 23 insertions(+), 7 deletions(-) diff --git a/collections/developers/javascript/grid3_javascript_installation.md b/collections/developers/javascript/grid3_javascript_installation.md index 1fcd64e..99e3943 100644 --- a/collections/developers/javascript/grid3_javascript_installation.md +++ b/collections/developers/javascript/grid3_javascript_installation.md @@ -55,18 +55,34 @@ yarn add @threefold/grid_client To use the Grid Client locally, clone the repository then install the Grid Client: - Clone the repository - - ```bash + ```bash git clone https://github.com/threefoldtech/tfgrid-sdk-ts ``` - Install the Grid Client - With yarn - - ```bash - yarn install - ``` + ```bash + yarn install + ``` - With npm - - ```bash - npm install - ``` + ```bash + npm install + ``` + +--- + +- Clone the repository +```bash + git clone https://github.com/threefoldtech/tfgrid-sdk-ts +``` +- Install the Grid Client + - With yarn +```bash + yarn install +``` + - With npm +```bash + npm install +``` > Note: In the directory **grid_client/scripts**, we provided a set of scripts to test the Grid Client. -- 2.40.1 From 8ac3e56679b6c34490b31da1dd8adb52e2cf8db7 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:13:17 -0400 Subject: [PATCH 02/34] manual, parsing devs --- .../grid3_javascript_installation.md | 32 ++++--------------- 1 file changed, 7 insertions(+), 25 deletions(-) diff --git a/collections/developers/javascript/grid3_javascript_installation.md b/collections/developers/javascript/grid3_javascript_installation.md index 99e3943..bac53cf 100644 --- a/collections/developers/javascript/grid3_javascript_installation.md +++ b/collections/developers/javascript/grid3_javascript_installation.md @@ -54,34 +54,16 @@ yarn add @threefold/grid_client To use the Grid Client locally, clone the repository then install the Grid Client: -- Clone the repository - ```bash - git clone https://github.com/threefoldtech/tfgrid-sdk-ts - ``` -- Install the Grid Client - - With yarn - ```bash - yarn install - ``` - - With npm - ```bash - npm install - ``` - ---- - - Clone the repository ```bash git clone https://github.com/threefoldtech/tfgrid-sdk-ts ``` -- Install the Grid Client - - With yarn +- Install the Grid Client with yarn or npm ```bash - yarn install +yarn install ``` - - With npm ```bash - npm install +npm install ``` > Note: In the directory **grid_client/scripts**, we provided a set of scripts to test the Grid Client. @@ -110,11 +92,11 @@ Make sure to set the client configuration properly before using the Grid Client. 
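For reference, the local-setup steps above can be run as one sequence. This is a sketch that assumes yarn is already installed and that the dependencies are installed from the repository root (the `cd` step is implied rather than stated in the original instructions):

```bash
# Clone the SDK monorepo and install the Grid Client dependencies
git clone https://github.com/threefoldtech/tfgrid-sdk-ts
cd tfgrid-sdk-ts
yarn install
```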
The easiest way to test the installation is to run the following command with either yarn or npm to generate the Grid Client documentation: * With yarn - * ``` + ``` yarn run serve-docs ``` * With npm - * ``` + ``` npm run serve-docs ``` @@ -127,11 +109,11 @@ You can explore the Grid Client by testing the different scripts proposed in **g - Update your customized deployments specs if needed - Run using [ts-node](https://www.npmjs.com/ts-node) - With yarn - - ```bash + ```bash yarn run ts-node --project tsconfig-node.json scripts/zdb.ts ``` - With npx - - ```bash + ```bash npx ts-node --project tsconfig-node.json scripts/zdb.ts ``` -- 2.40.1 From 4757a8f2916f314716757f086b85f1a1a2b98a4f Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:15:27 -0400 Subject: [PATCH 03/34] manual, parsing devs --- .../javascript/grid3_javascript_installation.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/collections/developers/javascript/grid3_javascript_installation.md b/collections/developers/javascript/grid3_javascript_installation.md index bac53cf..bf3b099 100644 --- a/collections/developers/javascript/grid3_javascript_installation.md +++ b/collections/developers/javascript/grid3_javascript_installation.md @@ -109,13 +109,13 @@ You can explore the Grid Client by testing the different scripts proposed in **g - Update your customized deployments specs if needed - Run using [ts-node](https://www.npmjs.com/ts-node) - With yarn - ```bash - yarn run ts-node --project tsconfig-node.json scripts/zdb.ts - ``` + ```bash + yarn run ts-node --project tsconfig-node.json scripts/zdb.ts + ``` - With npx - ```bash - npx ts-node --project tsconfig-node.json scripts/zdb.ts - ``` + ```bash + npx ts-node --project tsconfig-node.json scripts/zdb.ts + ``` ## Reference API -- 2.40.1 From 567c558129f1e73124ade95ab6917eee8dd4ef8b Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:17:07 -0400 Subject: [PATCH 04/34] manual, parsing devs --- .../javascript/grid3_javascript_installation.md | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/collections/developers/javascript/grid3_javascript_installation.md b/collections/developers/javascript/grid3_javascript_installation.md index bf3b099..2bce32f 100644 --- a/collections/developers/javascript/grid3_javascript_installation.md +++ b/collections/developers/javascript/grid3_javascript_installation.md @@ -107,15 +107,13 @@ The easiest way to test the installation is to run the following command with ei You can explore the Grid Client by testing the different scripts proposed in **grid_client/scripts**. 
- Update your customized deployments specs if needed -- Run using [ts-node](https://www.npmjs.com/ts-node) - - With yarn - ```bash - yarn run ts-node --project tsconfig-node.json scripts/zdb.ts - ``` - - With npx - ```bash - npx ts-node --project tsconfig-node.json scripts/zdb.ts - ``` +- Run using [ts-node](https://www.npmjs.com/ts-node) with yarn or npx + ```bash + yarn run ts-node --project tsconfig-node.json scripts/zdb.ts + ``` + ```bash + npx ts-node --project tsconfig-node.json scripts/zdb.ts + ``` ## Reference API -- 2.40.1 From 13586b10f080dbc8bf22cf791e569d4c8fe87a1b Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:20:17 -0400 Subject: [PATCH 05/34] manual, parsing devs go --- collections/developers/go/grid3_go_installation.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/collections/developers/go/grid3_go_installation.md b/collections/developers/go/grid3_go_installation.md index c07008c..6ca7085 100644 --- a/collections/developers/go/grid3_go_installation.md +++ b/collections/developers/go/grid3_go_installation.md @@ -22,19 +22,19 @@ Make sure that you have at least Go 1.19 installed on your machine. ## Steps * Create a new directory - * ```bash + ```bash mkdir tf_go_client ``` * Change directory - * ```bash + ```bash cd tf_go_client ``` * Creates a **go.mod** file to track the code's dependencies - * ```bash + ```bash go mod init main ``` * Install the Grid3 Go Client - * ```bash + ```bash go get github.com/threefoldtech/tfgrid-sdk-go/grid-client ``` -- 2.40.1 From 2a6d648401fef7227f412fed903817f9c19ad7ff Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:27:10 -0400 Subject: [PATCH 06/34] manual, parsing devs tfcmd --- collections/developers/tfcmd/tfcmd_basics.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/collections/developers/tfcmd/tfcmd_basics.md b/collections/developers/tfcmd/tfcmd_basics.md index 8816eea..406d0f4 100644 --- a/collections/developers/tfcmd/tfcmd_basics.md +++ b/collections/developers/tfcmd/tfcmd_basics.md @@ -21,11 +21,11 @@ TFCMD is available as binaries. Make sure to download the latest release and to An easy way to use TFCMD is to download and extract the TFCMD binaries to your path. - Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) - - ``` + ``` wget ``` - Extract the binaries - - ``` + ``` tar -xvf ``` - Move `tfcmd` to any `$PATH` directory: -- 2.40.1 From 422308e09583e7a9d5baf78cc1f243883784f394 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:28:36 -0400 Subject: [PATCH 07/34] manual, parsing devs tfcmd --- collections/developers/tfcmd/tfcmd_basics.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/collections/developers/tfcmd/tfcmd_basics.md b/collections/developers/tfcmd/tfcmd_basics.md index 406d0f4..8816eea 100644 --- a/collections/developers/tfcmd/tfcmd_basics.md +++ b/collections/developers/tfcmd/tfcmd_basics.md @@ -21,11 +21,11 @@ TFCMD is available as binaries. Make sure to download the latest release and to An easy way to use TFCMD is to download and extract the TFCMD binaries to your path. 
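The generic steps are listed below. As a concrete illustration, assuming the release being installed is v0.14.4 for Linux x86_64 (substitute the current version and your architecture from the releases page), the whole sequence might look like:

```bash
# Download, extract, and install the tfcmd binary (version and arch are assumptions)
wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v0.14.4/tfgrid-sdk-go_Linux_x86_64.tar.gz
tar -xvf tfgrid-sdk-go_Linux_x86_64.tar.gz
mv tfcmd /usr/local/bin
```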
- Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) - ``` + - ``` wget ``` - Extract the binaries - ``` + - ``` tar -xvf ``` - Move `tfcmd` to any `$PATH` directory: -- 2.40.1 From 80fbabab10de2ed301f22778b871eb5cb08fc335 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:30:30 -0400 Subject: [PATCH 08/34] manual, parsing devs tfcmd --- collections/developers/tfcmd/tfcmd_basics.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/collections/developers/tfcmd/tfcmd_basics.md b/collections/developers/tfcmd/tfcmd_basics.md index 8816eea..2edecb3 100644 --- a/collections/developers/tfcmd/tfcmd_basics.md +++ b/collections/developers/tfcmd/tfcmd_basics.md @@ -21,17 +21,17 @@ TFCMD is available as binaries. Make sure to download the latest release and to An easy way to use TFCMD is to download and extract the TFCMD binaries to your path. - Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) - - ``` - wget - ``` + ``` + wget + ``` - Extract the binaries - - ``` - tar -xvf - ``` + ``` + tar -xvf + ``` - Move `tfcmd` to any `$PATH` directory: - ```bash - mv tfcmd /usr/local/bin - ``` + ``` + mv tfcmd /usr/local/bin + ``` ## Login -- 2.40.1 From ca99c4cda4262257b5cd8d4ec95deaf74dc91684 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:33:41 -0400 Subject: [PATCH 09/34] manual, parsing devs tfrobot --- collections/developers/tfrobot/tfrobot_installation.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/collections/developers/tfrobot/tfrobot_installation.md b/collections/developers/tfrobot/tfrobot_installation.md index deec2b8..f020750 100644 --- a/collections/developers/tfrobot/tfrobot_installation.md +++ b/collections/developers/tfrobot/tfrobot_installation.md @@ -23,14 +23,14 @@ To install TFROBOT, simply download and extract the TFROBOT binaries to your pat cd tfgrid-sdk-go ``` - Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) - - ``` + ``` wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v0.14.4/tfgrid-sdk-go_Linux_x86_64.tar.gz ``` - Extract the binaries - - ``` + ``` tar -xvf tfgrid-sdk-go_Linux_x86_64.tar.gz ``` - Move `tfrobot` to any `$PATH` directory: - ```bash + ``` mv tfrobot /usr/local/bin ``` \ No newline at end of file -- 2.40.1 From 38f65801adb038fe24a97a07f220541f6e1dc6c0 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:54:28 -0400 Subject: [PATCH 10/34] manual, parsing devs flist --- .../flist_debian_case_study.md | 50 +++++++++---------- .../developers/flist/flist_hub/zos_hub.md | 8 +-- collections/developers/proxy/proxy.md | 16 +++--- 3 files changed, 36 insertions(+), 38 deletions(-) diff --git a/collections/developers/flist/flist_case_studies/flist_debian_case_study.md b/collections/developers/flist/flist_case_studies/flist_debian_case_study.md index 3777433..4a8c1a1 100644 --- a/collections/developers/flist/flist_case_studies/flist_debian_case_study.md +++ b/collections/developers/flist/flist_case_studies/flist_debian_case_study.md @@ -223,27 +223,27 @@ You now have access to the Docker Hub from your local computer. We will then pro * Make sure the Docker Daemon is running * Build the docker container * Template: - * ``` - docker build -t / . - ``` + ``` + docker build -t / . + ``` * Example: - * ``` - docker build -t username/debian12 . - ``` + ``` + docker build -t username/debian12 . 
+ ``` * Push the docker container to the [Docker Hub](https://hub.docker.com/) * Template: - * ``` - docker push / - ``` + ``` + docker push / + ``` * Example: - * ``` - docker push username/debian12 - ``` + ``` + docker push username/debian12 + ``` * You should now see your docker image on the [Docker Hub](https://hub.docker.com/) when you go into the menu option `My Profile`. * Note that you can access this link quickly with the following template: - * ``` - https://hub.docker.com/u/ - ``` + ``` + https://hub.docker.com/u/ + ``` @@ -265,13 +265,13 @@ We will now convert the Docker image into a Zero-OS flist. This part is so easy * Under `Name`, you will see all your available flists. * Right-click on the flist you want and select `Copy Clean Link`. This URL will be used when deploying on the ThreeFold Playground. We show below the template and an example of what the flist URL looks like. * Template: - * ``` - https://hub.grid.tf/<3BOT_name.3bot>/--.flist - ``` + ``` + https://hub.grid.tf/<3BOT_name.3bot>/--.flist + ``` * Example: - * ``` - https://hub.grid.tf/idrnd.3bot/username-debian12-latest.flist - ``` + ``` + https://hub.grid.tf/idrnd.3bot/username-debian12-latest.flist + ``` @@ -283,16 +283,14 @@ We will now convert the Docker image into a Zero-OS flist. This part is so easy * Choose your parameters (name, VM specs, etc.). * Under `flist`, paste the Debian flist from the TF Hub you copied previously. * Make sure the entrypoint is as follows: - * ``` - /sbin/zinit init - ``` + ``` + /sbin/zinit init + ``` * Choose a 3Node to deploy on * Click `Deploy` That's it! You can now SSH into your Debian deployment and change the world one line of code at a time! -* - ## Conclusion In this case study, we've seen the overall process of creating a new flist to deploy a Debian workload on a Micro VM on the ThreeFold Playground. diff --git a/collections/developers/flist/flist_hub/zos_hub.md b/collections/developers/flist/flist_hub/zos_hub.md index 8f79973..6b14f5a 100644 --- a/collections/developers/flist/flist_hub/zos_hub.md +++ b/collections/developers/flist/flist_hub/zos_hub.md @@ -118,25 +118,25 @@ See example below. 
The main template to request information from the API is the following: -```bash +``` curl -H "Authorization: bearer " https://hub.grid.tf/api/flist/me/ -X ``` For example, if we take the command `DELETE` of the previous section and we want to delete the flist `example-latest.flist` with the API Token `abc12`, we would write the following line: -```bash +``` curl -H "Authorization: bearer abc12" https://hub.grid.tf/api/flist/me/example-latest.flist -X DELETE ``` As another template example, if we wanted to rename the flist `current-name-latest.flist` to `new-name-latest.flist`, we would use the following template: -```bash +``` curl -H "Authorization: bearer " https://hub.grid.tf/api/flist/me//rename/ -X GET ``` To upload an flist to the ZOS Hub, you would use the following template: -```bash +``` curl -H "Authorization: bearer " -X POST -F file=@my-local-archive.tar.gz \ https://hub.grid.tf/api/flist/me/upload ``` \ No newline at end of file diff --git a/collections/developers/proxy/proxy.md b/collections/developers/proxy/proxy.md index 8962912..25d5c10 100644 --- a/collections/developers/proxy/proxy.md +++ b/collections/developers/proxy/proxy.md @@ -60,33 +60,33 @@ To start the services for development or testing make sure first you have all th - Clone this repo - ```bash + ``` git clone https://github.com/threefoldtech/tfgrid-sdk-go.git cd tfgrid-sdk-go/grid-proxy ``` - The `Makefile` has all that you need to deal with Db, Explorer, Tests, and Docs. - ```bash + ``` make help # list all the available subcommands. ``` - For a quick test explorer server. - ```bash + ``` make all-start e= ``` Now you can access the server at `http://localhost:8080` - Run the tests - ```bash + ``` make test-all ``` - Generate docs. - ```bash + ``` make docs ``` @@ -108,7 +108,7 @@ For more illustrations about the commands needed to work on the project, see the - You can either build the project: - ```bash + ``` make build chmod +x cmd/proxy_server/server \ && mv cmd/proxy_server/server /usr/local/bin/gridproxy-server @@ -117,7 +117,7 @@ For more illustrations about the commands needed to work on the project, see the - Or download a release: Check the [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) page and edit the next command with the chosen version. - ```bash + ``` wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v1.6.7-rc2/tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \ && tar -xzf tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \ && chmod +x server \ @@ -128,7 +128,7 @@ For more illustrations about the commands needed to work on the project, see the - Create the service file - ```bash + ``` cat << EOF > /etc/systemd/system/gridproxy-server.service [Unit] Description=grid proxy server -- 2.40.1 From fb521d5446052a557b2ae542182ce5028b86cb74 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 11:57:07 -0400 Subject: [PATCH 11/34] manual, parsing devs flist nc --- .../flist_nextcloud_case_study.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md b/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md index fce105f..a7049be 100644 --- a/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md +++ b/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md @@ -616,25 +616,25 @@ You now have access to the Docker Hub from your local computer. 
We will then pro * Make sure the Docker Daemon is running * Build the docker container (note that, while the tag is optional, it can help to track different versions) * Template: - * ``` + ``` docker build -t /: . ``` * Example: - * ``` + ``` docker build -t dockerhubuser/nextcloudaio . ``` * Push the docker container to the [Docker Hub](https://hub.docker.com/) * Template: - * ``` + ``` docker push / ``` * Example: - * ``` + ``` docker push dockerhubuser/nextcloudaio ``` * You should now see your docker image on the [Docker Hub](https://hub.docker.com/) when you go into the menu option `My Profile`. * Note that you can access this link quickly with the following template: - * ``` + ``` https://hub.docker.com/u/ ``` @@ -656,11 +656,11 @@ We will now convert the Docker image into a Zero-OS flist. * Under `Name`, you will see all your available flists. * Right-click on the flist you want and select `Copy Clean Link`. This URL will be used when deploying on the ThreeFold Playground. We show below the template and an example of what the flist URL looks like. * Template: - * ``` + ``` https://hub.grid.tf/<3BOT_name.3bot>/--.flist ``` * Example: - * ``` + ``` threefoldtech-nextcloudaio-latest.flist ``` @@ -843,7 +843,7 @@ We now deploy Nextcloud with Terraform. Make sure that you are in the correct fo ``` Note that, at any moment, if you want to see the information on your Terraform deployment, write the following: - * ``` + ``` terraform show ``` -- 2.40.1 From 678ca4b8422977466a685f361bde6992a4f18427 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:00:27 -0400 Subject: [PATCH 12/34] manual, parsing devs flist nc --- .../flist_nextcloud_case_study.md | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md b/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md index a7049be..78b9897 100644 --- a/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md +++ b/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md @@ -833,19 +833,20 @@ output "fqdn" { We now deploy Nextcloud with Terraform. Make sure that you are in the correct folder containing the main and variables files. * Initialize Terraform: - * ``` - terraform init - ``` + ``` + terraform init + ``` * Apply Terraform to deploy Nextcloud: - * ``` - terraform apply - ``` + ``` + terraform apply + ``` Note that, at any moment, if you want to see the information on your Terraform deployment, write the following: - ``` - terraform show - ``` + +``` +terraform show +``` ## Nextcloud Setup -- 2.40.1 From bc773126e098f45cd97ecd67beb89983aa17b911 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:09:00 -0400 Subject: [PATCH 13/34] manual, devs, set internals toc --- collections/developers/internals/internals.md | 23 ++++++++++++++++--- .../developers/internals/rmb/rmb_intro.md | 20 ++++++++-------- .../developers/internals/rmb/rmb_specs.md | 9 ++++---- 3 files changed, 34 insertions(+), 18 deletions(-) diff --git a/collections/developers/internals/internals.md b/collections/developers/internals/internals.md index ce29a8c..44034d0 100644 --- a/collections/developers/internals/internals.md +++ b/collections/developers/internals/internals.md @@ -4,16 +4,33 @@ We present in this section of the developers book a partial list of system compo

Table of Contents

-- [Reliable Message Bus (RMB)](rmb_toc.md) +- [Reliable Message Bus - RMB](rmb_toc.md) - [Introduction to RMB](rmb_intro.md) - [RMB Specs](rmb_specs.md) - [RMB Peer](peer.md) - [RMB Relay](relay.md) - -- [ZOS](zos_readme.md) +- [Zero-OS](zos_readme.md) - [Manual](manual.md) - [Workload Types](workload_types.md) - [Internal Modules](internals.md) + - [Identity](identity_readme.md) + - [Node ID Generation](identity.md) + - [Node Upgrade](upgrade.md) + - [Node](node_readme.md) + - [Storage](storage_readme.md) + - [Network](network_readme.md) + - [Introduction](introduction.md) + - [Definitions](definitions.md) + - [Mesh](mesh.md) + - [Setup](setup_farm_network.md) + - [Flist](flist_readme.md) + - [Container](container_readme.md) + - [VM](vmd_readme.md) + - [Provision](provision_readme.md) - [Capacity](capacity.md) - [Performance Monitor Package](performance.md) + - [Public IPs Validation Task](publicips.md) + - [CPUBenchmark](cpubench.md) + - [IPerf](iperf.md) + - [Health Check](healthcheck.md) - [API](api.md) \ No newline at end of file diff --git a/collections/developers/internals/rmb/rmb_intro.md b/collections/developers/internals/rmb/rmb_intro.md index 23ec314..6062195 100644 --- a/collections/developers/internals/rmb/rmb_intro.md +++ b/collections/developers/internals/rmb/rmb_intro.md @@ -14,7 +14,7 @@ - [Building](#building) - [Running tests](#running-tests) -*** +--- ## What is RMB @@ -27,7 +27,7 @@ Out of the box RMB provides the following: - Support for 3rd party hosted relays. Anyone can host a relay and people can use it safely since there is no way messages can be inspected while using e2e. That's similar to `home` servers by `matrix` ![layout](img/layout.png) -*** + ## Why RMB is developed by ThreefoldTech to create a global network of nodes that are available to host capacity. Each node will act like a single bot where you can ask to host your capacity. This enforced a unique set of requirements: @@ -45,17 +45,17 @@ Starting from this we came up with a more detailed requirements: - Then each message then can be signed by the `bot` keys, hence make it easy to verify the identity of the sender of a message. This is done both ways. - To support federation (using 3rd party relays) we needed to add e2e encryption to make sure messages that are surfing the public internet can't be sniffed - e2e encryption is done by deriving an encryption key from the same identity seed, and share the public key on `tfchain` hence it's available to everyone to use -*** + ## Specifications For details about protocol itself please check the [specs](rmb_specs.md). -*** + ## How to Use RMB There are many ways to use `rmb` because it was built for `bots` and software to communicate. Hence, there is no mobile app for it for example, but instead a set of libraries where you can use to connect to the network, make chitchats with other bots then exit. Or you can keep the connection forever to answer other bots requests if you are providing a service. -*** + ## Libraries If there is a library in your preferred language, then you are in luck! Simply follow the library documentations to implement a service bot, or to make requests to other bots. @@ -64,14 +64,14 @@ If there is a library in your preferred language, then you are in luck! 
Simply f - Golang [rmb-sdk-go](https://github.com/threefoldtech/rmb-sdk-go) - Typescript [rmb-sdk-ts](https://github.com/threefoldtech/rmb-sdk-ts) -*** + ### No Known Libraries If there are no library in your preferred language, here's what you can do: - Implement a library in your preferred language - If it's too much to do all the signing, verification, e2e in your language then use `rmb-peer` -*** + ## What is rmb-peer think of `rmb-peer` as a gateway that stands between you and the `relay`. `rmb-peer` uses your mnemonics (your identity secret key) to assume your identity and it connects to the relay on your behalf, it maintains the connection forever and takes care of @@ -85,11 +85,11 @@ Then it provide a simple (plain-text) api over `redis`. means to send messages ( > More details can be found [here](rmb_specs.md) -*** + ## Download Please check the latest [releases](https://github.com/threefoldtech/rmb-rs/releases) normally you only need the `rmb-peer` binary, unless you want to host your own relay. -*** + ## Building ```bash @@ -97,7 +97,7 @@ git clone git@github.com:threefoldtech/rmb-rs.git cd rmb-rs cargo build --release --target=x86_64-unknown-linux-musl ``` -*** + ## Running tests While inside the repository diff --git a/collections/developers/internals/rmb/rmb_specs.md b/collections/developers/internals/rmb/rmb_specs.md index 2d28cbe..c6878ea 100644 --- a/collections/developers/internals/rmb/rmb_specs.md +++ b/collections/developers/internals/rmb/rmb_specs.md @@ -15,7 +15,7 @@ - [End2End Encryption](#end2end-encryption) - [Rate Limiting](#rate-limiting) -*** +--- # Introduction @@ -51,7 +51,7 @@ On the relay, the relay checks federation information set on the envelope and th When the relay receive a message that is destined to a `local` connected client, it queue it for delivery. The relay can maintain a queue of messages per twin to a limit. If the twin does not come back online to consume queued messages, the relay will start to drop messages for that specific twin client. Once a twin come online and connect to its peer, the peer will receive all queued messages. the messages are pushed over the web-socket as they are received. the client then can decide how to handle them (a message can be a request or a response). A message type can be inspected as defined by the schema. -*** + # Overview of the Operation of RMB Relay ![relay](img/relay.png) @@ -201,7 +201,6 @@ A response message is defined as follows this is what is sent as a response by a Your bot (server) need to make sure to set `destination` to the same value as the incoming request `source` -The > this response is what is pushed to `msgbus.system.reply` ```rust @@ -223,7 +222,7 @@ pub struct JsonOutgoingResponse { pub error: Option, } ``` -*** + # End2End Encryption Relay is totally opaque to the messages. Our implementation of the relay does not poke into messages except for the routing attributes (source, and destinations addresses, and federation information). 
But since the relay is designed to be hosted by other 3rd parties (hence federation) you should @@ -246,7 +245,7 @@ As you already understand e2e is completely up to the peers to implement, and ev - derive the same shared key - `shared = ecdh(B.sk, A.pk)` - `plain-data = aes-gcm.decrypt(shared-key, nonce, encrypted)` -*** + # Rate Limiting To avoid abuse of the server, and prevent DoS attacks on the relay, a rate limiter is used to limit the number of clients' requests.\ -- 2.40.1 From 075e8453eafaf813a11dc0832833c5d4937f876f Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:11:12 -0400 Subject: [PATCH 14/34] manual, devs, set internals toc --- collections/developers/internals/internals.md | 23 +++---------------- 1 file changed, 3 insertions(+), 20 deletions(-) diff --git a/collections/developers/internals/internals.md b/collections/developers/internals/internals.md index 44034d0..ce29a8c 100644 --- a/collections/developers/internals/internals.md +++ b/collections/developers/internals/internals.md @@ -4,33 +4,16 @@ We present in this section of the developers book a partial list of system compo

Table of Contents

-- [Reliable Message Bus - RMB](rmb_toc.md) +- [Reliable Message Bus (RMB)](rmb_toc.md) - [Introduction to RMB](rmb_intro.md) - [RMB Specs](rmb_specs.md) - [RMB Peer](peer.md) - [RMB Relay](relay.md) -- [Zero-OS](zos_readme.md) + +- [ZOS](zos_readme.md) - [Manual](manual.md) - [Workload Types](workload_types.md) - [Internal Modules](internals.md) - - [Identity](identity_readme.md) - - [Node ID Generation](identity.md) - - [Node Upgrade](upgrade.md) - - [Node](node_readme.md) - - [Storage](storage_readme.md) - - [Network](network_readme.md) - - [Introduction](introduction.md) - - [Definitions](definitions.md) - - [Mesh](mesh.md) - - [Setup](setup_farm_network.md) - - [Flist](flist_readme.md) - - [Container](container_readme.md) - - [VM](vmd_readme.md) - - [Provision](provision_readme.md) - [Capacity](capacity.md) - [Performance Monitor Package](performance.md) - - [Public IPs Validation Task](publicips.md) - - [CPUBenchmark](cpubench.md) - - [IPerf](iperf.md) - - [Health Check](healthcheck.md) - [API](api.md) \ No newline at end of file -- 2.40.1 From 1fe25a93fbba7f5b7d784b1540d6dd6c3c66be9d Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:16:59 -0400 Subject: [PATCH 15/34] manual, devs, grid deployment --- .../developers/grid_deployment/snapshots.md | 96 +++++++++---------- collections/developers/internals/internals.md | 2 +- 2 files changed, 49 insertions(+), 49 deletions(-) diff --git a/collections/developers/grid_deployment/snapshots.md b/collections/developers/grid_deployment/snapshots.md index f88da46..a0da310 100644 --- a/collections/developers/grid_deployment/snapshots.md +++ b/collections/developers/grid_deployment/snapshots.md @@ -65,30 +65,30 @@ You can set a cron job to execute a script running rsync to create the snapshots - First download the script. - Main net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/create_snapshot.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/create_snapshot.sh +``` - Test net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/create_snapshot.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/create_snapshot.sh +``` - Dev net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/create_snapshot.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/create_snapshot.sh +``` - Set the permissions of the script - ``` - chmod +x create_snapshot.sh - ``` +``` +chmod +x create_snapshot.sh +``` - Make sure to a adjust the snapshot creation script for your specific deployment - Set a cron job - ``` - crontab -e - ``` +``` +crontab -e +``` - Here is an example of a cron job where we execute the script every day at 1 AM and send the logs to `/var/log/snapshots/snapshots-cron.log`. - ```sh - 0 1 * * * sh /opt/snapshots/create-snapshot.sh > /var/log/snapshots/snapshots-cron.log 2>&1 - ``` +```sh +0 1 * * * sh /opt/snapshots/create-snapshot.sh > /var/log/snapshots/snapshots-cron.log 2>&1 +``` ### Start All the Services @@ -96,25 +96,25 @@ You can start all services by running the provided scripts. - Download the script. 
- Main net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/startall.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/startall.sh +``` - Test net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/startall.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/startall.sh +``` - Dev net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/startall.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/startall.sh +``` - Set the permissions of the script - ``` - chmod +x startall.sh - ``` +``` +chmod +x startall.sh +``` - Run the script to start all services via docker engine. - ``` - ./startall.sh - ``` +``` +./startall.sh +``` ### Stop All the Services @@ -122,25 +122,25 @@ You can stop all services by running the provided scripts. - Download the script. - Main net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/stopall.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/stopall.sh +``` - Test net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/stopall.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/stopall.sh +``` - Dev net - ``` - wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/stopall.sh - ``` +``` +wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/stopall.sh +``` - Set the permissions of the script - ``` - chmod +x stopall.sh - ``` +``` +chmod +x stopall.sh +``` - Run the script to stop all services via docker engine. - ``` - ./stopall.sh - ``` +``` +./stopall.sh +``` ## Expose the Snapshots with Rsync diff --git a/collections/developers/internals/internals.md b/collections/developers/internals/internals.md index ce29a8c..dd1d888 100644 --- a/collections/developers/internals/internals.md +++ b/collections/developers/internals/internals.md @@ -4,7 +4,7 @@ We present in this section of the developers book a partial list of system compo

Table of Contents

-- [Reliable Message Bus (RMB)](rmb_toc.md) +- [Reliable Message Bus - RMB](rmb_toc.md) - [Introduction to RMB](rmb_intro.md) - [RMB Specs](rmb_specs.md) - [RMB Peer](peer.md) -- 2.40.1 From 44884956472d6dca2d7fa19a5f999c8d455b350d Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:19:23 -0400 Subject: [PATCH 16/34] manual, devs, grid deployment --- .../developers/grid_deployment/snapshots.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/collections/developers/grid_deployment/snapshots.md b/collections/developers/grid_deployment/snapshots.md index a0da310..e1ea730 100644 --- a/collections/developers/grid_deployment/snapshots.md +++ b/collections/developers/grid_deployment/snapshots.md @@ -64,15 +64,15 @@ You can use the start script to start all services and then set a cron job to ex You can set a cron job to execute a script running rsync to create the snapshots and generate logs at a given interval. - First download the script. - - Main net +- Main net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/create_snapshot.sh ``` - - Test net +- Test net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/create_snapshot.sh ``` - - Dev net +- Dev net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/create_snapshot.sh ``` @@ -95,15 +95,15 @@ crontab -e You can start all services by running the provided scripts. - Download the script. - - Main net +- Main net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/startall.sh ``` - - Test net +- Test net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/startall.sh ``` - - Dev net +- Dev net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/startall.sh ``` @@ -121,15 +121,15 @@ chmod +x startall.sh You can stop all services by running the provided scripts. - Download the script. 
- - Main net +- Main net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/stopall.sh ``` - - Test net +- Test net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/stopall.sh ``` - - Dev net +- Dev net ``` wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/stopall.sh ``` -- 2.40.1 From b8323a081521796f96f83f9b31cd61b3cfb13b2e Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:22:08 -0400 Subject: [PATCH 17/34] manual, farmers, bootstrap img --- .../img/dashboard_bootstrap_farm.png | Bin 0 -> 9259 bytes 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 collections/farmers/3node_building/img/dashboard_bootstrap_farm.png diff --git a/collections/farmers/3node_building/img/dashboard_bootstrap_farm.png b/collections/farmers/3node_building/img/dashboard_bootstrap_farm.png new file mode 100644 index 0000000000000000000000000000000000000000..82f8759aec2c5d540a9967dc88cded1e8f36e25d GIT binary patch literal 9259 zcmd6NcT`hf*Jczeir7GZAVozG5KyE@7m*TrO&~N;iiF-Flpv@m2q;xr0BL~`dNZMj zC`d195Snxd1{4TL3Cv~Y{d3l=@15^k-%M6k?#W$u-F41Bd++Dj&pvMr^)=a8xmZCU z5F1$Qu`vjAga(Y6r%nLBKAlon;Bw685!mz;@Ci9({|5N~kFUD9uZf4FZ-AY*1IWqU z!_7g==b5*IgS*dj58qXm7G>Ze;lqd2ydCU(oju(7O`Y8wK&H?A_$4LzH5{JuOWu`~ z;=d~;e@|Neo(#XCUYPGC2@r@M1b(b)8kk8R4+!LNm}vjAn`COtfBpJ()kjAK9-?!% zZsdfYI_m$}kix-@`yMThU%_ESMer-jWlfPhkB=N#YNEX4Jo@sbfWY;pthN0M-u!|O z(f@qi^0Bs=?+J38mu-_Dd5a{gx55u0> znVFeV`VA21A*0mY-M!YD1q6D)g6Xs`p>u&iU;goY2V7WgqK|;?-i`Qg8dl3V59Qan z0OoRf!g}hIC)xi32$X~H4P^m6d2&n|1UjK%`0ox2+xeRMnwo)S+}`{zT8s;@qr}&* zQ4?HOIdg9xoLH;F2^a7iC{zvA?wD6y)91Z?^2>u$a4Qq8APT{R2LX3Ab+OPb6j>Fz zGE2+0q|h)dc$Zvx5ITCnXeNCy*Y1t-wFkzksEIPW0)M=I5~t`73`#$qE(Z=~EJklv z$?-s&4M7jsf{6`+Rtqyl1}9^US=bHaZiS|P%&8CXFb>)DlFQT&GA})xkT4x5BExTf z^CnEBsK(r&oLNw(bJ8<*#$M9u(mvHKoGUP}Yu@*HSIKdN?_3DItILxoUc7QtYV~(m z>|f^;M`J$^wz3Y0xZzXx zI)SBlo?x2~Tt|NDF79Y#1*j$7rcCm1ydvE>6rR3^- z3hX@GZOakvD~Jj|A-&}xgL;ixGGRGvaQ+=p)Yzpc$57AxQqkeMeohr^OdrMS=nP^k zq03rfWtefLLHdSw{R^(Gby=>7n$PX(wPff?|K-nqI=)UZ=1tTY#CY9r?(ako^e)WL z^bJ($7T^;M`tpX`(} zfA=qc@K6I95`Bi1gvZxEcX8Zq*IMe_;&mMdfA`1lS2*VGjVnoEb&yRXE@5%70Nfrj2V4P5ug^*&U zy#Ji-wQKfMk{?*cS{1Hylm>BV$k^_I-YJtyDOFVoDd~O2_db1l<0tx7$>uQ@)X_X;NVND5z9Fi9YO>*#0Ls9!w@5vVlQ%66ZaV#S@qS9Cv>EDn8g6RW2{E(fd=_TH4YXU4LQ;MQx2DM02lmL|7Uahi~L z!=ry=_2G751pvTZ>FB#VhJdlRjpAjBku!zq()RBl`Pay6#5NepTkEutWd8#d(3SX{ zcIhP`N_}hi%e#K>$7Yl`X(e*$<^6CZsMM8`0Vfsl8}3*7>a48T?Kgt2mb5~u5Gqpz zJN(Hz8<&+n?>H4)o-OW~GW2CWY?^S@P*7pjz%)Yaz0%62XN zhy#KjxXyE42UC4+m=I*dDOE{o3z;Xv*ERpAv0ntb70;KhA|P8_C;eVAf8s5Fo-bw5*Z>KqEqfE}R?qTzICg2~Xz zue0>NzJZC%JAkWrvYqn)ASv~TYOKr9%f-!F>#uSJS2((ag&R>G1yf>sEK)Kh$@yzd zE26EVv_ih9$y^`;c9^tRq`FEkXHHLx^Tcpfj!`#QQYN}}>cdX(mMSbGrF3!U1rDQb zsnqxsrOYFxO^K2vh<&&R^Zfh80b4KtDU4kb##O7Aas>ss1a;YQjJN@NTCQOEO$48B z?xU~Fd-+R&I$$ne{Ha$FPzhiUeP5y1U#_wRwgY+I z8_sK&F4lW!Sm5$f^d*xPco#&JiQuB-BYpWyM{j{1#%6A}w*#!skayVFM1-Kru#3-6le7BQj-4|lX1k-kAjFi`bE`sI^n8m zx`kBnsS>k=2>H8$+B|#>)^_9aKc=T_M(03Ko?visT-j)e*lLfdcl_8fVw&F1FCTBm25zmw4(3l%4`wbV zb8wRU!C%ZedPjA7qu3289YXLGra|`oQoX6`Yk{sej>P(iRu|j1pM1#(stODWViwoM zG?*fk?pvIm>@nGcs&pDTV(8xCn1sBZ!WsNlu~hj(F%RC-Z`shyN-SY1X|!QVGU|kF zE2|f&>Z8LN<3s4WKJIJ3T9$m;ol?t_^aC%jTj~+w>0PANrwVtTzLCC*m;8>Gs9Lu! 
z%%l&WU}89vk7e_ybWG-|30Cl&Bx#hHe&o-D|YfDc)$XN-n>bP z{>^~C9WVPf=GsebvCO8AUhgpTAL(bo%F_(2ce4I4RZ1=L`?u(g5ge`0Ptjs(7CTc- zVTJ;UBl(Efh|9qXTI0W0op)C1G?S5cW)k`^sQA*tB&O`;JhHS%o(lGnl<3^>V%$JW3{_(BX6ewMh^#cv zhj~tYhFV4D^n<0LiCbd7kFG2dD$Lfan?34aa~0P6qgdXLEvA6;nC?N{d|lVex|LOk zdO!j)x!JLJ2PZ~cv^&17hJ3v0ZS zsMmy~+y3f;_b^==PWU4mfSzHg*GUj``-t@;OcKf@=FxNLR=7`x*7mDc=+Xvr=ZpYn zy2_-HDx%OmWVf6(HYQm>norrVdNu4(tb7e53OrM@R)EB85jB$=o_r=ZZS5)A#;Xc&A>{iiMAL_r0(YUAN#>oo#97(l@F~4*u054&xB)UU!RGW@0dz zk%04x8e@i#=$*UL3#iG?pcKBI6j^r2FYCNn?g=fxr{!^uG32KG=ICnu2I{vl??|cv zmr(zS{Vv@HIeG`%`CYz8pM4@S{=8+4b`vgY%YDXO=R#jz6<&Q#{k;E=HBI&mMa;Ij zV4J-q$Ny40+9--F(j}kEc3;_aW$H2mv0^9vrcqxm%A{tx|GpJ_#a)XY+D3-LWx1O2 z#>k^)35ggr9g1^Xi*gbcy;p5l2P4tnf}^idp^mbc1kQ4DZh?y*)1`kYhI`d)@Q*9l zcy^!}j$Ae0mUb(x+Iz%f)nL5r6mmNl>>Qh#C;ohK32W5*i7{!NA^OVqRr#fv4epqueUah$VDkxG^fn?3Sc5E~DMH}>l-@XRNe%a5`C8c|U-`q4;6 zh!vk7{Hh3^kW1=FU`)R*kk3SJu4?L;>k2ZCoLEB-K8NJX&aEF<5Y7Z`3hFPO zsfq5H8febEqvuYrh%fzE#PWFrhFDj^g-+EY9@JIiPRNq~Oy6KS5Lj=`p$1eCXN>_` z!_%XbD+E0#rKc?|ppQYhR6{@XIL7pF#^^d$(_Wn@2GWhNkg$a&#MQHQX?sI0`s%O= z#&$nm(tj12=RZgTZb{+E+ejF0VRolp>4gdLwHW@W)AYF6X+?{lkf)`W5rX?;{QODC@Hn0aNQD-4ZFD?13k2yQ{Xzbz(Lu z69+YbnQ;1P`P>sb)y++e3nMxh|9g-Q8%XnY4%wha$N}>p$>+93=)uOzL#*itSS2Jl z9TX>ITCt(_8xebdWYZOvgu%w6cn#srl|k6%S6>IZPJ)5fdxSx#sXpG?#Won4Kn$BU zuf3-9*u`eM+*o$SlsM2#+kCw3{lIN*J@)Fb8q;Zl09# z>SW*X{(Y6Fr<=UKoN`#cPcim@0;7`p9`z|>p+A9SPd<~i3!ndakL&KjrQ~6WvISZz z*=xrwLmw#SkA8RJgPJSZHj#ZQx~`pTYOqTa_!sL2m>kY!Fo2qxcnjcT;2{&9VA_aZ z%X6fD-Nd4_X}K?7J~qucy)gt;r5(L7_$7m3OgRE1YqC1&txkMSvsQq3Nuy9|@B{Tj z*n^%A+}SPO{(XeHH`#A(aJ@@M-|S(GWAr~hc`{9mx673gPA<RnoTsRogIPBadQcPMS3NY+8k5!N_(W^+T0UK-6n^*bgjt$aQ8{StUCaXo&E<(e zTYbEmTvS|F^^2CB#@`=ZeWUCA!nz7uSRbI)3jOFy@m`|Wsl|33S!XjGU5 zooYF8K)9%W_E)Y_^%W6`&x%(sUMVvLC=-q<6ry1JR;AoCtLQyoGjI$dRxw8qHI=WioE7|K!zR@wI;@`S9e9b|kVij8OQz?OnX-M=$!d zma}UmLSSui^vFc<_>!R#CgV_uaS)CY05`Aja!poYC#au|YE^E{Srp<>jJ(LVl;{{AXMR zpS7IrvO5ro*=i z>0eVcc9m{AB}G)=K*PIG*%IWl6tiu(@>VqZkm%yb!mO+4`* z*U!-i6y$d)0|68lB*m6L2A|R6PF~Y2P1mJ)9=O4W@G*+qe>?5PTP{{jPqt6b3$|38 zr*GGJ_difRZN;0wT)iG%;9E>Dtzz0J7qG0ds;E)deVM;YF#1?F=IzXKka-%7e=cG5P?< ztGOxKG~_j(6rsq|*KcHHxF{EiAPnzfc$9y2NYsek*AoN-@LQ&8cLBb*B(~n!@z^nQ z>?7Lqu0#jNP>}*Nzb3P4ZEGv*f%KjFn#VwSJSN7Oj8YUh`HUYeVN7n0O#aFr)aw}Z zcN@LS$4dk@a}H9jxS{R6E*cB{L+$OrYq*spW&u=PXy2%EMpJZU3Qt-d>FgT*48c6dyy9Dw%Z+DO{n^#Fd4#2 zGg~N^yH|A=<^?hqu9SsMpJ^qrVFKIT<5iT=0}XYswO<(R$q)RRX?kAlRQu)qn>a3C+y+g3H?JCIHKg;1iX2>%TE~_rB?c!;fAFDh z-%xg+Gho@B-WilF@oj028 z)KDv)7b`W^`_$;Y6Jv6UFy8bxm?O!IsY~zgOb}pVM?nl%;^RbU525g#DGlJ@1l~Z+lF-++H(Rb}!GsjYj|)!sPvM-U$iTZB0j) zG5pNHg1VNnL7+@buoo@e@5q(koxcOYRJIWTZd((u`H13aY4kSJKWYpq9Uo085TZn4 z8ybdeS_Fdr?m`gO>9*(^>>n6o4O!CDQuR>Sc@d0n(|H0W$59X|EdNr+$oXFNZ-gL-dZNam3(t2=v@%2rJ0j>1;GfkUS`TQ zJ>#!$Y3!$GlT%Jk-4vaNj?R0A#ld7x83WL(ROQ_)DsW497=toCvyOQ;lkRWvmRwY{ zSX>)}X-u`%107Hc-x2Sl7;XuGR+Ka`)@IfLN83 zGQmKl0VkFL!P-iHW(urotgToYKJ!$*wwC+ys4+mX1d=B`Z@F1E9wMV63$0t>b`I66 z}HD~fW^v%07O!d`cZ1?X(Q5Q zj7J~EI4u;%OktjIiOKN}OA2Foey6@7HKmwe$>fMwx*3;jqf_Rx2T)D){AF1?%~n=( zc3YD$d%N0f=|%A}tE)qLEwcVFwM^vdl{BgdN@@V|t`X*hs#9h=Acq0&$GWjW-p{;4 zL5|!~@46#yT0Ab!CtJU^oyA#Z9l?z$E=MMZ_7Yn7EaUi0UY55ctm&Jk*R?H~A_miB z>s-D*mFXrt#a{k<2xe4)1m>g*2;k3;fbt8dGhg{ell+*t;srvBS#@hu?Rxb{0s&xt ziu8GaShE923=NIMS9%tq6yfh`KYCa$X$736oXk}{rXoZxQZhc4Lop2X!Q_& zkkpP<(qS^2l+r~VDcU^N6nT`i)T%JY&0F#tf2&+`w%VqS(X&H}-n>vEx_`a8Eq&Sy zz_uOzj89Xmjdw7q3l7%jZtjfECsR>rh5aXD$=%~8;DLmBuJJn>9-F)+0Cy#62G^Md zv-!PhtTGP8c|?Ux=HlI>pBD=%htV!vEK7^@7~DXq34rm{aTq`x4#mosd3B42y0M6_ zq`PFa7 zORBnJOL-C0ZPi9;-igy*KSP>{XYacI#rAkdoJ%*px`vtO(TCBSj+lerdz$mQ@lQk@ 
zat~YAd{EE>i8<6z4y$39A^IIpaLvz3=aL7Y`rvsh26O!~Tc}HVijpmJPVu{tckNSZ z25s-hF=;-s?UKS4{CoDnt?U7Qw!h1KQX9Qjq=NSn6Y`ZN5ALw5@Wt-^HUq*ixe=a7 zZro`(0}J4G3_&mN_!e&n>snP&5O7@mgS85%AU1yIR6;39@4&`In(q|a3_*R&tcjLLa1=?J%XsyzH>n64I zMLc)A^KiVa)rR5|-NNv2T$_O)(B2L?JPQzxS#ED|y64>XE2h~=s_$2ydiHb5f=YvU z*4OX%EU|09u|=N}Q@??Ycfs`~y*#o@>#Z{`l~pEvn_1qdv8x-1nOC9~7ir<2gpnz} zv{{`@p3qwl#hhjB-nIAcXCKBj5T$Tx3!XaWmAJ_?WY`OUDB`&07qBX3Te$U=V1 zK9qb}`rDY<0=&R-BCWVA-oO;&)izvxLi~M3eJU)LyAZfVObKvARa$?i^XV)|I7pn9 zp1vbyvb0x_&=9(~jW*fZN$KLdHv*7p4l*|K4)02>^*K*cWoi;_5r?I&_31hYUVGqT zSws*9z9xZh#$InG-V)Rn45$}16)xlK8Dk+y3-)S#-fw+*{cU(GLw0p_7 zYs#}>pWU>>Ri@is^7DRt`V8kMka+rttN3K~{K>0MUu-sB0f!Z=k_gCXlC{{2G|{}0}~ ac_62=FAHz9C Date: Tue, 14 May 2024 12:24:06 -0400 Subject: [PATCH 18/34] manual, farmers, gpu --- collections/farmers/3node_building/2_bootstrap_image.md | 4 ++-- collections/farmers/3node_building/gpu_farming.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/collections/farmers/3node_building/2_bootstrap_image.md b/collections/farmers/3node_building/2_bootstrap_image.md index 9234242..eed1aa2 100644 --- a/collections/farmers/3node_building/2_bootstrap_image.md +++ b/collections/farmers/3node_building/2_bootstrap_image.md @@ -7,7 +7,7 @@ - [Burn the Zero-OS Bootstrap Image](#burn-the-zero-os-bootstrap-image) - [CD/DVD BIOS](#cddvd-bios) - [USB Key BIOS+UEFI](#usb-key-biosuefi) - - [BalenaEtcher (MAC, Linux, Windows)](#balenaetcher-mac-linux-windows) + - [BalenaEtcher - MAC, Linux, Windows](#balenaetcher---mac-linux-windows) - [CLI (Linux)](#cli-linux) - [Rufus (Windows)](#rufus-windows) - [Additional Information (Optional)](#additional-information-optional) @@ -70,7 +70,7 @@ For the BIOS **ISO** image, download the file and burn it on a DVD. There are many ways to burn the bootstrap image on a USB key. The easiest way that works for all operating systems is to use BalenaEtcher. We also provide other methods. -#### BalenaEtcher (MAC, Linux, Windows) +#### BalenaEtcher - MAC, Linux, Windows For **MAC**, **Linux** and **Windows**, you can use [BalenaEtcher](https://www.balena.io/etcher/) to load/flash the image on a USB stick. This program also formats the USB in the process. This will work for the option **EFI IMG** for UEFI boot, and with the option **USB** for BIOS boot. Simply follow the steps presented to you and make sure you select the bootstrap image file you downloaded previously. diff --git a/collections/farmers/3node_building/gpu_farming.md b/collections/farmers/3node_building/gpu_farming.md index f0096d8..35cbf8e 100644 --- a/collections/farmers/3node_building/gpu_farming.md +++ b/collections/farmers/3node_building/gpu_farming.md @@ -35,7 +35,7 @@ We cover the basic steps to install the GPU on your 3Node. * Install the GPU on the server * Note: You might need to move or remove some pieces of your server to make room for the GPU * (Optional) Boot the 3Node with a Linux distro (e.g. 
Ubuntu) and use the terminal to check if the GPU is recognized by the system - * ``` + ``` sudo lshw -C Display ``` * Output example with an AMD Radeon (on the line `product: ...`) -- 2.40.1 From f6c654db48c5ca0c741d2df27d35f628569807a9 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:28:35 -0400 Subject: [PATCH 19/34] manual, farmers, minting receipts --- collections/farmers/3node_building/minting_receipts.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/collections/farmers/3node_building/minting_receipts.md b/collections/farmers/3node_building/minting_receipts.md index 76633d3..fbba3de 100644 --- a/collections/farmers/3node_building/minting_receipts.md +++ b/collections/farmers/3node_building/minting_receipts.md @@ -37,7 +37,9 @@ The ThreeFold Alpha minting tool will present the following information for each - TFT Farmed - Payout Address - \ No newline at end of file +- Payout Address \ No newline at end of file -- 2.40.1 From ef8576b061b76501b2e467d1d1c3d2e36ebcf9fb Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:33:54 -0400 Subject: [PATCH 21/34] manual, sysadmins, ssh --- .../farmers/farmerbot/farmerbot_quick.md | 2 +- .../getstarted/ssh_guide/ssh_openssh.md | 36 +++++++++---------- 2 files changed, 19 insertions(+), 19 deletions(-) diff --git a/collections/farmers/farmerbot/farmerbot_quick.md b/collections/farmers/farmerbot/farmerbot_quick.md index 8094789..8ec6945 100644 --- a/collections/farmers/farmerbot/farmerbot_quick.md +++ b/collections/farmers/farmerbot/farmerbot_quick.md @@ -142,7 +142,7 @@ Once you've verified that the Farmerbot runs properly, you can stop the Farmerbo It is highly recommended to set a Ubuntu systemd service to keep the Farmerbot running after exiting the VM. * Create the service file - * ``` + ``` nano /etc/systemd/system/farmerbot.service ``` * Set the Farmerbot systemd service diff --git a/collections/system_administrators/getstarted/ssh_guide/ssh_openssh.md b/collections/system_administrators/getstarted/ssh_guide/ssh_openssh.md index 4762cf2..82170db 100644 --- a/collections/system_administrators/getstarted/ssh_guide/ssh_openssh.md +++ b/collections/system_administrators/getstarted/ssh_guide/ssh_openssh.md @@ -50,13 +50,13 @@ The main steps for the whole process are the following: Here are the steps to SSH into a 3Node with IPv4 on Linux. * To create the SSH key pair, write in the terminal - * ``` + ``` ssh-keygen ``` * Save in default location * Write a password (optional) * To see the public key, write in the terminal - * ``` + ``` cat ~/.ssh/id_rsa.pub ``` * Select and copy the public key when needed @@ -72,7 +72,7 @@ Here are the steps to SSH into a 3Node with IPv4 on Linux. * To SSH into the VM once the 3Node is deployed * Copy the IPv4 address * Open the terminal, write the following with the deployment address and write **yes** to confirm - * ``` + ``` ssh root@IPv4_address ``` @@ -92,13 +92,13 @@ Here are the steps to SSH into a 3Node with the Planetary Network on Linux. * Disconnect your VPN if you have one * In the connector, click `Connect` * To create the SSH key pair, write in the terminal - * ``` + ``` ssh-keygen ``` * Save in default location * Write a password (optional) * To see the public key, write in the terminal - * ``` + ``` cat ~/.ssh/id_rsa.pub ``` * Select and copy the public key when needed @@ -114,7 +114,7 @@ Here are the steps to SSH into a 3Node with the Planetary Network on Linux. 
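Before running the SSH command in the steps below, it can help to first confirm that the Planetary Network address is reachable. One quick check, with the placeholder standing in for your deployment's address, could be:

```bash
# Planetary Network addresses are IPv6, so use the IPv6 variant of ping
ping6 -c 4 planetary_network_address
```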
* To SSH into the VM once the 3Node is deployed * Copy the Planetary Network address * Open the terminal, write the following with the deployment address and write **yes** to confirm - * ``` + ``` ssh root@planetary_network_address ``` @@ -129,13 +129,13 @@ You now have an SSH connection on Linux with the Planetary Network. Here are the steps to SSH into a 3Node with IPv4 on MAC. * To create the SSH key pair, in the terminal write - * ``` + ``` ssh-keygen ``` * Save in default location * Write a password (optional) * To see the public key, write in the terminal - * ``` + ``` cat ~/.ssh/id_rsa.pub ``` * Select and copy the public key when needed @@ -151,7 +151,7 @@ Here are the steps to SSH into a 3Node with IPv4 on MAC. * To SSH into the VM once the 3Node is deployed * Copy the IPv4 address * Open the terminal, write the following with the deployment address and write **yes** to confirm - * ``` + ``` ssh root@IPv4_address ``` @@ -170,13 +170,13 @@ Here are the steps to SSH into a 3Node with the Planetary Network on MAC. * Disconnect your VPN if you have one * In the connector, click `Connect` * To create the SSH key pair, write in the terminal - * ``` + ``` ssh-keygen ``` * Save in default location * Write a password (optional) * To see the public key, write in the terminal - * ``` + ``` cat ~/.ssh/id_rsa.pub ``` * Select and copy the public key when needed @@ -192,7 +192,7 @@ Here are the steps to SSH into a 3Node with the Planetary Network on MAC. * To SSH into the VM once the 3Node is deployed * Copy the Planetary Network address * Open the terminal, write the following with the deployment address and write **yes** to confirm - * ``` + ``` ssh root@planetary_network_address ``` @@ -214,13 +214,13 @@ You now have an SSH connection on MAC with the Planetary Network. * Search OpenSSH * Install OpenSSH Client and OpenSSH Server * To create the SSH key pair, open `PowerShell` and write - * ``` + ``` ssh-keygen ``` * Save in default location * Write a password (optional) * To see the public key, write in `PowerShell` - * ``` + ``` cat ~/.ssh/id_rsa.pub ``` * Select and copy the public key when needed @@ -236,7 +236,7 @@ You now have an SSH connection on MAC with the Planetary Network. * To SSH into the VM once the 3Node is deployed * Copy the IPv4 address * Open `PowerShell`, write the following with the deployment address and write **yes** to confirm - * ``` + ``` ssh root@IPv4_address ``` @@ -262,13 +262,13 @@ You now have an SSH connection on Window with IPv4. * Search OpenSSH * Install OpenSSH Client and OpenSSH Server * To create the SSH key pair, open `PowerShell` and write - * ``` + ``` ssh-keygen ``` * Save in default location * Write a password (optional) * To see the public key, write in `PowerShell` - * ``` + ``` cat ~/.ssh/id_rsa.pub ``` * Select and copy the public key when needed @@ -284,7 +284,7 @@ You now have an SSH connection on Window with IPv4. 
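One caveat that applies to every platform covered in this guide: if a VM is later redeployed on the same address, OpenSSH will refuse to connect because the remembered host key no longer matches. Removing the stale entry fixes this; the address below is a placeholder for your own:

```bash
# Forget the old host key of a redeployed VM (works the same in PowerShell)
ssh-keygen -R VM_IPv4_address
```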
* To SSH into the VM once the 3Node is deployed * Copy the Planetary Network address * Open `PowerShell`, write the following with the deployment address and write **yes** to confirm - * ``` + ``` ssh root@planetary_network_address ``` -- 2.40.1 From 2ac97d936c2660c25eaafb240113079a98a6477c Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:35:37 -0400 Subject: [PATCH 22/34] manual, sysadmins, wg --- .../getstarted/ssh_guide/ssh_wireguard.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/collections/system_administrators/getstarted/ssh_guide/ssh_wireguard.md b/collections/system_administrators/getstarted/ssh_guide/ssh_wireguard.md index 69fa20a..91cc986 100644 --- a/collections/system_administrators/getstarted/ssh_guide/ssh_wireguard.md +++ b/collections/system_administrators/getstarted/ssh_guide/ssh_wireguard.md @@ -69,19 +69,19 @@ To set the WireGuard connection on Linux or MAC, create a WireGuard configuratio * Copy the content **WireGuard Config** from the Dashboard **Details** window * Paste the content to a file with the extension `.conf` (e.g. **wg.conf**) in the directory `/etc/wireguard` - * ``` + ``` sudo nano /etc/wireguard/wg.conf ``` * Start WireGuard with the command **wg-quick** and, as a parameter, pass the configuration file without the extension (e.g. *wg.conf -> wg*) - * ``` + ``` wg-quick up wg ``` * Note that you can also specify a config file by path, stored in any location - * ``` + ``` wg-quick up /etc/wireguard/wg.conf ``` * If you want to stop the WireGuard service, you can write the following in the terminal - * ``` + ``` wg-quick down wg ``` @@ -105,7 +105,7 @@ To set the WireGuard connection on Windows, add and activate a tunnel with the W As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VM to make sure the WireGuard connection is properly established. Make sure to replace `VM_WireGuard_IP` with the proper WireGuard IP address: * Ping the deployment - * ``` + ``` ping VM_WireGuard_IP ``` @@ -116,7 +116,7 @@ As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-th To SSH into the deployment with Wireguard, use the **WireGuard IP** shown in the Dashboard **Details** window. * SSH into the deployment - * ``` + ``` ssh root@VM_WireGuard_IP ``` -- 2.40.1 From 3b2e0f0241d82479c9a3696285e41d955be4f846 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:37:07 -0400 Subject: [PATCH 23/34] manual, sysadmins, gui --- .../cockpit_guide/cockpit_guide.md | 54 +++++++++---------- .../guacamole_guide/guacamole_guide.md | 40 +++++++------- .../xrdp_guide/xrdp_guide.md | 50 ++++++++--------- 3 files changed, 72 insertions(+), 72 deletions(-) diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md b/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md index fb7bae4..2686e18 100644 --- a/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md +++ b/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md @@ -34,38 +34,38 @@ To start, you must [deploy and SSH into a full VM](ssh_guide.md). 
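The steps below create a root-capable user before Cockpit is installed. Once they are done, a quick sanity check that the new account really has sudo rights (using the `newuser` name from this guide) might be:

```bash
# Should print "root" after prompting for newuser's password
su - newuser -c 'sudo whoami'
```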
* With an IPv4 Address * After deployment, copy the IPv4 address * Connect into the VM via SSH - * ``` + ``` ssh root@VM_IPv4_address ``` * Create a new user with root access * Here we use `newuser` as an example - * ``` + ``` adduser newuser ``` * To see the directory of the new user - * ``` + ``` ls /home ``` * Give sudo capacity to the new user - * ``` + ``` usermod -aG sudo newuser ``` * Make the new user accessible by SSH - * ``` + ``` su - newuser ``` - * ``` + ``` mkdir ~/.ssh ``` - * ``` + ``` nano ~/.ssh/authorized_keys ``` * add the authorized public key in the file, then save and quit * Exit the VM and reconnect with the new user - * ``` + ``` exit ``` - * ``` + ``` ssh newuser@VM_IPv4_address ``` @@ -74,11 +74,11 @@ To start, you must [deploy and SSH into a full VM](ssh_guide.md). ## Set the VM and Install Cockpit * Update and upgrade the VM - * ``` + ``` sudo apt update -y && sudo apt upgrade -y && sudo apt-get update -y ``` * Install Cockpit - * ``` + ``` . /etc/os-release && sudo apt install -t ${UBUNTU_CODENAME}-backports cockpit -y ``` @@ -89,24 +89,24 @@ To start, you must [deploy and SSH into a full VM](ssh_guide.md). We now change the system daemon that manages network configurations. We will be using [NetworkManager](https://networkmanager.dev/) instead of [networkd](https://wiki.archlinux.org/title/systemd-networkd). This will give us further possibilities on Cockpit. * Install NetworkManager. Note that it might already be installed. - * ``` + ``` sudo apt install network-manager -y ``` * Update the `.yaml` file * Go to netplan's directory - * ``` + ``` cd /etc/netplan ``` * Search for the proper `.yaml` file name - * ``` + ``` ls -l ``` * Update the `.yaml` file - * ``` + ``` sudo nano 50-cloud-init.yaml ``` * Add the following lines under `network:` - * ``` + ``` version: 2 renderer: NetworkManager ``` @@ -114,22 +114,22 @@ We now change the system daemon that manages network configurations. We will be * Remove `version: 2` at the bottom of the file * Save and exit the file * Disable networkd and enable NetworkManager - * ``` + ``` sudo systemctl disable systemd-networkd ``` - * ``` + ``` sudo systemctl enable NetworkManager ``` * Apply netplan to set NetworkManager - * ``` + ``` sudo netplan apply ``` * Reboot the system to load the new kernel and to properly set NetworkManager - * ``` + ``` sudo reboot ``` * Reconnect to the VM - * ``` + ``` ssh newuser@VM_IPv4_address ``` @@ -139,24 +139,24 @@ We now change the system daemon that manages network configurations. We will be We now set a firewall. We note that [ufw](https://wiki.ubuntu.com/UncomplicatedFirewall) is not compatible with Cockpit and for this reason, we will be using [firewalld](https://firewalld.org/). * Install firewalld - * ``` + ``` sudo apt install firewalld -y ``` * Add Cockpit to firewalld - * ``` + ``` sudo firewall-cmd --add-service=cockpit ``` - * ``` + ``` sudo firewall-cmd --add-service=cockpit --permanent ``` * See if Cockpit is available - * ``` + ``` sudo firewall-cmd --info-service=cockpit ``` * See the status of firewalld - * ``` + ``` sudo firewall-cmd --state ``` @@ -165,7 +165,7 @@ We now set a firewall. 
We note that [ufw](https://wiki.ubuntu.com/UncomplicatedF ## Access Cockpit * On your web browser, write the following URL with the proper VM IPv4 address - * ``` + ``` VM_IPv4_Address:9090 ``` * Enter the username and password of the root-access user diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md b/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md index 6a6738e..f0381ff 100644 --- a/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md +++ b/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md @@ -37,30 +37,30 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th * Minimum storage: 15Gb * After deployment, note the VM IPv4 address * Connect to the VM via SSH - * ``` + ``` ssh root@VM_IPv4_address ``` * Once connected, create a new user with root access (for this guide we use "newuser") - * ``` + ``` adduser newuser ``` * You should now see the new user directory - * ``` + ``` ls /home ``` * Give sudo capacity to the new user - * ``` + ``` usermod -aG sudo newuser ``` * Make the new user accessible by SSH - * ``` + ``` su - newuser ``` - * ``` + ``` mkdir ~/.ssh ``` * Add authorized public key in the file and save it - * ``` + ``` nano ~/.ssh/authorized_keys ``` * Exit the VM and reconnect with the new user @@ -70,21 +70,21 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th ## SSH with Root-Access User, Install Prerequisites and Apache Guacamole * SSH into the VM - * ``` + ``` ssh newuser@VM_IPv4_address ``` * Update and upgrade Ubuntu - * ``` + ``` sudo apt update && sudo apt upgrade -y && sudo apt-get install software-properties-common -y ``` * Download and run Apache Guacamole - * ``` + ``` wget -O guac-install.sh https://git.io/fxZq5 ``` - * ``` + ``` chmod +x guac-install.sh ``` - * ``` + ``` sudo ./guac-install.sh ``` @@ -93,11 +93,11 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th ## Access Apache Guacamole and Create Admin-Access User * On your local computer, open a browser and write the following URL with the proper IPv4 address - * ``` + ``` https://VM_IPv4_address:8080/guacamole ``` * On Guacamole, enter the following for both the username and the password - * ``` + ``` guacadmin ``` * Download the [TOTP](https://totp.app/) app on your Android or iOS @@ -120,23 +120,23 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th ## Download the Desktop Environment and Run xrdp * Download a Ubuntu desktop environment on the VM - * ``` + ``` sudo apt install tasksel -y && sudo apt install lightdm -y ``` * Choose lightdm * Run tasksel and choose `ubuntu desktop` - * ``` + ``` sudo tasksel ``` * Download and run xrdp - * ``` + ``` wget https://c-nergy.be/downloads/xRDP/xrdp-installer-1.4.6.zip ``` - * ``` + ``` unzip xrdp-installer-1.4.6.zip ``` - * ``` + ``` bash xrdp-installer-1.4.6.sh ``` @@ -146,7 +146,7 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th * Create an RDP connection on Guacamole * Open Guacamole - * ``` + ``` http://VM_IPv4_address:8080/guacamole/ ``` * Go to Settings diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md b/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md index 24f2389..8b47a0a 100644 --- 
a/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md +++ b/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md @@ -31,107 +31,107 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th * With an IPv4 Address * After deployment, copy the IPv4 address * To SSH into the VM, write in the terminal - * ``` + ``` ssh root@VM_IPv4_address ``` * Once connected, update, upgrade and install the desktop environment * Update - * ``` + ``` sudo apt update -y && sudo apt upgrade -y ``` * Install a light-weight desktop environment (Xfce) - * ``` + ``` sudo apt install xfce4 xfce4-goodies -y ``` * Create a user with root access - * ``` + ``` adduser newuser ``` - * ``` + ``` ls /home ``` * You should see the newuser directory * Give sudo capacity to newuser - * ``` + ``` usermod -aG sudo newuser ``` * Make newuser accessible by SSH - * ``` + ``` su - newuser ``` - * ``` + ``` mkdir ~/.ssh ``` - * ``` + ``` nano ~/.ssh/authorized_keys ``` * add authorized public key in file and save * Exit the VM and reconnect with new user - * ``` + ``` exit ``` * Reconnect to the VM terminal and install XRDP - * ``` + ``` ssh newuser@VM_IPv4_address ``` * Install XRDP - * ``` + ``` sudo apt install xrdp -y ``` * Check XRDP status - * ``` + ``` sudo systemctl status xrdp ``` * If not running, run manually: - * ``` + ``` sudo systemctl start xrdp ``` * If needed, configure xrdp (optional) - * ``` + ``` sudo nano /etc/xrdp/xrdp.ini ``` * Create a session with root-access user Move to home directory * Go to home directory of root-access user - * ``` + ``` cd ~ ``` * Create session - * ``` + ``` echo "xfce4-session" | tee .xsession ``` * Restart the server - * ``` + ``` sudo systemctl restart xrdp ``` * Find your local computer IP address * On your local computer terminal, write - * ``` + ``` curl ifconfig.me ``` * On the VM terminal, allow client computer port to the firewall (ufw) - * ``` + ``` sudo ufw allow from your_local_ip/32 to any port 3389 ``` * Allow SSH connection to your firewall - * ``` + ``` sudo ufw allow ssh ``` * Verify status of the firewall - * ``` + ``` sudo ufw status ``` * If not active, do the following: - * ``` + ``` sudo ufw disable ``` - * ``` + ``` sudo ufw enable ``` * Then the ufw status should show changes - * ``` + ``` sudo ufw status ``` -- 2.40.1 From f0683c7c46fd5f52dd68d29d916737a4722b3993 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:42:19 -0400 Subject: [PATCH 24/34] manual, sysadmins, gui --- .../cockpit_guide/cockpit_guide.md | 89 +++++++++---------- .../guacamole_guide/guacamole_guide.md | 14 +-- 2 files changed, 50 insertions(+), 53 deletions(-) diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md b/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md index 2686e18..e9805d3 100644 --- a/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md +++ b/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md @@ -39,48 +39,45 @@ To start, you must [deploy and SSH into a full VM](ssh_guide.md). 
``` * Create a new user with root access * Here we use `newuser` as an example - ``` - adduser newuser - ``` +``` +adduser newuser +``` * To see the directory of the new user - ``` - ls /home - ``` +``` +ls /home +``` * Give sudo capacity to the new user - ``` - usermod -aG sudo newuser - ``` +``` +usermod -aG sudo newuser +``` * Make the new user accessible by SSH - ``` - su - newuser - ``` - ``` - mkdir ~/.ssh - ``` - ``` - nano ~/.ssh/authorized_keys - ``` - * add the authorized public key in the file, then save and quit - * Exit the VM and reconnect with the new user - ``` - exit - ``` - ``` - ssh newuser@VM_IPv4_address - ``` +``` +su - newuser +mkdir ~/.ssh +nano ~/.ssh/authorized_keys +``` +* Add the authorized public key in the file, then save and quit +* Exit the VM +``` +exit +``` + * Reconnect with the new user +``` +ssh newuser@VM_IPv4_address +``` ## Set the VM and Install Cockpit * Update and upgrade the VM - ``` - sudo apt update -y && sudo apt upgrade -y && sudo apt-get update -y - ``` +``` +sudo apt update -y && sudo apt upgrade -y && sudo apt-get update -y +``` * Install Cockpit - ``` - . /etc/os-release && sudo apt install -t ${UBUNTU_CODENAME}-backports cockpit -y - ``` +``` +. /etc/os-release && sudo apt install -t ${UBUNTU_CODENAME}-backports cockpit -y +``` @@ -94,23 +91,23 @@ We now change the system daemon that manages network configurations. We will be ``` * Update the `.yaml` file * Go to netplan's directory - ``` - cd /etc/netplan - ``` +``` +cd /etc/netplan +``` * Search for the proper `.yaml` file name - ``` - ls -l - ``` +``` +ls -l +``` * Update the `.yaml` file - ``` - sudo nano 50-cloud-init.yaml - ``` +``` +sudo nano 50-cloud-init.yaml +``` * Add the following lines under `network:` - ``` - version: 2 - renderer: NetworkManager - ``` - * Note that these two lines should be aligned with `ethernets:` + ``` + version: 2 + renderer: NetworkManager + ``` +* Note that these two lines should be aligned with `ethernets:` * Remove `version: 2` at the bottom of the file * Save and exit the file * Disable networkd and enable NetworkManager diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md b/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md index f0381ff..9eb5fcb 100644 --- a/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md +++ b/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md @@ -120,14 +120,14 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th ## Download the Desktop Environment and Run xrdp * Download a Ubuntu desktop environment on the VM - ``` - sudo apt install tasksel -y && sudo apt install lightdm -y - ``` - * Choose lightdm +``` +sudo apt install tasksel -y && sudo apt install lightdm -y +``` +* Choose lightdm * Run tasksel and choose `ubuntu desktop` - ``` - sudo tasksel - ``` + ``` + sudo tasksel + ``` * Download and run xrdp ``` -- 2.40.1 From bd835ef00ba752033fe94d27b63965a7f376cea4 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:43:45 -0400 Subject: [PATCH 25/34] manual, sysadmins, gui --- .../xrdp_guide/xrdp_guide.md | 102 +++++++++--------- 1 file changed, 51 insertions(+), 51 deletions(-) diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md b/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md index 
8b47a0a..444e96a 100644 --- a/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md +++ b/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md @@ -36,44 +36,44 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th ``` * Once connected, update, upgrade and install the desktop environment * Update - ``` - sudo apt update -y && sudo apt upgrade -y - ``` +``` +sudo apt update -y && sudo apt upgrade -y +``` * Install a light-weight desktop environment (Xfce) - ``` - sudo apt install xfce4 xfce4-goodies -y - ``` +``` +sudo apt install xfce4 xfce4-goodies -y +``` * Create a user with root access - ``` - adduser newuser - ``` - ``` - ls /home - ``` - * You should see the newuser directory +``` +adduser newuser +``` +``` +ls /home +``` + * You should see the newuser directory * Give sudo capacity to newuser - ``` - usermod -aG sudo newuser - ``` + ``` + usermod -aG sudo newuser + ``` * Make newuser accessible by SSH - ``` - su - newuser - ``` - ``` - mkdir ~/.ssh - ``` - ``` - nano ~/.ssh/authorized_keys - ``` - * add authorized public key in file and save + ``` + su - newuser + ``` + ``` + mkdir ~/.ssh + ``` + ``` + nano ~/.ssh/authorized_keys + ``` + * add authorized public key in file and save * Exit the VM and reconnect with new user - ``` - exit - ``` +``` +exit +``` * Reconnect to the VM terminal and install XRDP - ``` - ssh newuser@VM_IPv4_address - ``` +``` +ssh newuser@VM_IPv4_address +``` * Install XRDP ``` sudo apt install xrdp -y @@ -83,9 +83,9 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th sudo systemctl status xrdp ``` * If not running, run manually: - ``` - sudo systemctl start xrdp - ``` +``` +sudo systemctl start xrdp +``` * If needed, configure xrdp (optional) ``` sudo nano /etc/xrdp/xrdp.ini @@ -93,9 +93,9 @@ If you are new to the Threefold ecosystem and you want to deploy workloads on th * Create a session with root-access user Move to home directory * Go to home directory of root-access user - ``` - cd ~ - ``` +``` +cd ~ +``` * Create session ``` echo "xfce4-session" | tee .xsession @@ -107,9 +107,9 @@ Move to home directory * Find your local computer IP address * On your local computer terminal, write - ``` - curl ifconfig.me - ``` +``` +curl ifconfig.me +``` * On the VM terminal, allow client computer port to the firewall (ufw) ``` @@ -124,16 +124,16 @@ Move to home directory sudo ufw status ``` * If not active, do the following: - ``` - sudo ufw disable - ``` - ``` - sudo ufw enable - ``` +``` +sudo ufw disable +``` +``` +sudo ufw enable +``` * Then the ufw status should show changes - ``` - sudo ufw status - ``` +``` +sudo ufw status +``` ## Client Side: Install Remote Desktop Connection for Windows, MAC or Linux @@ -149,7 +149,7 @@ Simply download the app, open it and write the IPv4 address of the VM. 
You then * [Remote Desktop Connection app](https://apps.microsoft.com/store/detail/microsoft-remote-desktop/9WZDNCRFJ3PS?hl=en-ca&gl=ca&rtc=1) * MAC * Download in app store - * [Microsoft Remote Desktop Connection app](https://apps.apple.com/ca/app/microsoft-remote-desktop/id1295203466?mt=12) +* [Microsoft Remote Desktop Connection app](https://apps.apple.com/ca/app/microsoft-remote-desktop/id1295203466?mt=12) * Linux * [Remmina RDP Client](https://remmina.org/) -- 2.40.1 From 982bd66987356797a6e7fe2a099330ea7a6454cc Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:47:52 -0400 Subject: [PATCH 26/34] manual, sysadmins, terra --- .../resources/terraform_qsfs_on_microvm.md | 48 +++++++++---------- .../terraform/terraform_basics.md | 10 ++-- .../terraform/terraform_full_vm.md | 16 +++---- 3 files changed, 37 insertions(+), 37 deletions(-) diff --git a/collections/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md b/collections/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md index dcc2a18..b655166 100644 --- a/collections/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md +++ b/collections/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md @@ -65,31 +65,31 @@ We present two different methods to create the Terraform files. In the first met Creating the Terraform files is very straightforward. We want to clone the repository `terraform-provider-grid` locally and run some simple commands to properly set and start the deployment. * Clone the repository `terraform-provider-grid` - * ``` + ``` git clone https://github.com/threefoldtech/terraform-provider-grid ``` * Go to the subdirectory containing the examples - * ``` + ``` cd terraform-provider-grid/examples/resources/qsfs ``` * Set your own mnemonics (replace `mnemonics words` with your own mnemonics) - * ``` + ``` export MNEMONICS="mnemonics words" ``` * Set the network (replace `network` by the desired network, e.g. `dev`, `qa`, `test` or `main`) - * ``` + ``` export NETWORK="network" ``` * Initialize the Terraform deployment - * ``` + ``` terraform init ``` * Apply the Terraform deployment - * ``` + ``` terraform apply ``` * At any moment, you can destroy the deployment with the following line - * ``` + ``` terraform destroy ``` @@ -100,21 +100,21 @@ When using this method, you might need to change some parameters within the `mai For this method, we use two files to deploy with Terraform. The first file contains the environment variables (**credentials.auto.tfvars**) and the second file contains the parameters to deploy our workloads (**main.tf**). To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file, but only the file **credentials.auto.tfvars**. * Open the terminal and go to the home directory (optional) - * ``` + ``` cd ~ ``` * Create the folder `terraform` and the subfolder `deployment-qsfs-microvm`: - * ``` - mkdir -p terraform && cd $_ - ``` - * ``` - mkdir deployment-qsfs-microvm && cd $_ - ``` + ``` + mkdir -p terraform && cd $_ + ``` + ``` + mkdir deployment-qsfs-microvm && cd $_ + ``` * Create the `main.tf` file: - * ``` - nano main.tf - ``` + ``` + nano main.tf + ``` * Copy the `main.tf` content and save the file. @@ -274,12 +274,12 @@ output "ygg_ip" { Note that we named the VM as **vm1**. 
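Once the deployment described further below has been applied, the outputs declared in this file (such as `ygg_ip` above) can be reprinted at any time without re-running a plan. A quick sketch of querying a single output:

```
terraform output ygg_ip
```

Running `terraform output` with no argument prints all declared outputs at once.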
* Create the `credentials.auto.tfvars` file: - * ``` - nano credentials.auto.tfvars - ``` + ``` + nano credentials.auto.tfvars + ``` * Copy the `credentials.auto.tfvars` content and save the file. - * ```terraform + ```terraform # Network network = "main" @@ -311,17 +311,17 @@ For the section QSFS Parameters, you can decide on how many VMs your data will b We now deploy the QSFS deployment with Terraform. Make sure that you are in the correct folder `terraform/deployment-qsfs-microvm` containing the main and variables files. * Initialize Terraform by writing the following in the terminal: - * ``` + ``` terraform init ``` * Apply the Terraform deployment: - * ``` + ``` terraform apply ``` * Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment. Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: - * ``` + ``` terraform show ``` diff --git a/collections/system_administrators/terraform/terraform_basics.md b/collections/system_administrators/terraform/terraform_basics.md index 9d6e34e..a302399 100644 --- a/collections/system_administrators/terraform/terraform_basics.md +++ b/collections/system_administrators/terraform/terraform_basics.md @@ -59,15 +59,15 @@ There are two options when it comes to finding a node to deploy on. You can use We cover the basic preparations beforing explaining the main file. - Make a directory for your project - - ``` + ``` mkdir myfirstproject ``` - Change directory - - ``` + ``` cd myfirstproject ``` - Create a main file and insert content - - ``` + ``` nano main.tf ``` @@ -109,11 +109,11 @@ provider "grid" { When writing the main file, you can decide to leave a variable content empty. In this case you can export the variable content as environment variables. * Export your mnemonics - * ``` + ``` export MNEMONICS="..." ``` * Export the network - * ``` + ``` export NETWORK="..." ``` diff --git a/collections/system_administrators/terraform/terraform_full_vm.md b/collections/system_administrators/terraform/terraform_full_vm.md index 860fc76..161858f 100644 --- a/collections/system_administrators/terraform/terraform_full_vm.md +++ b/collections/system_administrators/terraform/terraform_full_vm.md @@ -94,20 +94,20 @@ Open the terminal. - Go to the home folder - - ``` + ``` cd ~ ``` - Create the folder `terraform` and the subfolder `deployment-full-vm`: - - ``` + ``` mkdir -p terraform/deployment-full-vm ``` - - ``` + ``` cd terraform/deployment-full-vm ``` - Create the `main.tf` file: - - ``` + ``` nano main.tf ``` @@ -210,7 +210,7 @@ In this file, we name the VM as `vm1`. - Create the `credentials.auto.tfvars` file: - - ``` + ``` nano credentials.auto.tfvars ``` @@ -239,12 +239,12 @@ We now deploy the full VM with Terraform. Make sure that you are in the correct - Initialize Terraform: - - ``` + ``` terraform init ``` - Apply Terraform to deploy the full VM: - - ``` + ``` terraform apply ``` @@ -255,7 +255,7 @@ After deployments, take note of the 3Node' IPv4 address. 
You will need this addr ## SSH into the 3Node - To [SSH into the 3Node](ssh_guide.md), write the following: - - ``` + ``` ssh root@VM_IPv4_Address ``` -- 2.40.1 From 9d539a31f161a6cd8ddfdeffcd0a5995b1b44f14 Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:55:11 -0400 Subject: [PATCH 27/34] manual, sysadmins, terra nc --- .../terraform_mariadb_synced_databases.md | 84 +++++------ .../advanced/terraform_nextcloud_aio.md | 22 +-- .../advanced/terraform_nextcloud_redundant.md | 132 +++++++++--------- .../advanced/terraform_nextcloud_single.md | 78 +++++------ .../advanced/terraform_nextcloud_vpn.md | 26 ++-- .../terraform/advanced/terraform_nomad.md | 24 ++-- .../advanced/terraform_wireguard_ssh.md | 22 +-- .../advanced/terraform_wireguard_vpn.md | 31 ++-- 8 files changed, 209 insertions(+), 210 deletions(-) diff --git a/collections/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md b/collections/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md index bf911db..3a184ec 100644 --- a/collections/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md +++ b/collections/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md @@ -102,19 +102,19 @@ Modify the variable files to take into account your own seed phras and SSH keys. Open the terminal. * Go to the home folder - * ``` + ``` cd ~ ``` * Create the folder `terraform` and the subfolder `deployment-synced-db`: - * ``` + ``` mkdir -p terraform/deployment-synced-db ``` - * ``` + ``` cd terraform/deployment-synced-db ``` * Create the `main.tf` file: - * ``` + ``` nano main.tf ``` @@ -259,12 +259,12 @@ In this file, we name the first VM as `vm1` and the second VM as `vm2`. For ease In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2`is 10.1.4.2. This might be different during your own deployment. If so, change the codes in this guide accordingly. * Create the `credentials.auto.tfvars` file: - * ``` + ``` nano credentials.auto.tfvars ``` * Copy the `credentials.auto.tfvars` content and save the file. - * ``` + ``` mnemonics = "..." SSH_KEY = "..." @@ -285,19 +285,19 @@ Make sure to add your own seed phrase and SSH public key. You will also need to We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-synced-db` with the main and variables files. * Initialize Terraform: - * ``` + ``` terraform init ``` * Apply Terraform to deploy the VPN: - * ``` + ``` terraform apply ``` After deployments, take note of the 3Nodes' IPv4 address. You will need those addresses to SSH into the 3Nodes. 
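If you are scripting these steps, the interactive confirmation prompt can be skipped. This is a sketch to be used with care, since it applies changes without review:

```
terraform apply -auto-approve
```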
Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: - * ``` + ``` terraform show ``` @@ -306,7 +306,7 @@ Note that, at any moment, if you want to see the information on your Terraform d ### SSH into the 3Nodes * To [SSH into the 3Nodes](ssh_guide.md), write the following while making sure to set the proper IP address for each VM: - * ``` + ``` ssh root@3node_IPv4_Address ``` @@ -315,11 +315,11 @@ Note that, at any moment, if you want to see the information on your Terraform d ### Preparing the VMs for the Deployment * Update and upgrade the system - * ``` + ``` apt update && sudo apt upgrade -y && sudo apt-get install apache2 -y ``` * After download, you might need to reboot the system for changes to be fully taken into account - * ``` + ``` reboot ``` * Reconnect to the VMs @@ -333,19 +333,19 @@ We now want to ping the VMs using Wireguard. This will ensure the connection is First, we set Wireguard with the Terraform output. * On your local computer, take the Terraform's `wg_config` output and create a `wg.conf` file in the directory `/usr/local/etc/wireguard/wg.conf`. - * ``` + ``` nano /usr/local/etc/wireguard/wg.conf ``` * Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The WireGuard output stands in between `EOT`. * Start the WireGuard on your local computer: - * ``` + ``` wg-quick up wg ``` * To stop the wireguard service: - * ``` + ``` wg-quick down wg ``` @@ -353,10 +353,10 @@ First, we set Wireguard with the Terraform output. This should set everything properly. * As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct: - * ``` + ``` ping 10.1.3.2 ``` - * ``` + ``` ping 10.1.4.2 ``` @@ -371,11 +371,11 @@ For more information on WireGuard, notably in relation to Windows, please read [ ## Download MariaDB and Configure the Database * Download the MariaDB server and client on both the master VM and the worker VM - * ``` + ``` apt install mariadb-server mariadb-client -y ``` * Configure the MariaDB database - * ``` + ``` nano /etc/mysql/mariadb.conf.d/50-server.cnf ``` * Do the following changes @@ -392,12 +392,12 @@ For more information on WireGuard, notably in relation to Windows, please read [ ``` * Restart MariaDB - * ``` + ``` systemctl restart mysql ``` * Launch Mariadb - * ``` + ``` mysql ``` @@ -406,7 +406,7 @@ For more information on WireGuard, notably in relation to Windows, please read [ ## Create User with Replication Grant * Do the following on both the master and the worker - * ``` + ``` CREATE USER 'repuser'@'%' IDENTIFIED BY 'password'; GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ; FLUSH PRIVILEGES; @@ -429,17 +429,17 @@ For more information on WireGuard, notably in relation to Windows, please read [ ### TF Template Worker Server Data * Write the following in the Worker VM - * ``` + ``` CHANGE MASTER TO MASTER_HOST='10.1.3.2', MASTER_USER='repuser', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=328; ``` - * ``` + ``` start slave; ``` - * ``` + ``` show slave status\G; ``` @@ -448,17 +448,17 @@ For more information on WireGuard, notably in relation to Windows, please read [ ### TF Template Master Server Data * Write the following in the Master VM - * ``` + ``` CHANGE MASTER TO MASTER_HOST='10.1.4.2', MASTER_USER='repuser', 
MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=328; ``` - * ``` + ``` start slave; ``` - * ``` + ``` show slave status\G; ``` @@ -503,71 +503,71 @@ We now set the MariaDB database. You should choose your own username and passwor We will now install and set [GlusterFS](https://www.gluster.org/), a free and open-source software scalable network filesystem. * Install GlusterFS on both the master and worker VMs - * ``` + ``` add-apt-repository ppa:gluster/glusterfs-7 -y && apt install glusterfs-server -y ``` * Start the GlusterFS service on both VMs - * ``` + ``` systemctl start glusterd.service && systemctl enable glusterd.service ``` * Set the master to worker probe IP on the master VM: - * ``` + ``` gluster peer probe 10.1.4.2 ``` * See the peer status on the worker VM: - * ``` + ``` gluster peer status ``` * Set the master and worker IP address on the master VM: - * ``` + ``` gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force ``` * Start Gluster: - * ``` + ``` gluster volume start vol1 ``` * Check the status on the worker VM: - * ``` + ``` gluster volume status ``` * Mount the server with the master IP on the master VM: - * ``` + ``` mount -t glusterfs 10.1.3.2:/vol1 /var/www ``` * See if the mount is there on the master VM: - * ``` + ``` df -h ``` * Mount the Server with the worker IP on the worker VM: - * ``` + ``` mount -t glusterfs 10.1.4.2:/vol1 /var/www ``` * See if the mount is there on the worker VM: - * ``` + ``` df -h ``` We now update the mount with the filse fstab on both master and worker. * To prevent the mount from being aborted if the server reboot, write the following on both servers: - * ``` + ``` nano /etc/fstab ``` * Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2): - * ``` + ``` 10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 ``` * Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2): - * ``` + ``` 10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 ``` diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_aio.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_aio.md index 87dcf9b..290b988 100644 --- a/collections/system_administrators/terraform/advanced/terraform_nextcloud_aio.md +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_aio.md @@ -46,33 +46,33 @@ For our security rules, we want to allow SSH, HTTP and HTTPS (443 and 8443). We thus add the following rules: * Allow SSH (port 22) - * ``` + ``` ufw allow ssh ``` * Allow HTTP (port 80) - * ``` + ``` ufw allow http ``` * Allow https (port 443) - * ``` + ``` ufw allow https ``` * Allow port 8443 - * ``` + ``` ufw allow 8443 ``` * Allow port 3478 for Nextcloud Talk - * ``` + ``` ufw allow 3478 ``` * To enable the firewall, write the following: - * ``` + ``` ufw enable ``` * To see the current security rules, write the following: - * ``` + ``` ufw status verbose ``` @@ -90,7 +90,7 @@ You now have enabled the firewall with proper security rules for your Nextcloud * TTL: Automatic * It might take up to 30 minutes to set the DNS properly. 
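 * While you wait, you can query the record from a terminal with `dig`. A quick sketch, assuming the example domain used later in this guide in place of your own:
    ```
    dig +short nextcloudwebsite.com A
    ```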
* To check if the A record has been registered, you can use a common DNS checker: - * ``` + ``` https://dnschecker.org/#A/ ``` @@ -101,11 +101,11 @@ You now have enabled the firewall with proper security rules for your Nextcloud For the rest of the guide, we follow the steps availabe on the Nextcloud website's tutorial [How to Install the Nextcloud All-in-One on Linux](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/). * Install Docker - * ``` + ``` curl -fsSL get.docker.com | sudo sh ``` * Install Nextcloud AIO - * ``` + ``` sudo docker run \ --sig-proxy=false \ --name nextcloud-aio-mastercontainer \ @@ -118,7 +118,7 @@ For the rest of the guide, we follow the steps availabe on the Nextcloud website nextcloud/all-in-one:latest ``` * Reach the AIO interface on your browser: - * ``` + ``` https://:8443 ``` * Example: `https://nextcloudwebsite.com:8443` diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md index b052848..cfec0b4 100644 --- a/collections/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md @@ -126,19 +126,19 @@ Modify the variable files to take into account your own seed phrase and SSH keys Open the terminal. * Go to the home folder - * ``` + ``` cd ~ ``` * Create the folder `terraform` and the subfolder `deployment-nextcloud`: - * ``` + ``` mkdir -p terraform/deployment-nextcloud ``` - * ``` + ``` cd terraform/deployment-nextcloud ``` * Create the `main.tf` file: - * ``` + ``` nano main.tf ``` @@ -283,12 +283,12 @@ In this file, we name the first VM as `vm1` and the second VM as `vm2`. In the g In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. Change the codes in this guide accordingly. * Create the `credentials.auto.tfvars` file: - * ``` + ``` nano credentials.auto.tfvars ``` * Copy the `credentials.auto.tfvars` content and save the file. - * ``` + ``` mnemonics = "..." SSH_KEY = "..." @@ -307,12 +307,12 @@ Make sure to add your own seed phrase and SSH public key. You will also need to We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-nextcloud` with the main and variables files. * Initialize Terraform: - * ``` + ``` terraform init ``` * Apply Terraform to deploy the VPN: - * ``` + ``` terraform apply ``` @@ -321,18 +321,18 @@ After deployments, take note of the 3nodes' IPv4 address. You will need those ad ### SSH into the 3nodes * To [SSH into the 3nodes](ssh_guide.md), write the following: - * ``` + ``` ssh root@VM_IPv4_Address ``` ### Preparing the VMs for the Deployment * Update and upgrade the system - * ``` + ``` apt update && apt upgrade -y && apt-get install apache2 -y ``` * After download, reboot the system - * ``` + ``` reboot ``` * Reconnect to the VMs @@ -348,19 +348,19 @@ For more information on WireGuard, notably in relation to Windows, please read [ First, we set Wireguard with the Terraform output. * On your local computer, take the Terraform's `wg_config` output and create a `wg.conf` file in the directory `/etc/wireguard/wg.conf`. - * ``` + ``` nano /etc/wireguard/wg.conf ``` * Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The Wireguard output stands in between `EOT`. 
* Start Wireguard on your local computer: - * ``` + ``` wg-quick up wg ``` * To stop the wireguard service: - * ``` + ``` wg-quick down wg ``` @@ -368,10 +368,10 @@ If it doesn't work and you already did a wireguard connection with the same file This should set everything properly. * As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct: - * ``` + ``` ping 10.1.3.2 ``` - * ``` + ``` ping 10.1.4.2 ``` @@ -384,11 +384,11 @@ If you correctly receive the packets from the two VMs, you know that the VPN is ## Download MariaDB and Configure the Database * Download MariaDB's server and client on both VMs - * ``` + ``` apt install mariadb-server mariadb-client -y ``` * Configure the MariaDB database - * ``` + ``` nano /etc/mysql/mariadb.conf.d/50-server.cnf ``` * Do the following changes @@ -405,19 +405,19 @@ If you correctly receive the packets from the two VMs, you know that the VPN is ``` * Restart MariaDB - * ``` + ``` systemctl restart mysql ``` * Launch MariaDB - * ``` + ``` mysql ``` ## Create User with Replication Grant * Do the following on both VMs - * ``` + ``` CREATE USER 'repuser'@'%' IDENTIFIED BY 'password'; GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ; FLUSH PRIVILEGES; @@ -436,33 +436,33 @@ If you correctly receive the packets from the two VMs, you know that the VPN is ### TF Template Worker Server Data * Write the following in the worker VM - * ``` + ``` CHANGE MASTER TO MASTER_HOST='10.1.3.2', MASTER_USER='repuser', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=328; ``` - * ``` + ``` start slave; ``` - * ``` + ``` show slave status\G; ``` ### TF Template Master Server Data * Write the following in the master VM - * ``` + ``` CHANGE MASTER TO MASTER_HOST='10.1.4.2', MASTER_USER='repuser', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=328; ``` - * ``` + ``` start slave; ``` - * ``` + ``` show slave status\G; ``` @@ -505,72 +505,72 @@ We now set the Nextcloud database. You should choose your own username and passw We will now install and set [GlusterFS](https://www.gluster.org/), a free and open source software scalable network filesystem. * Install GlusterFS on both the master and worker VMs - * ``` + ``` echo | add-apt-repository ppa:gluster/glusterfs-7 && apt install glusterfs-server -y ``` * Start the GlusterFS service on both VMs - * ``` + ``` systemctl start glusterd.service && systemctl enable glusterd.service ``` * Set the master to worker probe IP on the master VM: - * ``` + ``` gluster peer probe 10.1.4.2 ``` * See the peer status on the worker VM: - * ``` + ``` gluster peer status ``` * Set the master and worker IP address on the master VM: - * ``` + ``` gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force ``` * Start GlusterFS on the master VM: - * ``` + ``` gluster volume start vol1 ``` * Check the status on the worker VM: - * ``` + ``` gluster volume status ``` * Mount the server with the master IP on the master VM: - * ``` + ``` mount -t glusterfs 10.1.3.2:/vol1 /var/www ``` * See if the mount is there on the master VM: - * ``` + ``` df -h ``` * Mount the server with the worker IP on the worker VM: - * ``` + ``` mount -t glusterfs 10.1.4.2:/vol1 /var/www ``` * See if the mount is there on the worker VM: - * ``` + ``` df -h ``` We now update the mount with the filse fstab on both VMs. 
* To prevent the mount from being aborted if the server reboots, write the following on both servers: - * ``` + ``` nano /etc/fstab ``` * Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2): - * ``` + ``` 10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 ``` * Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2): - * ``` + ``` 10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 ``` @@ -579,14 +579,14 @@ We now update the mount with the filse fstab on both VMs. # Install PHP and Nextcloud * Install PHP and the PHP modules for Nextcloud on both the master and the worker: - * ``` + ``` apt install php -y && apt-get install php zip libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-mysql php-bcmath php-gmp zip -y ``` We will now install Nextcloud. This is done only on the master VM. * On both the master and worker VMs, go to the folder `/var/www`: - * ``` + ``` cd /var/www ``` @@ -594,27 +594,27 @@ We will now install Nextcloud. This is done only on the master VM. * See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/). * We now download Nextcloud on the master VM. - * ``` + ``` wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip ``` You only need to download on the master VM, since you set a peer-to-peer connection, it will also be accessible on the worker VM. * Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress: - * ``` + ``` apt install p7zip-full -y ``` - * ``` + ``` 7z x nextcloud-27.0.1.zip -o/var/www/ ``` * After the download, see if the Nextcloud file is there on the worker VM: - * ``` + ``` ls ``` * Then, we grant permissions to the folder. Do this on both the master VM and the worker VM. - * ``` + ``` chown www-data:www-data /var/www/nextcloud/ -R ``` @@ -660,7 +660,7 @@ Note: When the master VM goes offline, after 5 minutes maximum DuckDNS will chan We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`. * On both the master and worker VMs, write the following: - * ``` + ``` nano /etc/apache2/sites-available/nextcloud.conf ``` @@ -694,12 +694,12 @@ The file should look like this, with your own subdomain instead of `subdomain`: ``` * On both the master VM and the worker VM, write the following to set the Nextcloud database with Apache and to enable the new virtual host file: - * ``` + ``` a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl ``` * Then, reload and restart Apache: - * ``` + ``` systemctl reload apache2 && systemctl restart apache2 ``` @@ -710,20 +710,20 @@ The file should look like this, with your own subdomain instead of `subdomain`: We now access Nextcloud over the public Internet. * Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain): - * ``` + ``` subdomain.duckdns.org ``` Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser. * Choose a name and a password. 
For this guide, we use the following: - * ``` + ``` ncadmin password1234 ``` * Enter the Nextcloud Database information created with MariaDB and click install: - * ``` + ``` Database user: ncuser Database password: password1234 Database name: nextcloud @@ -749,27 +749,27 @@ To enable HTTPS, first install `letsencrypt` with `certbot`: Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/) * See if you have the latest version of snap: - * ``` + ``` snap install core; snap refresh core ``` * Remove certbot-auto: - * ``` + ``` apt-get remove certbot ``` * Install certbot: - * ``` + ``` snap install --classic certbot ``` * Ensure that certbot can be run: - * ``` + ``` ln -s /snap/bin/certbot /usr/bin/certbot ``` * Then, install certbot-apache: - * ``` + ``` apt install python3-certbot-apache -y ``` @@ -825,7 +825,7 @@ output "ipv4_vm1" { ``` * To add the HTTPS protection, write the following line on the master VM with your own subdomain: - * ``` + ``` certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org ``` @@ -837,7 +837,7 @@ Note: You then need to redo the same process with the worker VM. This time, make ## Verify HTTPS Automatic Renewal * Make a dry run of the certbot renewal to verify that it is correctly set up. - * ``` + ``` certbot renew --dry-run ``` @@ -859,25 +859,25 @@ We thus add the following rules: * Allow SSH (port 22) - * ``` + ``` ufw allow ssh ``` * Allow HTTP (port 80) - * ``` + ``` ufw allow http ``` * Allow https (port 443) - * ``` + ``` ufw allow https ``` * To enable the firewall, write the following: - * ``` + ``` ufw enable ``` * To see the current security rules, write the following: - * ``` + ``` ufw status verbose ``` diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_single.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_single.md index 9c54dea..48e206a 100644 --- a/collections/system_administrators/terraform/advanced/terraform_nextcloud_single.md +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_single.md @@ -112,19 +112,19 @@ Modify the variable files to take into account your own seed phrase and SSH keys Open the terminal and follow those steps. * Go to the home folder - * ``` + ``` cd ~ ``` * Create the folder `terraform` and the subfolder `deployment-single-nextcloud`: - * ``` + ``` mkdir -p terraform/deployment-single-nextcloud ``` - * ``` + ``` cd terraform/deployment-single-nextcloud ``` * Create the `main.tf` file: - * ``` + ``` nano main.tf ``` @@ -226,12 +226,12 @@ output "ipv4_vm1" { In this file, we name the full VM as `vm1`. * Create the `credentials.auto.tfvars` file: - * ``` + ``` nano credentials.auto.tfvars ``` * Copy the `credentials.auto.tfvars` content and save the file. - * ``` + ``` mnemonics = "..." SSH_KEY = "..." @@ -249,12 +249,12 @@ Make sure to add your own seed phrase and SSH public key. You will also need to We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-single-nextcloud` with the main and variables files. * Initialize Terraform: - * ``` + ``` terraform init ``` * Apply Terraform to deploy the full VM: - * ``` + ``` terraform apply ``` @@ -263,18 +263,18 @@ After deployments, take note of the 3Node's IPv4 address. 
You will need this add ## SSH into the 3Node * To [SSH into the 3Node](ssh_guide.md), write the following: - * ``` + ``` ssh root@VM_IPv4_Address ``` ## Prepare the Full VM * Update and upgrade the system - * ``` + ``` apt update && apt upgrade && apt-get install apache2 ``` * After download, reboot the system - * ``` + ``` reboot ``` * Reconnect to the VM @@ -286,11 +286,11 @@ After deployments, take note of the 3Node's IPv4 address. You will need this add ## Download MariaDB and Configure the Database * Download MariaDB's server and client - * ``` + ``` apt install mariadb-server mariadb-client ``` * Configure the MariaDB database - * ``` + ``` nano /etc/mysql/mariadb.conf.d/50-server.cnf ``` * Do the following changes @@ -307,12 +307,12 @@ After deployments, take note of the 3Node's IPv4 address. You will need this add ``` * Restart MariaDB - * ``` + ``` systemctl restart mysql ``` * Launch MariaDB - * ``` + ``` mysql ``` @@ -345,14 +345,14 @@ We now set the Nextcloud database. You should choose your own username and passw # Install PHP and Nextcloud * Install PHP and the PHP modules for Nextcloud on both the master and the worker: - * ``` + ``` apt install php && apt-get install php zip libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-mysql php-bcmath php-gmp zip ``` We will now install Nextcloud. * On the full VM, go to the folder `/var/www`: - * ``` + ``` cd /var/www ``` @@ -360,19 +360,17 @@ We will now install Nextcloud. * See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/). * We now download Nextcloud on the full VM. - * ``` + ``` wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip ``` * Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress: - * ``` - apt install p7zip-full ``` - * ``` + apt install p7zip-full 7z x nextcloud-27.0.1.zip -o/var/www/ ``` * Then, we grant permissions to the folder. - * ``` + ``` chown www-data:www-data /var/www/nextcloud/ -R ``` @@ -398,7 +396,7 @@ Hint: make sure to save the DuckDNS folder in the home menu. Write `cd ~` before We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`. * On full VM, write the following: - * ``` + ``` nano /etc/apache2/sites-available/nextcloud.conf ``` @@ -432,12 +430,12 @@ The file should look like this, with your own subdomain instead of `subdomain`: ``` * On the full VM, write the following to set the Nextcloud database with Apache and to enable the new virtual host file: - * ``` + ``` a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl ``` * Then, reload and restart Apache: - * ``` + ``` systemctl reload apache2 && systemctl restart apache2 ``` @@ -448,20 +446,20 @@ The file should look like this, with your own subdomain instead of `subdomain`: We now access Nextcloud over the public Internet. * Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain): - * ``` + ``` subdomain.duckdns.org ``` Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser. * Choose a name and a password. 
For this guide, we use the following: - * ``` + ``` ncadmin password1234 ``` * Enter the Nextcloud Database information created with MariaDB and click install: - * ``` + ``` Database user: ncuser Database password: password1234 Database name: nextcloud @@ -487,27 +485,27 @@ To enable HTTPS, first install `letsencrypt` with `certbot`: Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/) * See if you have the latest version of snap: - * ``` + ``` snap install core; snap refresh core ``` * Remove certbot-auto: - * ``` + ``` apt-get remove certbot ``` * Install certbot: - * ``` + ``` snap install --classic certbot ``` * Ensure that certbot can be run: - * ``` + ``` ln -s /snap/bin/certbot /usr/bin/certbot ``` * Then, install certbot-apache: - * ``` + ``` apt install python3-certbot-apache ``` @@ -516,14 +514,14 @@ Install certbot by following the steps here: [https://certbot.eff.org/](https:// We now set the certbot with the DNS domain. * To add the HTTPS protection, write the following line on the full VM with your own subdomain: - * ``` + ``` certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org ``` ## Verify HTTPS Automatic Renewal * Make a dry run of the certbot renewal to verify that it is correctly set up. - * ``` + ``` certbot renew --dry-run ``` @@ -545,25 +543,25 @@ We thus add the following rules: * Allow SSH (port 22) - * ``` + ``` ufw allow ssh ``` * Allow HTTP (port 80) - * ``` + ``` ufw allow http ``` * Allow https (port 443) - * ``` + ``` ufw allow https ``` * To enable the firewall, write the following: - * ``` + ``` ufw enable ``` * To see the current security rules, write the following: - * ``` + ``` ufw status verbose ``` diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md index 2eb4ccf..e8968ef 100644 --- a/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md @@ -246,17 +246,17 @@ output "fqdn" { We now deploy the 2-node VPN with Terraform. Make sure that you are in the correct folder containing the main and variables files. * Initialize Terraform: - * ``` + ``` terraform init ``` * Apply Terraform to deploy Nextcloud: - * ``` + ``` terraform apply ``` Note that, at any moment, if you want to see the information on your Terraform deployment, write the following: - * ``` + ``` terraform show ``` @@ -274,19 +274,19 @@ Note that, at any moment, if you want to see the information on your Terraform d We need to install a few things on the Nextcloud VM before going further. * Update the Nextcloud VM - * ``` + ``` apt update ``` * Install ping on the Nextcloud VM if you want to test the VPN connection (Optional) - * ``` + ``` apt install iputils-ping -y ``` * Install Rsync on the Nextcloud VM - * ``` + ``` apt install rsync ``` * Install nano on the Nextcloud VM - * ``` + ``` apt install nano ``` * Install Cron on the Nextcloud VM @@ -295,19 +295,19 @@ We need to install a few things on the Nextcloud VM before going further. 
# Prepare the VMs for the Rsync Daily Backup * Test the VPN (Optional) with [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) - * ``` + ``` ping ``` * Generate an SSH key pair on the Backup VM - * ``` + ``` ssh-keygen ``` * Take note of the public key in the Backup VM - * ``` + ``` cat ~/.ssh/id_rsa.pub ``` * Add the public key of the Backup VM in the Nextcloud VM - * ``` + ``` nano ~/.ssh/authorized_keys ``` @@ -318,11 +318,11 @@ We need to install a few things on the Nextcloud VM before going further. We now set a daily cron job that will make a backup between the Nextcloud VM and the Backup VM using Rsync. * Open the crontab on the Backup VM - * ``` + ``` crontab -e ``` * Add the cron job at the end of the file - * ``` + ``` 0 8 * * * rsync -avz --no-perms -O --progress --delete --log-file=/root/rsync_storage.log root@10.1.3.2:/mnt/backup/ /mnt/backup/ ``` diff --git a/collections/system_administrators/terraform/advanced/terraform_nomad.md b/collections/system_administrators/terraform/advanced/terraform_nomad.md index 13fbde5..a0ff206 100644 --- a/collections/system_administrators/terraform/advanced/terraform_nomad.md +++ b/collections/system_administrators/terraform/advanced/terraform_nomad.md @@ -61,14 +61,14 @@ Also note that this deployment uses both the Planetary network and WireGuard. We start by creating the main file for our Nomad cluster. * Create a directory for your Terraform Nomad cluster - * ``` + ``` mkdir nomad ``` - * ``` + ``` cd nomad ``` * Create the `main.tf` file - * ``` + ``` nano main.tf ``` @@ -255,12 +255,12 @@ output "client2_planetary_ip" { We create a credentials file that will contain the environment variables. This file should be in the same directory as the main file. * Create the `credentials.auto.tfvars` file - * ``` + ``` nano credentials.auto.tfvars ``` * Copy the `credentials.auto.tfvars` content and save the file - * ``` + ``` mnemonics = "..." SSH_KEY = "..." @@ -280,12 +280,12 @@ Make sure to replace the three dots by your own information for `mnemonics` and We now deploy the Nomad Cluster with Terraform. Make sure that you are in the directory containing the `main.tf` file. * Initialize Terraform - * ``` + ``` terraform init ``` * Apply Terraform to deploy the Nomad cluster - * ``` + ``` terraform apply ``` @@ -300,7 +300,7 @@ Note that the IP addresses will be shown under `Outputs` after running the comma ### SSH with the Planetary Network * To [SSH with the Planetary network](ssh_openssh.md), write the following with the proper IP address - * ``` + ``` ssh root@planetary_ip ``` @@ -311,7 +311,7 @@ You now have an SSH connection access over the Planetary network to the client a To SSH with WireGuard, we first need to set the proper WireGuard configurations. * Create a file named `wg.conf` in the directory `/etc/wireguard` - * ``` + ``` nano /etc/wireguard/wg.conf ``` @@ -319,18 +319,18 @@ To SSH with WireGuard, we first need to set the proper WireGuard configurations. * Note that you can use `terraform show` to see the Terraform output. The WireGuard configurations (`wg_config`) stands in between the two `EOT` instances. 
* Start WireGuard on your local computer - * ``` + ``` wg-quick up wg ``` * As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the WireGuard IP of a node to make sure the connection is correct - * ``` + ``` ping wg_ip ``` We are now ready to SSH into the client and server nodes with WireGuard. * To SSH with WireGuard, write the following with the proper IP address: - * ``` + ``` ssh root@wg_ip ``` diff --git a/collections/system_administrators/terraform/advanced/terraform_wireguard_ssh.md b/collections/system_administrators/terraform/advanced/terraform_wireguard_ssh.md index b357ad1..6c5e6c1 100644 --- a/collections/system_administrators/terraform/advanced/terraform_wireguard_ssh.md +++ b/collections/system_administrators/terraform/advanced/terraform_wireguard_ssh.md @@ -70,20 +70,19 @@ Modify the variable file to take into account your own seed phras and SSH keys. Now let's create the Terraform files. * Open the terminal and go to the home directory - * ``` + ``` cd ~ ``` * Create the folder `terraform` and the subfolder `deployment-wg-ssh`: - * ``` + ``` mkdir -p terraform/deployment-wg-ssh ``` - * ``` + ``` cd terraform/deployment-wg-ssh ``` - ``` * Create the `main.tf` file: - * ``` + ``` nano main.tf ``` @@ -173,12 +172,12 @@ output "node1_zmachine1_ip" { ``` * Create the `credentials.auto.tfvars` file: - * ``` + ``` nano credentials.auto.tfvars ``` * Copy the `credentials.auto.tfvars` content, set the node ID as well as your mnemonics and SSH public key, then save the file. - * ``` + ``` mnemonics = "..." SSH_KEY = "..." @@ -198,12 +197,12 @@ Make sure to add your own seed phrase and SSH public key. You will also need to We now deploy the micro VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-ssh` containing the main and variables files. * Initialize Terraform: - * ``` + ``` terraform init ``` * Apply Terraform to deploy the micro VM: - * ``` + ``` terraform apply ``` * Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment. @@ -264,10 +263,11 @@ You now have access into the VM over Wireguard SSH connection. If you want to destroy the Terraform deployment, write the following in the terminal: -* ``` + ``` terraform destroy ``` - * Then write `yes` to confirm. + +Then write `yes` to confirm. Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-ssh`. diff --git a/collections/system_administrators/terraform/advanced/terraform_wireguard_vpn.md b/collections/system_administrators/terraform/advanced/terraform_wireguard_vpn.md index 7c6d3ef..e9cbdaa 100644 --- a/collections/system_administrators/terraform/advanced/terraform_wireguard_vpn.md +++ b/collections/system_administrators/terraform/advanced/terraform_wireguard_vpn.md @@ -74,19 +74,19 @@ Now let's create the Terraform files. * Open the terminal and go to the home directory - * ``` + ``` cd ~ ``` * Create the folder `terraform` and the subfolder `deployment-wg-vpn`: - * ``` + ``` mkdir -p terraform && cd $_ ``` - * ``` + ``` mkdir deployment-wg-vpn && cd $_ ``` * Create the `main.tf` file: - * ``` + ``` nano main.tf ``` @@ -229,12 +229,12 @@ output "ipv4_vm2" { In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. Change the codes in this guide accordingly. 
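For reference, the `wg_config` output you will later paste into a configuration file has roughly the shape sketched below. This is illustrative only: the addresses, keys and endpoint are placeholders, and the real values are generated by the deployment.

```
[Interface]
Address = 100.64.1.2/32
PrivateKey = <generated_private_key>

[Peer]
PublicKey = <node_public_key>
AllowedIPs = 10.1.0.0/16, 100.64.0.0/16
PersistentKeepalive = 25
Endpoint = <node_public_ip>:<port>
```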
* Create the `credentials.auto.tfvars` file: - * ``` + ``` nano credentials.auto.tfvars ``` * Copy the `credentials.auto.tfvars` content and save the file. - * ``` + ``` mnemonics = "..." SSH_KEY = "..." @@ -256,17 +256,17 @@ Set the parameters for your VMs as you wish. The two servers will have the same We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-vpn` containing the main and variables files. * Initialize Terraform by writing the following in the terminal: - * ``` + ``` terraform init ``` * Apply the Terraform deployment: - * ``` + ``` terraform apply ``` * Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment. Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: - * ``` + ``` terraform show ``` @@ -279,19 +279,19 @@ To set the Wireguard connection, on your local computer, you will need to take t For more information on WireGuard, notably in relation to Windows, please read [this documentation](ssh_wireguard.md). * Create a file named `wg.conf` in the directory: `/usr/local/etc/wireguard/wg.conf`. - * ``` + ``` nano /usr/local/etc/wireguard/wg.conf ``` * Paste the content between the two `EOT` displayed after you set `terraform apply`. * Start the wireguard: - * ``` + ``` wg-quick up wg ``` If you want to stop the Wireguard service, write the following on your terminal: -* ``` + ``` wg-quick down wg ``` @@ -299,7 +299,7 @@ If you want to stop the Wireguard service, write the following on your terminal: As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VMs to make sure the Wireguard connection is correct. Make sure to replace `wg_vm_ip` with the proper IP address for each VM: -* ``` + ``` ping wg_vm_ip ``` @@ -329,10 +329,11 @@ You now have an SSH connection access to the VMs over Wireguard and IPv4. If you want to destroy the Terraform deployment, write the following in the terminal: -* ``` + ``` terraform destroy ``` - * Then write `yes` to confirm. + +Then write `yes` to confirm. Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-vpn`. -- 2.40.1 From d51327929768c736d77674c6b2c9e586e0d81a5d Mon Sep 17 00:00:00 2001 From: Mik-TF Date: Tue, 14 May 2024 12:57:47 -0400 Subject: [PATCH 28/34] manual, sysadmins, pulumi --- .../pulumi/pulumi_examples.md | 14 +++++++------- .../system_administrators/pulumi/pulumi_install.md | 6 +++--- .../terraform/advanced/terraform_nextcloud_vpn.md | 4 +++- 3 files changed, 13 insertions(+), 11 deletions(-) diff --git a/collections/system_administrators/pulumi/pulumi_examples.md b/collections/system_administrators/pulumi/pulumi_examples.md index ee8ab14..9910d49 100644 --- a/collections/system_administrators/pulumi/pulumi_examples.md +++ b/collections/system_administrators/pulumi/pulumi_examples.md @@ -25,11 +25,11 @@ There are a few things to set up before exploring Pulumi. Since we will be using * [Install Pulumi](pulumi_install.md) on your machine * Clone the **Pulumi-ThreeFold** repository - * ``` + ``` git clone https://github.com/threefoldtech/pulumi-threefold ``` * Change directory - * ``` + ``` cd ./pulumi-threefold ``` @@ -38,15 +38,15 @@ There are a few things to set up before exploring Pulumi. Since we will be using You can export the environment variables before deploying workloads. 
 * Export the network (**dev**, **qa**, **test**, **main**). Note that we are using the **dev** network by default.
-  * ```
+  ```
   export NETWORK="Enter the network"
   ```
 * Export your mnemonics.
-  * ```
+  ```
   export MNEMONIC="Enter the mnemonics"
   ```
 * Export the SSH_KEY (public key).
-  * ```
+  ```
   export SSH_KEY="Enter the public Key"
   ```
@@ -65,11 +65,11 @@ The different examples that work simply by running **make run** are the followin
 We give an example with **virtual_machine**.
 
 * Go to the directory **virtual_machine**
-  * ```
+  ```
   cd examples/virtual_machine
   ```
 * Deploy the Pulumi workload with **make**
-  * ```
+  ```
   make run
   ```

diff --git a/collections/system_administrators/pulumi/pulumi_install.md b/collections/system_administrators/pulumi/pulumi_install.md
index 93262a8..531c591 100644
--- a/collections/system_administrators/pulumi/pulumi_install.md
+++ b/collections/system_administrators/pulumi/pulumi_install.md
@@ -17,15 +17,15 @@ To install Pulumi, simply follow the steps provided in the [Pulumi documentation
 ## Installation
 
 * Install on Linux
-  * ```
+  ```
   curl -fsSL https://get.pulumi.com | sh
   ```
 * Install on MAC
-  * ```
+  ```
   brew install pulumi/tap/pulumi
   ```
 * Install on Windows
-  * ```
+  ```
   choco install pulumi
   ```

diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md
index e8968ef..3d6843a 100644
--- a/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md
+++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md
@@ -290,7 +290,9 @@ We need to install a few things on the Nextcloud VM before going further.
   apt install nano
   ```
 * Install Cron on the Nextcloud VM
-  * apt install cron
+  ```
+  apt install cron
+  ```
 
 # Prepare the VMs for the Rsync Daily Backup
-- 
2.40.1


From c54a6ab037a10ff453ec3507ebe9495f00f1fc18 Mon Sep 17 00:00:00 2001
From: Mik-TF
Date: Tue, 14 May 2024 13:00:03 -0400
Subject: [PATCH 29/34] manual, sysadmins, cli

---
 .../computer_it_basics/cli_scripts_basics.md  | 257 +++++++++---------
 1 file changed, 129 insertions(+), 128 deletions(-)

diff --git a/collections/system_administrators/computer_it_basics/cli_scripts_basics.md b/collections/system_administrators/computer_it_basics/cli_scripts_basics.md
index dec08b2..c6f591b 100644
--- a/collections/system_administrators/computer_it_basics/cli_scripts_basics.md
+++ b/collections/system_administrators/computer_it_basics/cli_scripts_basics.md
@@ -24,6 +24,7 @@
 - [Become the superuser (su) on Linux](#become-the-superuser-su-on-linux)
 - [Exit a session](#exit-a-session)
 - [Know the current user](#know-the-current-user)
+  - [See the path of a package](#see-the-path-of-a-package)
 - [Set the path of a package](#set-the-path-of-a-package)
 - [See the current path](#see-the-current-path-1)
 - [Find the current shell](#find-the-current-shell)
@@ -127,11 +128,11 @@ You can also set a number of counts with `-c` on Linux and MAC and `-n` on Windo
 Here are the steps to install [Go](https://go.dev/).
 
 * Install go
-  * ```
+  ```
   sudo apt install golang-go
   ```
 * Verify that go is properly installed
-  * ```
+  ```
   go version
   ```
@@ -142,19 +143,19 @@ Here are the steps to install [Go](https://go.dev/).
 Follow those steps to install [Brew](https://brew.sh/)
 
 * Installation command from Brew:
-  * ```
+  ```
   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
   ```
 * Add the path to the **.profile** directory. Replace <username> by your username.
-  * ```
+  ```
   echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> /home/<username>/.profile
   ```
 * Evaluate the following:
-  * ```
+  ```
   eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
   ```
 * Verify the installation
-  * ```
+  ```
   brew doctor
   ```
@@ -163,27 +164,27 @@ Follow those steps to install [Brew](https://brew.sh/)
 ### Brew basic commands
 
 * To update brew in general:
-  * ```
+  ```
   brew update
   ```
 * To update a specific package:
-  * ```
+  ```
   brew upgrade <package>
   ```
 * To install a package:
-  * ```
+  ```
   brew install <package>
   ```
 * To uninstall a package:
-  * ```
+  ```
   brew uninstall <package>
   ```
 * To search a package:
-  * ```
+  ```
   brew search <package>
   ```
 * [Uninstall Brew](https://github.com/homebrew/install#uninstall-homebrew)
-  * ```
+  ```
   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"
   ```
@@ -194,11 +195,11 @@ Follow those steps to install [Brew](https://brew.sh/)
 Installing Terraform with Brew is very simple; just follow the [Terraform documentation](https://developer.hashicorp.com/terraform/downloads).
 
 * Compile HashiCorp software on Homebrew's infrastructure
-  * ```
+  ```
   brew tap hashicorp/tap
   ```
 * Install Terraform
-  * ```
+  ```
   brew install hashicorp/tap/terraform
   ```
@@ -207,27 +208,27 @@
 ### Yarn basic commands
 
 * Add a package
-  * ```
+  ```
   yarn add <package>
   ```
 * Initialize the development of a package
-  * ```
+  ```
   yarn init
   ```
 * Install all the dependencies in the **package.json** file
-  * ```
+  ```
   yarn install
   ```
 * Publish a package to a package manager
-  * ```
+  ```
   yarn publish
   ```
 * Remove unused package from the current package
-  * ```
+  ```
   yarn remove <package>
   ```
 * Clean the cache
-  * ```
+  ```
   yarn cache clean
   ```
@@ -260,11 +261,11 @@ ls -ld .?*
 You can use **tree** to display the files and organization of a directory:
 
 * General command
-  * ```
+  ```
   tree
   ```
 * View hidden files
-  * ```
+  ```
   tree -a
   ```
@@ -336,10 +337,10 @@ which
 On MAC and Linux, you can use **coreutils** and **realpath** from Brew:
 
-* ```
+  ```
   brew install coreutils
   ```
-* ```
+  ```
   realpath file_name
   ```
@@ -350,11 +351,11 @@
 You can use either command:
 
 * Option 1
-  * ```
+  ```
   sudo -i
   ```
 * Option 2
-  * ```
+  ```
   sudo -s
   ```
@@ -364,10 +365,10 @@
 You can use either command depending on your shell:
 
-* ```
+  ```
   exit
   ```
-* ```
+  ```
   logout
   ```
@@ -377,7 +378,7 @@
 You can use the following command:
 
-* ```
+  ```
   whoami
   ```
@@ -387,7 +388,7 @@
 To see the path of a package, you can use the following command:
 
-* ```
+  ```
   whereis <package>
   ```
@@ -414,11 +415,11 @@ pwd
 ### Find the current shell
 
 * Compact version
-  * ```
+  ```
   echo $SHELL
   ```
 * Detailed version
-  * ```
+  ```
   ls -l /proc/$$/exe
   ```
@@ -427,35 +428,35 @@
 ### SSH into Remote Server
 
 * Create SSH key pair
-  * ```
+  ```
   ssh-keygen
   ```
 * Install openssh-client on the local computer*
-  * ```
+  ```
   sudo apt install openssh-client
   ```
 * Install openssh-server on the remote computer*
-  * ```
+  ```
   sudo apt install openssh-server
   ```
 * Copy public key
-  * ```
+  ```
   cat ~/.ssh/id_rsa.pub
   ```
 * Create the ssh directory on the remote computer
-  * ```
+  ```
   mkdir ~/.ssh
   ```
 * Add public key in the file **authorized_keys** on the remote computer
-  * ```
+  ```
   nano ~/.ssh/authorized_keys
   ```
 * Check openssh-server status
-  * ```
+  ```
   sudo service ssh status
   ```
 * SSH into the remote machine
-  * ```
+  ```
   ssh <username>@<IP_address>
   ```
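Note that many systems also ship `ssh-copy-id`, which wraps the copy-and-append steps above in a single command. A hedged one-liner, with placeholders to substitute:

```
ssh-copy-id <username>@<IP_address>
```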
@@ -468,11 +469,11 @@ To enable remote login on a MAC, [read this section](#enable-remote-login-on-mac
 ### Replace a string by another string in a text file
 
 * Replace one string by another (e.g. **old_string**, **new_string**)
-  * ```
+  ```
   sed -i 's/old_string/new_string/g' <path>/<file>
   ```
 * Use environment variables (double quotes)
-  * ```
+  ```
   sed -i "s/old_string/$env_variable/g" <path>/<file>
   ```
@@ -529,11 +530,11 @@ date
 You can use [Dig](https://man.archlinux.org/man/dig.1) to gather DNS information of a website
 
 * Template
-  * ```
+  ```
   dig <website>
   ```
 * Example
-  * ```
+  ```
   dig threefold.io
   ```
 
 You can also use online tools such as [DNS Checker](https://dnschecker.org/).
@@ -546,31 +547,31 @@
 We present one of many ways to partition and mount a disk.
 
 * Create partition with [gparted](https://gparted.org/)
-  * ```
+  ```
   sudo gparted
   ```
 * Find the disk you want to mount (e.g. **sdb**)
-  * ```
+  ```
   sudo fdisk -l
   ```
 * Create a directory to mount the disk to
-  * ```
+  ```
   sudo mkdir /mnt/disk
   ```
 * Open fstab
-  * ```
+  ```
   sudo nano /etc/fstab
   ```
 * Append the following to the fstab with the proper disk path (e.g. **/dev/sdb**) and mount point (e.g. **/mnt/disk**)
-  * ```
+  ```
   /dev/sdb /mnt/disk ext4 defaults 0 0
   ```
 * Mount the disk
-  * ```
+  ```
   sudo mount /mnt/disk
   ```
 * Add permissions (as needed)
-  * ```
+  ```
   sudo chmod -R 0777 /mnt/disk
   ```
@@ -583,36 +584,36 @@
 You can use [gocryptfs](https://github.com/rfjakob/gocryptfs) to encrypt files.
 
 * Install gocryptfs
-  * ```
+  ```
   apt install gocryptfs
   ```
 * Create a vault directory (e.g. **vaultdir**) and a mount directory (e.g. **mountdir**)
-  * ```
+  ```
   mkdir vaultdir mountdir
   ```
 * Initiate the vault
-  * ```
+  ```
   gocryptfs -init vaultdir
   ```
 * Mount the mount directory with the vault
-  * ```
+  ```
   gocryptfs vaultdir mountdir
   ```
 * You can now create files in the folder. For example:
-  * ```
+  ```
   touch mountdir/test.txt
   ```
 * The new file **test.txt** is now encrypted in the vault
-  * ```
+  ```
   ls vaultdir
   ```
 * To unmount the mounted vault folder:
   * Option 1
-    * ```
+    ```
     fusermount -u mountdir
     ```
   * Option 2
-    * ```
+    ```
     rmdir mountdir
     ```
@@ -623,27 +624,27 @@ To encrypt files, you can use [Veracrypt](https://www.veracrypt.fr/en/Home.html)
 * Veracrypt GUI
   * Download the package
-    * ```
+    ```
     wget https://launchpad.net/veracrypt/trunk/1.25.9/+download/veracrypt-1.25.9-Ubuntu-22.04-amd64.deb
     ```
   * Install the package
-    * ```
+    ```
     dpkg -i ./veracrypt-1.25.9-Ubuntu-22.04-amd64.deb
     ```
 * Veracrypt console only
   * Download the package
-    * ```
+    ```
     wget https://launchpad.net/veracrypt/trunk/1.25.9/+download/veracrypt-console-1.25.9-Ubuntu-22.04-amd64.deb
     ```
   * Install the package
-    * ```
+    ```
     dpkg -i ./veracrypt-console-1.25.9-Ubuntu-22.04-amd64.deb
     ```
 
 You can visit [Veracrypt download page](https://www.veracrypt.fr/en/Downloads.html) to get the newest releases.
 
 * To run Veracrypt
-  * ```
+  ```
   veracrypt
   ```
 * Veracrypt documentation is very complete. To begin using the application, visit the [Beginner's Tutorial](https://www.veracrypt.fr/en/Beginner%27s%20Tutorial.html).
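As a small illustration of the gocryptfs workflow presented above, the sketch below wraps one open-use-close cycle in a script; paths and file names are placeholders:

```
#!/bin/bash
# Open the encrypted vault, add a file, then close it again.
# Assumes gocryptfs is installed and vaultdir was initialized as shown above.
gocryptfs vaultdir mountdir      # prompts for the vault password
cp secret.txt mountdir/          # work with plaintext files here
fusermount -u mountdir           # close the vault; data stays encrypted in vaultdir
```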
@@ -661,11 +662,11 @@ ifconfig
 ### See identity and info of IP address
 
 * See abuses related to an IP address:
-  * ```
+  ```
   https://www.abuseipdb.com/check/<IP_address>
   ```
 * See general information of an IP address:
-  * ```
+  ```
   https://www.whois.com/whois/<IP_address>
   ```
@@ -674,124 +675,124 @@ ifconfig
 ### ip basic commands
 
 * Manage and display the state of all network
-  * ```
+  ```
   ip link
   ```
 * Display IP Addresses and property information (abbreviation of address)
-  * ```
+  ```
   ip addr
   ```
 * Display and alter the routing table
-  * ```
+  ```
   ip route
   ```
 * Manage and display multicast IP addresses
-  * ```
+  ```
   ip maddr
   ```
 * Show neighbour object
-  * ```
+  ```
   ip neigh
   ```
 * Display a list of commands and arguments for each subcommand
-  * ```
+  ```
   ip help
   ```
 * Add an address
   * Template
-    * ```
+    ```
     ip addr add <IP_address> dev <interface>
     ```
   * Example: set IP address to device **enp0**
-    * ```
+    ```
     ip addr add 192.168.3.4/24 dev enp0
     ```
 * Delete an address
   * Template
-    * ```
+    ```
     ip addr del <IP_address> dev <interface>
     ```
   * Example: set IP address to device **enp0**
-    * ```
+    ```
     ip addr del 192.168.3.4/24 dev enp0
     ```
 * Alter the status of an interface
   * Template
-    * ```
+    ```
     ip link set <device> <up/down>
    ```
   * Example 1: Bring interface online (here device **em2**)
-    * ```
+    ```
     ip link set em2 up
     ```
   * Example 2: Bring interface offline (here device **em2**)
-    * ```
+    ```
     ip link set em2 down
     ```
 * Add a multicast address
   * Template
-    * ```
+    ```
     ip maddr add <multicast_address> dev <device>
     ```
   * Example: set IP address to device **em2**
-    * ```
+    ```
     ip maddr add 33:32:00:00:00:01 dev em2
     ```
 * Delete a multicast address
   * Template
-    * ```
+    ```
     ip maddr del <multicast_address> dev <device>
     ```
   * Example: set IP address to device **em2**
-    * ```
+    ```
     ip maddr del 33:32:00:00:00:01 dev em2
     ```
 * Add a routing table entry
   * Template
-    * ```
+    ```
     ip route add <network> via <gateway>
    ```
   * Example 1: Add a default route (for all addresses) via a local gateway
-    * ```
+    ```
     ip route add default via 192.168.1.1 dev em1
     ```
   * Example 2: Add a route to 192.168.3.0/24 via the gateway at 192.168.3.2
-    * ```
+    ```
     ip route add 192.168.3.0/24 via 192.168.3.2
     ```
   * Example 3: Add a route to 192.168.1.0/24 that can be reached on device em1
-    * ```
+    ```
     ip route add 192.168.1.0/24 dev em1
     ```
 * Delete a routing table entry
   * Template
-    * ```
+    ```
     ip route delete <network> via <gateway>
     ```
   * Example: Delete the route for 192.168.1.0/24 via the gateway at 192.168.1.1
-    * ```
+    ```
     ip route delete 192.168.1.0/24 via 192.168.1.1
     ```
 * Replace, or add, a route
   * Template
-    * ```
+    ```
     ip route replace <network> dev <device>
     ```
   * Example: Replace the defined route for 192.168.1.0/24 to use device em1
-    * ```
+    ```
     ip route replace 192.168.1.0/24 dev em1
     ```
 * Display the route an address will take
   * Template
-    * ```
+    ```
     ip route get <IP_address>
     ```
   * Example: Display the route taken for IP 192.168.18.25
-    * ```
+    ```
     ip route get 192.168.18.25
     ```
 
 References: https://www.commandlinux.com/man-page/man8/ip.8.html
@@ -804,23 +805,23 @@
 ### Display socket statistics
 
 * Show all sockets
-  * ```
+  ```
   ss -a
   ```
 * Show detailed socket information
-  * ```
+  ```
   ss -e
   ```
 * Show timer information
-  * ```
+  ```
   ss -o
   ```
 * Do not resolve address
-  * ```
+  ```
   ss -n
   ```
 * Show process using the socket
-  * ```
+  ```
   ss -p
   ```
 
 References: https://www.commandlinux.com/man-page/man8/ss.8.html
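The flags above can also be combined. For instance, this common invocation (one combination among many) lists listening TCP and UDP sockets numerically along with their owning processes:

```
ss -tulpn
```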
@@ -833,19 +834,19 @@
 ### Query or control network driver and hardware settings
 
 * Display ring buffer for a device (e.g. **eth0**)
-  * ```
+  ```
   ethtool -g eth0
   ```
 * Display driver information for a device (e.g. **eth0**)
-  * ```
+  ```
   ethtool -i eth0
   ```
 * Identify eth0 by sight, e.g. by causing LEDs to blink on the network port
-  * ```
+  ```
   ethtool -p eth0
   ```
 * Display network and driver statistics for a device (e.g. **eth0**)
-  * ```
+  ```
   ethtool -S eth0
   ```
@@ -866,21 +867,21 @@ cat /sys/class/net/<interface>/carrier
 ### Add IP address to hardware port (ethernet)
 
 * Find ethernet port ID on both computers
-  * ```
+  ```
   ip a
   ```
 * Add IP address (DHCP or static)
   * Computer 1
-    * ```
+    ```
     ip addr add <IP_address>/24 dev <eth_port>
     ```
   * Computer 2
-    * ```
+    ```
     ip addr add <IP_address>/24 dev <eth_port>
    ```
 * [Ping](#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the address to confirm connection
-  * ```
+  ```
   ping <IP_address>
   ```
@@ -918,11 +919,11 @@ You can use the following template when you set an IP address manually:
 You can use the following template to add arguments when running a script:
 
 * Option 1
-  * ```
+  ```
   ./example_script.sh arg1 arg2
   ```
 * Option 2
-  * ```
+  ```
   sh example_script.sh "arg1" "arg2"
   ```
@@ -930,16 +931,16 @@
 * Write a script
   * File: `example_script.sh`
-    * ```bash
+    ```bash
     #!/bin/sh
     echo $@
     ```
 * Give permissions
-  * ```bash
+  ```bash
   chmod +x ./example_script.sh
   ```
 * Run the script with arguments
-  * ```bash
+  ```bash
   sh example_script.sh arg1 arg2
   ```
@@ -947,7 +948,7 @@
 ### Iterate over arguments
 
 * Write the script
-  * ```bash
+  ```bash
   # iterate_script.sh
   #!/bin/bash
   for i; do
      echo $i
   done
   ```
 * Give permissions
-  * ```
+  ```
   chmod +x ./iterate_script.sh
   ```
 * Run the script with arguments
-  * ```
+  ```
   sh iterate_script.sh arg1 arg2
   ```
 
 * The following script is equivalent
-  * ```bash
+  ```bash
   # iterate_script.sh
   #!/bin/bash
   for i in $*; do echo $i; done
   ```
@@ -977,7 +978,7 @@
 ### Count lines in files given as arguments
 
 * Write the script
-  * ```bash
+  ```bash
   # count_lines.sh
   #!/bin/bash
   for i in $*; do
      nlines=$(wc -l < $i)
      echo "There are $nlines lines in $i"
   done
   ```
 * Give permissions
-  * ```
+  ```
   chmod +x ./count_lines.sh
   ```
 * Run the script with arguments (files). Here we use the script itself as an example.
-  * ```
+  ```
   sh count_lines.sh count_lines.sh
   ```
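One caveat with the loop above: the unquoted `$*` breaks on file names that contain spaces. A more robust variant of the same script (a sketch, otherwise identical in behavior) quotes its expansions:

```
# count_lines_safe.sh
#!/bin/bash
for i in "$@"; do
   nlines=$(wc -l < "$i")
   echo "There are $nlines lines in $i"
done
```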
@@ -999,14 +1000,14 @@
 ### Find path of a file
 
 * Write the script
-  * ```bash
+  ```bash
   # find.sh
   #!/bin/bash
   
   find / -iname $1 2> /dev/null
   ```
 * Run the script
-  * ```
+  ```
   sh find.sh <file_name>
   ```
@@ -1015,13 +1016,13 @@
 ### Print how many arguments are passed in a script
 
 * Write the script
-  * ```bash
+  ```bash
   # print_qty_args.sh
   #!/bin/bash
   
   echo This script was passed $# arguments
   ```
 * Run the script
-  * ```
+  ```
   sh print_qty_args.sh
   ```
@@ -1050,7 +1051,7 @@ Note that the Terraform documentation also covers other methods to install Terra
 * Option 1:
   * Use the following command line:
-    * ```
+    ```
     systemsetup -setremotelogin on
     ```
 * Option 2
@@ -1063,7 +1064,7 @@
 * Open **Finder** \> **Go** \> **Go to Folder**
 * Paste this path
-  * ```
+  ```
   ~/Library/Caches
   ```
@@ -1087,15 +1088,15 @@ To install Chocolatey on Windows, we follow the [official Chocolatey website](ht
 * Run PowerShell as Administrator
 * Check if **Get-ExecutionPolicy** is restricted
-  * ```
+  ```
   Get-ExecutionPolicy
   ```
 * If it is restricted, run the following command:
-  * ```
+  ```
   Set-ExecutionPolicy AllSigned
   ```
 * Install Chocolatey
-  * ```
+  ```
   Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
   ```
 * Note: You might need to restart PowerShell to use Chocolatey
@@ -1107,7 +1108,7 @@
 Once you've installed Chocolatey on Windows, installing Terraform is as simple as can be:
 
 * Install Terraform with Chocolatey
-  * ```
+  ```
   choco install terraform
   ```
-- 
2.40.1


From 8f74e1e9bfb5f85ff2fa121286db3f2a32343a45 Mon Sep 17 00:00:00 2001
From: Mik-TF
Date: Tue, 14 May 2024 13:02:02 -0400
Subject: [PATCH 30/34] manual, sysadmins, computer basics

---
 .../computer_it_basics/docker_basics.md       | 60 ++++++------
 .../computer_it_basics/file_transfer.md       | 34 +++----
 .../computer_it_basics/git_github_basics.md   | 96 ++++++++++---------
 3 files changed, 98 insertions(+), 92 deletions(-)

diff --git a/collections/system_administrators/computer_it_basics/docker_basics.md b/collections/system_administrators/computer_it_basics/docker_basics.md
index ede603f..105804a 100644
--- a/collections/system_administrators/computer_it_basics/docker_basics.md
+++ b/collections/system_administrators/computer_it_basics/docker_basics.md
@@ -70,16 +70,16 @@ sudo sh get-docker.sh
 To completely remove docker from your machine, you can follow these steps:
 
 * List the docker packages
-  * ```
+  ```
   dpkg -l | grep -i docker
   ```
 * Purge and autoremove docker
-  * ```
+  ```
   apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli docker-compose-plugin
   apt-get autoremove -y --purge docker-engine docker docker.io docker-ce docker-compose-plugin
   ```
 * Remove the docker files and folders
-  * ```
+  ```
   rm -rf /var/lib/docker /etc/docker
   rm /etc/apparmor.d/docker
   groupdel docker
   rm -rf /var/run/docker.sock
   ```
@@ -93,11 +93,11 @@ You can also use the command **whereis docker** to see if any Docker folders and
 ### List containers
 
 * List only running containers
-  * ```
+  ```
   docker ps
   ```
 * List all containers (running + stopped)
-  * ```
+  ```
   docker ps -a
   ```
@@ -108,15 +108,15 @@ You can also use the command **whereis docker** to see if any Docker folders and
 To pull an image from [Docker Hub](https://hub.docker.com/):
 
 * Pull an image
-  * ```
+  ```
   docker pull <image>
   ```
 * Pull an image with the tag
-  * ```
+  ```
   docker pull <image>:<tag>
   ```
 * Pull all tags of an image
-  * ```
+  ```
   docker pull -a <image>
   ```
@@ -127,15 +127,15 @@ To pull an image from [Docker Hub](https://hub.docker.com/):
 To push an image to [Docker Hub](https://hub.docker.com/):
 
 * Push an image
-  * ```
+  ```
   docker push <image>
   ```
 * Push an image with the tag
-  * ```
+  ```
   docker push <image>:<tag>
   ```
 * Push all tags of an image
-  * ```
+  ```
   docker push -a <image>
   ```
@@ -144,11 +144,11 @@
 ### Inspect and pull an image with GHCR
 
 * Inspect the docker image
-  * ```
+  ```
   docker inspect ghcr.io/<user>/<image_name>:<tag>
   ```
 * Pull the docker image
-  * ```
+  ```
   docker pull ghcr.io/<user>/<image_name>:<tag>
   ```
@@ -174,20 +174,20 @@ To install Skopeo, read [this documentation](install.md).
 Use **docker build** to build a container based on a Dockerfile
 
 * Build a container based on current directory Dockerfile
-  * ```
+  ```
   docker build .
   ```
 * Build a container and store the image with a given name
   * Template
-    * ```
+    ```
     docker build -t "<name>:<tag>"
     ```
   * Example
-    * ```
+    ```
     docker build -t newimage:latest
     ```
 * Build a docker container without using the cache
-  * ```
+  ```
   docker build --no-cache
   ```
@@ -206,15 +206,15 @@
 To run a container based on an image, use the command **docker run**.
 
 * Run an image
-  * ```
+  ```
   docker run <image>
   ```
 * Run an image in the background (run and detach)
-  * ```
+  ```
   docker run -d <image>
   ```
 * Run an image with CLI input
-  * ```
+  ```
   docker run -it <image>
   ```
@@ -229,7 +229,7 @@ You can also specify the shell, e.g. **docker run -it /bin/bash**
 To run a new command in an existing container, use **docker exec**.
 
 * Execute interactive shell on the container
-  * ```
+  ```
   docker exec -it <container_name> sh
   ```
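Tying **docker run** and **docker exec** together, a typical session might look like the following sketch; the image, container name and port mapping are illustrative only:

```
docker run -d --name web -p 8080:80 nginx   # start a detached container
docker exec -it web sh                      # open a shell inside it
docker stop web && docker rm web            # clean up afterwards
```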
@@ -238,11 +238,11 @@ To run a new command in an existing container, use **docker exec**.
 ### Bash shell into container
 
 * Bash shell into a container
-  * ```
+  ```
   docker exec -i -t <container_name> /bin/bash
   ```
 * Bash shell into a container with root
-  * ```
+  ```
   docker exec -i -t -u root <container_name> /bin/bash
   ```
@@ -300,22 +300,22 @@ docker cp <container>:<container_path> <local_path>
 ### Delete all the containers, images and volumes
 
 * To delete all containers:
-  * ```
+  ```
   docker compose rm -f -s -v
   ```
 * To delete all images:
-  * ```
+  ```
   docker rmi -f $(docker images -aq)
   ```
 * To delete all volumes:
-  * ```
+  ```
   docker volume rm $(docker volume ls -qf dangling=true)
   ```
 * To delete all containers, images and volumes:
-  * ```
+  ```
   docker compose rm -f -s -v && docker rmi -f $(docker images -aq) && docker volume rm $(docker volume ls -qf dangling=true)
   ```
@@ -324,7 +324,7 @@
 ### Kill all the Docker processes
 
 * To kill all processes:
-  * ```
+  ```
   killall Docker && open /Applications/Docker.app
   ```
@@ -353,11 +353,11 @@ docker ps -s
 ### Examine disks usage
 
 * Basic mode
-  * ```
+  ```
   docker system df
   ```
 * Verbose mode
-  * ```
+  ```
   docker system df -v
   ```

diff --git a/collections/system_administrators/computer_it_basics/file_transfer.md b/collections/system_administrators/computer_it_basics/file_transfer.md
index 41718dc..a6320d3 100644
--- a/collections/system_administrators/computer_it_basics/file_transfer.md
+++ b/collections/system_administrators/computer_it_basics/file_transfer.md
@@ -33,14 +33,14 @@ Deploying on the TFGrid with tools such as the Playground and Terraform is easy
 ### File transfer with IPv4
 
 * From local to remote, write the following on the local terminal:
-  * ```
+  ```
   scp <path>/<filename> <username>@<IPv4_address>:<path>/<filename>
   ```
 * From remote to local, you can write the following on the local terminal (more secure):
-  * ```
+  ```
   scp <username>@<IPv4_address>:<path>/<filename> <path>/<filename>
   ```
 * From remote to local, you can also write the following on the remote terminal:
-  * ```
+  ```
   scp <path>/<filename> <username>@<IPv4_address>:<path>/<filename>
   ```
 
 ### File transfer with IPv6
 
 For IPv6, it is similar to IPv4 but you need to add `-6` after scp and add `\[` and `\]` around the IPv6 address.
 
 We show here how to transfer files between two computers. Note that at least one of the two computers must be local. This will transfer the content of the source directory into the destination directory.
 
 * From local to remote
-  * ```
+  ```
   rsync -avz --progress --delete /path/to/local/directory/ remote_user@<IP_address>:/path/to/remote/directory
   ```
 * From remote to local
-  * ```
+  ```
   rsync -avz --progress --delete remote_user@<IP_address>:/path/to/remote/directory/ /path/to/local/directory
   ```
@@ -77,16 +77,16 @@ Here is short description of the parameters used:
 [rsync-sidekick](https://github.com/m-manu/rsync-sidekick) propagates changes from source directory to destination directory. You can run rsync-sidekick before running rsync. Make sure that [Go is installed](#install-go).
 
 * Install rsync-sidekick
-  * ```
+  ```
   sudo go install github.com/m-manu/rsync-sidekick@latest
   ```
 * Reorganize the files and folders with rsync-sidekick
-  * ```
+  ```
   rsync-sidekick /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
   ```
 * Transfer and update files and folders with rsync
-  * ```
+  ```
   sudo rsync -avz --progress --delete --log-file=/path/to/local/directory/rsync_storage.log /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
   ```
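Since `--delete` removes files on the destination, it can be prudent to preview a sync with `-n` (`--dry-run`) before running it for real. A sketch using the same placeholder paths:

```
rsync -avzn --progress --delete /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
```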
 We show how to automate file transfers between two computers using rsync.
 
 * Create the script file
-  * ```
+  ```
   nano rsync_backup.sh
   ```
 * Write the following script with the proper paths. Here the log is saved in the same directory.
-  * ```
+  ```
   # filename: rsync_backup.sh
   #!/bin/bash
   sudo rsync -avz --progress --delete --log-file=/path/to/local/directory/rsync_storage.log /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
   ```
 * Give permission
-  * ```
+  ```
   sudo chmod +x /path/to/script/rsync_backup.sh
   ```
 * Set a cron job to run the script periodically
   * Copy the script
   ```
   sudo cp path/to/script/rsync_backup.sh /root
   ```
   * Open the cron file
-    * ```
+    ```
     sudo crontab -e
     ```
   * Add the following to run the script every day. For this example, we set the time at 18:00.
-    * ```
+    ```
     0 18 * * * /root/rsync_backup.sh
     ```
 
 Depending on your situation, the parameters **--checksum** or **--ignore-times** can be quite useful. Note that adding either parameter will slow the transfer.
 
 * With **--ignore-times**, you ignore both the time and size of each file. This means that you transfer all files from source to destination.
-  * ```
+  ```
   rsync --ignore-times source_folder/ destination_folder
   ```
 * With **--checksum**, you verify with a checksum that the files from source and destination are the same. This means that you transfer all files that have a different checksum compared from source to destination.
-  * ```
+  ```
   rsync --checksum source_folder/ destination_folder
   ```
 
 rsync does not act the same whether you use or not a slash ("\/") at the end of the source path.
 
 * Copy content of **source_folder** into **destination_folder** to obtain the result: **destination_folder/source_folder_content**
-  * ```
+  ```
   rsync source_folder/ destination_folder
   ```
 * Copy **source_folder** into **destination_folder** to obtain the result: **destination_folder/source_folder/source_folder_content**
-  * ```
+  ```
   rsync source_folder destination_folder
   ```

diff --git a/collections/system_administrators/computer_it_basics/git_github_basics.md b/collections/system_administrators/computer_it_basics/git_github_basics.md
index dca25e4..831d810 100644
--- a/collections/system_administrators/computer_it_basics/git_github_basics.md
+++ b/collections/system_administrators/computer_it_basics/git_github_basics.md
@@ -16,6 +16,12 @@
 - [Go to another branch](#go-to-another-branch)
 - [Add your changes to a local branch](#add-your-changes-to-a-local-branch)
 - [Push changes of a local branch to the remote Github branch](#push-changes-of-a-local-branch-to-the-remote-github-branch)
+  - [Count the differences between two branches](#count-the-differences-between-two-branches)
+  - [See the default branch](#see-the-default-branch)
+  - [Force a push](#force-a-push)
+  - [Merge a branch to a different branch](#merge-a-branch-to-a-different-branch)
+  - [Clone completely one branch to another branch locally then push the changes to Github](#clone-completely-one-branch-to-another-branch-locally-then-push-the-changes-to-github)
+  - [The 3 levels of the command reset](#the-3-levels-of-the-command-reset)
 - [Reverse modifications to a file where changes haven't been staged yet](#reverse-modifications-to-a-file-where-changes-havent-been-staged-yet)
 - [Download binaries from Github](#download-binaries-from-github)
 - [Resolve conflicts between branches](#resolve-conflicts-between-branches)
@@ -50,11 +56,11 @@ You can install git on MAC, Windows and Linux. You can consult Git's documentati
 ### Install on Linux
 
 * Fedora distribution
-  * ```
+  ```
   dnf install git-all
   ```
 * Debian-based distribution
-  * ```
+  ```
   apt install git-all
   ```
 * Click [here](https://git-scm.com/download/linux) for other Linux distributions
 
 ### Install on MAC
 
 * With Homebrew
-  * ```
+  ```
   brew install git
   ```
@@ -125,11 +131,11 @@ git checkout <branch>
 ### Add your changes to a local branch
 
 * Add all changes
-  * ```
+  ```
   git add .
   ```
 * Add changes of a specific file
-  * ```
+  ```
   git add <path>/<file>
   ```
@@ -139,13 +145,13 @@ git checkout <branch>
 To push changes to Github, you can use the following commands:
 
-* ```
+  ```
  git add .
  ```
-* ```
+  ```
  git commit -m "write your changes here in comment"
  ```
-* ```
+  ```
  git push
  ```
@@ -178,15 +184,15 @@ git push --force
 ### Merge a branch to a different branch
 
 * Checkout the branch you want to copy content TO
-  * ```
+  ```
   git checkout branch_name
   ```
 * Merge the branch you want content FROM
-  * ```
+  ```
   git merge origin/dev_mermaid
   ```
 * Push the changes
-  * ```
+  ```
   git push -u origin/head
   ```
@@ -197,19 +203,19 @@ git push --force
 For this example, we copy **branchB** into **branchA**.
 
 * See available branches
-  * ```
+  ```
   git branch -r
   ```
 * Go to **branchA**
-  * ```
+  ```
   git checkout branchA
   ```
 * Copy **branchB** into **branchA**
-  * ```
+  ```
   git reset --hard branchB
   ```
 * Force the push
-  * ```
+  ```
   git push --force
   ```
@@ -217,17 +223,17 @@
 ### The 3 levels of the command reset
 
-* ```
+  ```
  git reset --soft
  ```
  * Bring the History to the Stage/Index
  * Discard last commit
-* ```
+  ```
  git reset --mixed
  ```
  * Bring the History to the Working Directory
  * Discard last commit and add
-* ```
+  ```
  git reset --hard
  ```
  * Bring the History to the Working Directory
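As a quick illustration of the three levels, the sketch below undoes the last commit while keeping the work staged; `HEAD~1` simply designates the parent of the current commit:

```
git reset --soft HEAD~1   # history moves back one commit; changes stay staged
git status                # the undone changes now appear as staged
```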
@@ -252,7 +258,7 @@ git checkout <file>
 ### Download binaries from Github
 
 * Template:
-  * ```
+  ```
   wget -O <binary_name> https://raw.githubusercontent.com/<account>/<repo>/<branch>/<path_to_binary>
   ```
@@ -263,29 +269,29 @@ git checkout <file>
 We show how to resolve conflicts in a development branch (e.g. **branch_dev**) and then merging the development branch into the main branch (e.g. **branch_main**).
 
 * Clone the repo
-  * ```
+  ```
   git clone <repository_url>
   ```
 * Pull changes and potential conflicts
-  * ```
+  ```
   git pull origin branch_main
   ```
 * Checkout the development branch
-  * ```
+  ```
   git checkout branch_dev
   ```
 * Resolve conflicts in a text editor
 * Save changes in the files
 * Add the changes
-  * ```
+  ```
   git add .
   ```
 * Commit the changes
-  * ```
+  ```
   git commit -m "your message here"
   ```
 * Push the changes
-  * ```
+  ```
   git push
   ```
@@ -294,11 +300,11 @@
 ### Download all repositories of an organization
 
 * Log in to gh
-  * ```
+  ```
   gh auth login
   ```
 * Clone all repositories. Replace <organization> with the organization in question.
-  * ```
+  ```
   gh repo list <organization> --limit 1000 | while read -r repo _; do
     gh repo clone "$repo" "$repo"
   done
   ```
@@ -309,15 +315,15 @@
 ### Revert a push committed with git
 
 * Find the commit ID
-  * ```
+  ```
   git log -p
   ```
 * Revert the commit
-  * ```
+  ```
   git revert <commit_id>
   ```
 * Push the changes
-  * ```
+  ```
   git push
   ```
@@ -334,10 +340,10 @@ git clone -b <branch> --single-branch <url>/<account>/<repo>.git
 ### Revert to a backup branch
 
 * Checkout the branch you want to update (**branch**)
-  * ```
+  ```
   git checkout <branch>
   ```
 * Do a reset of your current branch based on the backup branch
-  * ```
+  ```
   git reset --hard <backup_branch>
   ```
@@ -363,19 +369,19 @@ Note that this will not work for untracked and new files. See below for untracke
 This method can be used to overwrite local files. This will work even if you have untracked and new files.
 
 * Save local changes on a stash
-  * ```
+  ```
   git stash --include-untracked
   ```
 * Discard local changes
-  * ```
+  ```
   git reset --hard
   ```
 * Discard untracked and new files
-  * ```
+  ```
   git clean -fd
   ```
 * Pull the remote branch
-  * ```
+  ```
   git pull
   ```
 
 Then, to delete the stash, you can use **git stash drop**.
@@ -388,27 +394,27 @@
 The stash command is used to record the current state of the working directory.
 
 * Stash a branch (equivalent to **git stash push**)
-  * ```
+  ```
   git stash
   ```
 * List the changes in the stash
-  * ```
+  ```
   git stash list
   ```
 * Inspect the changes in the stash
-  * ```
+  ```
   git stash show
   ```
 * Remove a single stashed state from the stash list and apply it on top of the current working tree state
-  * ```
+  ```
   git stash pop
   ```
 * Apply the stash on top of the current working tree state without removing the state from the stash list
-  * ```
+  ```
   git stash apply
   ```
 * Drop a stash
-  * ```
+  ```
   git stash drop
   ```
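A typical round trip with the commands above might look like the following sketch; the stash entry shown is illustrative:

```
git stash         # shelve the current changes
git stash list    # e.g. stash@{0}: WIP on main: abc1234 ...
git stash pop     # re-apply the most recent stash and drop it
```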
@@ -431,15 +437,15 @@ To download VS-Code, visit their website and follow the given instructions.
 There are many ways to install VS-Codium. Visit the [official website](https://vscodium.com/#install) for more information.
 
 * Install on MAC
-  * ```
+  ```
   brew install --cask vscodium
   ```
 * Install on Linux
-  * ```
+  ```
   snap install codium --classic
   ```
 * Install on Windows
-  * ```
+  ```
   choco install vscodium
   ```
-- 
2.40.1


From 3540d7d64d56bc4debfe0c027011c044f9727621 Mon Sep 17 00:00:00 2001
From: Mik-TF
Date: Tue, 14 May 2024 13:04:07 -0400
Subject: [PATCH 31/34] manual, sysadmins, ipfs

---
 .../advanced/ipfs/ipfs_fullvm.md              | 58 +++++++++----------
 .../advanced/ipfs/ipfs_microvm.md             | 42 +++++++-------
 2 files changed, 50 insertions(+), 50 deletions(-)

diff --git a/collections/system_administrators/advanced/ipfs/ipfs_fullvm.md b/collections/system_administrators/advanced/ipfs/ipfs_fullvm.md
index e61a173..f2312a4 100644
--- a/collections/system_administrators/advanced/ipfs/ipfs_fullvm.md
+++ b/collections/system_administrators/advanced/ipfs/ipfs_fullvm.md
@@ -30,7 +30,7 @@ We start by deploying a full VM on the ThreeFold Playground.
 * Minimum storage: 50GB
 * After deployment, note the VM IPv4 address
 * Connect to the VM via SSH
-  * ```
+  ```
   ssh root@VM_IPv4_address
   ```
@@ -39,39 +39,39 @@ We create a root-access user. Note that this step is optional.
 * Once connected, create a new user with root access (for this guide we use "newuser")
-  * ```
+  ```
   adduser newuser
   ```
 * You should now see the new user directory
-  * ```
+  ```
   ls /home
   ```
 * Give sudo capacity to the new user
-  * ```
+  ```
   usermod -aG sudo newuser
   ```
 * Switch to the new user
-  * ```
+  ```
   su - newuser
   ```
 * Create a directory to store the public key
-  * ```
+  ```
   mkdir ~/.ssh
   ```
 * Give read, write and execute permissions for the directory to the new user
-  * ```
+  ```
   chmod 700 ~/.ssh
   ```
 * Add the SSH public key in the file **authorized_keys** and save it
-  * ```
+  ```
   nano ~/.ssh/authorized_keys
   ```
 * Exit the VM
-  * ```
+  ```
   exit
   ```
 * Reconnect with the new user
-  * ```
+  ```
   ssh newuser@VM_IPv4_address
   ```
@@ -81,19 +81,19 @@
 We set a firewall to monitor and control incoming and outgoing network traffic.
 
 For our security rules, we want to allow SSH (port 22) and the IPFS swarm port (4001). We thus add the following rules:
 
 * Allow SSH (port 22)
-  * ```
+  ```
   sudo ufw allow ssh
   ```
 * Allow port 4001
-  * ```
+  ```
   sudo ufw allow 4001
   ```
 * To enable the firewall, write the following:
-  * ```
+  ```
   sudo ufw enable
   ```
 * To see the current security rules, write the following:
-  * ```
+  ```
   sudo ufw status verbose
   ```
 
 You now have enabled the firewall with proper security rules for your IPFS deployment.
@@ -109,23 +109,23 @@ If you want to run pubsub capabilities, you need to allow **port 8081**. For mor
 We install the [IPFS Kubo binary](https://docs.ipfs.tech/install/command-line/#install-official-binary-distributions).
 
 * Download the binary
-  * ```
+  ```
   wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz
   ```
 * Unzip the file
-  * ```
+  ```
   tar -xvzf kubo_v0.24.0_linux-amd64.tar.gz
   ```
 * Change directory
-  * ```
+  ```
   cd kubo
   ```
 * Run the install script
-  * ```
+  ```
   sudo bash install.sh
   ```
 * Verify that IPFS Kubo is properly installed
-  * ```
+  ```
   ipfs --version
   ```
@@ -134,23 +134,23 @@
 We initialize IPFS and run the IPFS daemon.
 
 * Initialize IPFS
-  * ```
+  ```
   ipfs init --profile server
   ```
 * Increase the storage capacity (optional)
-  * ```
+  ```
   ipfs config Datastore.StorageMax 30GB
   ```
 * Run the IPFS daemon
-  * ```
+  ```
   ipfs daemon
   ```
 * Set an Ubuntu systemd service to keep the IPFS daemon running after exiting the VM
-  * ```
+  ```
   sudo nano /etc/systemd/system/ipfs.service
   ```
 * Enter the systemd info
-  * ```
+  ```
   [Unit]
   Description=IPFS Daemon
   [Service]
   ExecStart=/usr/local/bin/ipfs daemon
   [Install]
   WantedBy=multi-user.target
   ```
 * Enable the service
-  * ```
+  ```
   sudo systemctl daemon-reload
   sudo systemctl enable ipfs
   sudo systemctl start ipfs
   ```
 * Verify that the IPFS daemon is properly running
-  * ```
+  ```
   sudo systemctl status ipfs
   ```
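With the service in place, a quick sanity check might look like this sketch; the exact output will vary from node to node:

```
sudo systemctl is-active ipfs   # should print: active
ipfs id                         # prints the node's peer ID and addresses
```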
 ## Final Verification
 
 We reboot and reconnect to the VM and verify that IPFS is properly running as a final verification.
 
 * Reboot the VM
-  * ```
+  ```
   sudo reboot
   ```
 * Reconnect to the VM
-  * ```
+  ```
   ssh newuser@VM_IPv4_address
   ```
 * Check that the IPFS daemon is running
-  * ```
+  ```
   ipfs swarm peers
   ```
 
 ## Questions and Feedback

diff --git a/collections/system_administrators/advanced/ipfs/ipfs_microvm.md b/collections/system_administrators/advanced/ipfs/ipfs_microvm.md
index 2f58f16..3919dde 100644
--- a/collections/system_administrators/advanced/ipfs/ipfs_microvm.md
+++ b/collections/system_administrators/advanced/ipfs/ipfs_microvm.md
@@ -31,7 +31,7 @@ We start by deploying a micro VM on the ThreeFold Playground.
 * Minimum storage: 50GB
 * After deployment, note the VM IPv4 address
 * Connect to the VM via SSH
-  * ```
+  ```
   ssh root@VM_IPv4_address
   ```
@@ -40,11 +40,11 @@ We start by deploying a micro VM on the ThreeFold Playground.
 We install the prerequisites before installing and setting IPFS.
 
 * Update Ubuntu
-  * ```
+  ```
   apt update
   ```
 * Install nano and ufw
-  * ```
+  ```
   apt install nano && apt install ufw -y
   ```
@@ -57,20 +57,20 @@
 We set a firewall to monitor and control incoming and outgoing network traffic.
 
 For our security rules, we want to allow SSH (port 22) and the IPFS swarm port (4001). We thus add the following rules:
 
 * Allow SSH (port 22)
-  * ```
+  ```
   ufw allow ssh
   ```
 * Allow port 4001
-  * ```
+  ```
   ufw allow 4001
   ```
 * To enable the firewall, write the following:
-  * ```
+  ```
   ufw enable
   ```
 * To see the current security rules, write the following:
-  * ```
+  ```
   ufw status verbose
   ```
@@ -91,23 +91,23 @@ If you want to run pubsub capabilities, you need to allow **port 8081**. For mor
 We install the [IPFS Kubo binary](https://docs.ipfs.tech/install/command-line/#install-official-binary-distributions).
 
 * Download the binary
-  * ```
+  ```
   wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz
   ```
 * Unzip the file
-  * ```
+  ```
   tar -xvzf kubo_v0.24.0_linux-amd64.tar.gz
   ```
 * Change directory
-  * ```
+  ```
   cd kubo
   ```
 * Run the install script
-  * ```
+  ```
   bash install.sh
   ```
 * Verify that IPFS Kubo is properly installed
-  * ```
+  ```
   ipfs --version
   ```
@@ -116,15 +116,15 @@
 We initialize IPFS and run the IPFS daemon.
 
 * Initialize IPFS
-  * ```
+  ```
   ipfs init --profile server
   ```
 * Increase the storage capacity (optional)
-  * ```
+  ```
   ipfs config Datastore.StorageMax 30GB
   ```
 * Run the IPFS daemon
-  * ```
+  ```
   ipfs daemon
   ```
@@ -133,19 +133,19 @@
 We set the IPFS daemon with zinit. This will make sure that the IPFS daemon starts at each VM reboot or if it stops functioning momentarily.
 
 * Create the yaml file
-  * ```
+  ```
   nano /etc/zinit/ipfs.yaml
   ```
 * Set the execution command
-  * ```
+  ```
   exec: /usr/local/bin/ipfs daemon
   ```
 * Run the IPFS daemon with the zinit monitor command
-  * ```
+  ```
   zinit monitor ipfs
   ```
 * Verify that the IPFS daemon is running
-  * ```
+  ```
   ipfs swarm peers
   ```
@@ -154,11 +154,11 @@
 ## Final Verification
 
 We reboot and reconnect to the VM and verify that IPFS is properly running as a final verification.
 
 * Reboot the VM
-  * ```
+  ```
   reboot -f
   ```
 * Reconnect to the VM and verify that the IPFS daemon is running
-  * ```
+  ```
   ipfs swarm peers
   ```
-- 
2.40.1


From de162fe6b46e3b385cfd42b18799317a25fb0459 Mon Sep 17 00:00:00 2001
From: Mik-TF
Date: Tue, 14 May 2024 13:17:52 -0400
Subject: [PATCH 32/34] manual, faq

---
 collections/faq/faq.md                         | 16 +++++++---------
 collections/threefold_token/threefold_token.md |  6 +++---
 2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/collections/faq/faq.md b/collections/faq/faq.md
index fb60afb..d9216fd 100644
--- a/collections/faq/faq.md
+++ b/collections/faq/faq.md
@@ -743,18 +743,18 @@ To learn more about this process, [watch this great video](https://youtu.be/axvK
 If you've already done an SSH connection on your computer, the issue is most probably that the "host key has just been changed". To fix this, try one of those two solutions:
 
 * Linux and MAC:
-  * ```
+  ```
   sudo rm ~/.ssh/known_hosts
   ```
 * Windows:
-  * ```
+  ```
   rm ~/.ssh/known_hosts
   ```
 
 To be more specific, you can remove the problematic host:
 
 * Windows, Linux and MAC:
-  * ```
+  ```
   ssh-keygen -R <VM_IP_address>
   ```
@@ -2074,7 +2074,7 @@ There can be many different fixes for this error. Here are some troubleshooting
 * [Flash the RAID controller](https://fohdeesha.com/docs/perc.html) (i.e. crossflashing), OR;
 * Change the controller to a Dell H310 controller (for Dell servers)
 * Try the command **badblocks** (replace **sda** with your specific disk). Note that this command will delete all the data on the disk
-  * ```
+  ```
   sudo badblocks -svw -b 512 -t 0x00 /dev/sda
   ```
@@ -2094,7 +2094,7 @@ Anyone experiencing frequently this issue where Z-OS sometimes detects an SSD as
 * Boot a Ubuntu Linux live USB
 * Install **gnome-disks** if it isn't already installed:
-  * ```
+  ```
   sudo apt install gnome-disks
   ```
 * Open the application launcher and search for **Disks**
@@ -2161,15 +2161,13 @@ Many different reasons can cause this issue. When you get that error, sometimes
 * Fix 1:
   * Force the wiping of the disk:
-    * ```
+    ```
     sudo wipefs -af /dev/sda
     ```
 * Fix 2:
   * Unmount the disk then wipe it:
-    * ```
-    sudo umount /dev/sda
-    ```
-    * ```
+    ```
+    sudo umount /dev/sda
     sudo wipefs -a /dev/sda
     ```

diff --git a/collections/threefold_token/threefold_token.md b/collections/threefold_token/threefold_token.md
index 800efa6..a1122f3 100644
--- a/collections/threefold_token/threefold_token.md
+++ b/collections/threefold_token/threefold_token.md
@@ -36,15 +36,15 @@ TFT lives on 4 different chains: TFChain, Stellar chain, Ethereum chain and Bina
 The TFT contract addresses on the different chains are the following:
 
 - [TFT Contract address on Stellar](https://stellarchain.io/assets/TFT-GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47)
-  - ```
+  ```
   TFT-GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47
   ```
 - [TFT Contract address on Ethereum](https://etherscan.io/token/0x395E925834996e558bdeC77CD648435d620AfB5b)
-  - ```
+  ```
   0x395E925834996e558bdeC77CD648435d620AfB5b
   ```
 - [TFT Contract address on BSC](https://bscscan.com/address/0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf)
-  - ```
+  ```
   0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf
   ```
-- 
2.40.1


From e2f71ca64174a52c3c7bc20f1237eddc1956915c Mon Sep 17 00:00:00 2001
From: Mik-TF
Date: Tue, 14 May 2024 13:22:04 -0400
Subject: [PATCH 33/34] manual, collaboration

---
 collections/collaboration/contribute.md        |   6 +++---
 collections/collaboration/development_cycle.md |   2 +-
 collections/collaboration/img/dev_cycle.png    | Bin 0 -> 24454 bytes
 3 files changed, 4 insertions(+), 4 deletions(-)
 create mode 100644 collections/collaboration/img/dev_cycle.png

diff --git a/collections/collaboration/contribute.md b/collections/collaboration/contribute.md
index 796b7f5..8ef689c 100644
--- a/collections/collaboration/contribute.md
+++ b/collections/collaboration/contribute.md
@@ -68,16 +68,16 @@ To do so, you simply need to clone the forked repository on your local computer
 The steps are the following:
 
 * In the terminal, write the following line to clone the forked `info_grid` repository:
-  * ```
+  ```
   git clone https://github.com/YOUR_GIT_ACCOUNT/info_grid
   ```
   * Make sure to write your own Github account in the URL
 * To deploy the mdbook locally, first go to the **info_grid** directory:
-  * ```
+  ```
   cd info_grid
   ```
 * Then write the following line. It will open the manual automatically.
-  * ```
+  ```
   mdbook serve -o
   ```
 * Note that, by default, the URL is the following, using port `3000`: `http://localhost:3000/`

diff --git a/collections/collaboration/development_cycle.md b/collections/collaboration/development_cycle.md
index c321367..02ae9b4 100644
--- a/collections/collaboration/development_cycle.md
+++ b/collections/collaboration/development_cycle.md
@@ -1,6 +1,6 @@
 The development cycle is explained below:
 
-![Untitled presentation (1)](https://user-images.githubusercontent.com/8425762/170034170-7247a737-9d99-481d-9289-88d361275043.png)
+![dev_cycle](./img/dev_cycle.png)

diff --git a/collections/collaboration/img/dev_cycle.png b/collections/collaboration/img/dev_cycle.png
new file mode 100644
index 0000000000000000000000000000000000000000..ddf2766fdee33c702581771fd123d746b8b89f20
GIT binary patch
literal 24454
[... base85-encoded binary payload of dev_cycle.png (24454 bytes) omitted ...]
zp_8DVsS8+BOG^#X;p^ApK*6K{15Hue86gF1TO%t+xU|1Zq$qyasj|!m+Wj!Z<^N4n6=|MkXS(-XIx~9RxNi$l-j+HT!dQ zmMvY%jVsXp|9pp67$@f7Xnj1^i~M6G|V!) z#=`m$eke{)aq5Y>A413XfD)(9F0cg3Ha0d{&fH+^N|S7i$kBq#R)XLdcjE2e-X6l3 z(J&0Z7hX<{(oRvotVf0mtat(<1Kk>6P{39K6;SY9x&0J%zNb3&7b@(iUZTB5Nfuff z1Q52=bq}$JfC3;O;TCYj5OtdfDMaa}>nsreA}TVu>W}}Q^vod(BY&F_P=$jHGG{1Eb<{wKy330TT}5D4 zaOYUeAKUxwzFr8J!nnHb=j?FO8L6ib>n76MF3EX9<`mWj;LVEpwrCsmrRL}@!w-PJ zur9dv&nnr1=Rs%N&dy| zs60TD+0SQrwftV zg2ODxr$*MaWV_A?*RkRKnUy%7BN&5(Q}&3FcKX#(fK%aAvG);84vKLc!10oYio_dZ zxuWc9zT^BZ;9|@vDgd{^%r$h#BKWFfv8Tc{Vsr|kp5M+J4m6u$B<9=i$ru!X_(sPq zJ$%R;kQdt2)YPI%P^JC+ERKBOD>OKVwb(=r4|K;+&nI~BON0u^T>zj(O@rtjOK$%7r}^a#);g)hUVK;zFp*LfDY01uI^I7#|KM4~bN& zH5D{JIVlk#>BKI|^^~*F?o);?XzFg|`D^w^eip)U)C-NmiDtd^zR@E(;li$~DxCHv zq{70U+$VX>bQ%U7#zdX@NPMB%R=8!p)IMcIM+&lzz-j~{vDXx+Fs0)(vG@;&rTv1L zi2?!{J?MNZmyus%;<{|N7@$Jf4IrZrXGx6*p5S-LXuqt7%|e9?>Ysv?NH5y4ShVQM zx1r((F_!tx!OrfNoY6`f999D7UK>t;pd^K4WS235bw@{|QC*4in_AjQ%pj8-W*k`# zAuOAph0Kyl68%0nPA#y-t{CqFRPE1;8g!PNJlF^qC8r+(DPY}bmF^nG+T9RQ=-ev} z@T%nc`ucT+%QlF%)Ckarb!tq{kF$=*UV#2FIWJo3zpE89U?XabLzzLFBn+XF&QqDR z!Qm`pI=hLE7emYQUfiq(jP-Ps*WJ5!?V{u_>$Q07>Z0TQ;z8HYu|$;OAth0>D%v#n zr})7zMvcgH2qVc~@Nbb_EkM%6wPxGGD=HKU&;yguF739@!0|YWN5C|v!txv4HYwEw z*7m4HxsjcAw%P{d*=MZiosBpGyoR%UBCTrE3d+*JuTu6yM}LTkrsb@u48QxpH&jjF z^we;y-A&UI(3q<5Ku8YE5d{!B&kJL^@^T<;98$0;Gm!rm`ZVDL=w;DhEo4KZYv8CL z)FFi9Q!p0tUP3tV5R<|E*K9gY2fFntQR~N#9KOwQ=_`;WuK_j@!Fc?l1gsbZ)+_b(|)QtJ0BeU;u)UPf$6+RUu`kD|TXg z=4krEg4@qg_cO_zH{91cDM}Kl4TS?I=lEAYWh8(CqtlA0DcPReWAWTeOJ+}p6a7iv z7LVEPnNTu;Pyuw2S_%dfid2F6*8X+NgacD>>YYi=GifLwTk7Ya{_bJA#1?OYv$DREzPSh2Y$!-x&>a{6S=5( zfGpu21m@V(v3^gM=5Mm?Ui^v0q`oi)k04uc+rK(+ZRTec0hYaz0o#UWM@--Z)qT?% zCdS$o`$c)6tM>Kvc}KfNbL$&!;z;r5zn3um|B Date: Tue, 14 May 2024 13:24:02 -0400 Subject: [PATCH 34/34] manual, all parsing OK --- .../collaboration_tools/website_link_checker.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/collections/collaboration/collaboration_tools/website_link_checker.md b/collections/collaboration/collaboration_tools/website_link_checker.md index 788c5e0..162795c 100644 --- a/collections/collaboration/collaboration_tools/website_link_checker.md +++ b/collections/collaboration/collaboration_tools/website_link_checker.md @@ -39,15 +39,15 @@ flag. Otherwise exits with code 0. Note that errors set as --warnings will alway ### With Python * Clone the repository - * ``` + ``` git clone https://github.com/threefoldfoundation/website-link-checker ``` * Change directory - * ``` + ``` cd website-link-checker ``` * Run the program - * ``` + ``` python website-link-checker.py https://example.com -e 404 -w all ``` -- 2.40.1