diff --git a/collections/manual/documentation/developers/proxy/commands.md b/collections/manual/documentation/developers/proxy/commands.md

# Commands


## Table of Contents

- [Introduction](#introduction)
- [Work on Docs](#work-on-docs)
- [To start the GridProxy server](#to-start-the-gridproxy-server)
- [Run tests](#run-tests)

***

## Introduction

The Makefile covers most of the frequently used commands needed to work on the project.

## Work on Docs

We use [swaggo/swag](https://github.com/swaggo/swag) to generate the Swagger docs based on the annotations inside the code.

- Install the `swag` executable binary:

```bash
go install github.com/swaggo/swag/cmd/swag@latest
```

- If you now check the binary directory inside the Go directory, you will find the executable file:

```bash
ls $(go env GOPATH)/bin
```

- To run `swag` you can either use the full path `$(go env GOPATH)/bin/swag` or add the Go binary directory to `$PATH`:

```bash
export PATH=$PATH:$(go env GOPATH)/bin
```

- Use `swag` to format the code comments:

```bash
swag fmt
```

- Update the docs:

```bash
swag init
```

- To parse external types from vendor:

```bash
swag init --parseVendor
```

- For a full docs-generation command:

```bash
make docs
```

## To start the GridProxy server

After preparing the postgres database you can `go run` the main file in `cmds/proxy_server/main.go`, which is responsible for starting all the needed servers/clients.
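For example, a minimal local run without TLS might look like this (flag values are illustrative; they assume the local test database described in the setup guide, and the full option list is shown below):

```bash
# Illustrative local run against a local test database, no certificate
go run cmds/proxy_server/main.go \
  -no-cert \
  -address :8080 \
  -postgres-host 127.0.0.1 \
  -postgres-db tfgrid-graphql \
  -postgres-user postgres \
  -postgres-password postgres
```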
The server options:

| Option | Description |
|---|---|
| -address | Server ip address (default `":443"`) |
| -ca | certificate authority used to generate the certificate (default `"https://acme-staging-v02.api.letsencrypt.org/directory"`) |
| -cert-cache-dir | path to store the generated certs in (default `"/tmp/certs"`) |
| -domain | domain on which the server will be served |
| -email | email address to generate the certificate with |
| -log-level | log level |
| -no-cert | start the server without a certificate |
| -postgres-db | postgres database |
| -postgres-host | postgres host |
| -postgres-password | postgres password |
| -postgres-port | postgres port (default 5432) |
| -postgres-user | postgres username |
| -tfchain-url | TFChain URL (default `"wss://tfchain.dev.grid.tf/ws"`) |
| -relay-url | RMB relay URL (default `"wss://relay.dev.grid.tf"`) |
| -mnemonics | Dummy user mnemonics for relay calls |
| -v | shows the package version |

For a full server setup:

```bash
make restart
```

## Run tests

There are two types of tests in the project:

- Unit Tests
  - Found in `pkg/client/*_test.go`
  - Run with `go test -v ./pkg/client`
- Integration Tests
  - Found in `tests/queries/`
  - Run with:

```bash
go test -v \
--seed 13 \
--postgres-host \
--postgres-db tfgrid-graphql \
--postgres-password postgres \
--postgres-user postgres \
--endpoint \
--mnemonics
```

  - Or, to run a specific test, append the previous command with:

```bash
-run <TestName>
```

  You can find the test names in the `tests/queries/*_test.go` files.

To run all the tests use:

```bash
make test-all
```

diff --git a/collections/manual/documentation/developers/proxy/contributions.md b/collections/manual/documentation/developers/proxy/contributions.md

# Contributions Guide


## Table of Contents

- [Introduction](#introduction)
- [Project structure](#project-structure)
  - [Internal](#internal)
  - [Pkg](#pkg)
- [Writing tests](#writing-tests)

***

## Introduction

We propose a quick guide to learn how to contribute.

## Project structure

The main structure of the code base is as follows:

- `charts`: helm chart
- `cmds`: includes the project Golang entrypoints
- `docs`: project documentation
- `internal`: contains the explorer API logic and the cert manager implementation; this is where most of the feature work will be done
- `pkg`: contains the client implementation and shared libs
- `tests`: integration tests
- `tools`: DB tools to prepare the Postgres DB for testing and development
- `rootfs`: ZOS root endpoint that will be mounted in the docker image

### Internal

- `explorer`: contains the explorer server logic:
  - `db`: the db connection and operations
  - `mw`: defines the generic action mount that will be used as an http handler
- `certmanager`: logic to ensure certificates are available and up to date

`server.go` includes the logic for all the API operations.

### Pkg

- `client`: client implementation
- `types`: defines all the API objects

## Writing tests

Adding a new endpoint should be accompanied by a corresponding test. Ideally every change or bug fix should include a test to ensure the new behavior/fix works as intended.

Since these are integration tests, you need to first make sure that your local db is already seeded with the necessary data. See the tools [doc](./db_testing.md) for more information about how to prepare your db.

The testing tools offer two clients that are the basis of most tests:

- `local`: this client connects to the local db
- `proxy client`: this client connects to the running local instance

You need to start an instance of the server before running the tests. Check [here](./commands.md) for how to start.
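Putting it together, a session for running one integration test might look like this (the host, endpoint, mnemonics and test name are placeholders; substitute your own values):

```bash
# Hypothetical example: run a single integration test against a seeded
# local db and a proxy instance already listening on port 8080
go test -v ./tests/queries \
  --seed 13 \
  --postgres-host 127.0.0.1 \
  --postgres-db tfgrid-graphql \
  --postgres-password postgres \
  --postgres-user postgres \
  --endpoint http://localhost:8080 \
  --mnemonics "$MNEMONICS" \
  -run TestName
```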
diff --git a/collections/manual/documentation/developers/proxy/database.md b/collections/manual/documentation/developers/proxy/database.md

# Database


## Table of Contents

- [Introduction](#introduction)
- [Max Open Connections](#max-open-connections)

***

## Introduction

The grid proxy has access to a postgres database containing information about the TFGrid, specifically information about grid nodes, farms, twins, and contracts.\
The database is filled/updated by this [indexer](https://github.com/threefoldtech/tfchain_graphql).\
The grid proxy mainly retrieves information from the db, with a few modifications for efficient retrieval (e.g. adding indices, caching node GPUs, etc.).

## Max Open Connections

The postgres database can handle 100 open connections concurrently (the default value set by postgres). Depending on the infrastructure, this number can be increased by modifying it in the postgres.conf file where the db is deployed, or by executing the query `ALTER SYSTEM SET max_connections = <new-size>`, but this requires a db restart to take effect.\
The explorer creates a connection pool to the postgres db, with the maximum number of open pool connections set to a specific number (currently 80).\
It is important to distinguish between the database's max connections and the pool's max open connections: if the pool had no constraint, it would try to open as many connections as it wanted, with no notion of the maximum number of connections the database accepts; it would then be the database's responsibility to accept or deny each connection.\
This is why the max number of open pool connections is set to 80: it is below the max connections the database can handle (100), and it leaves room for actors other than the explorer to open connections with the database.

diff --git a/collections/manual/documentation/developers/proxy/db_testing.md b/collections/manual/documentation/developers/proxy/db_testing.md

# DB for testing


## Table of Contents

- [Introduction](#introduction)
- [Run postgresql container](#run-postgresql-container)
- [Create the DB](#create-the-db)
  - [Method 1: Generate a db with relevant schema using the db helper tool](#method-1-generate-a-db-with-relevant-schema-using-the-db-helper-tool)
  - [Method 2: Fill the DB from a production db dump file](#method-2-fill-the-db-from-a-production-db-dump-file)

***

## Introduction

We show how to use a database for testing.

## Run postgresql container

```bash
docker run --rm --name postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=tfgrid-graphql \
  -p 5432:5432 -d postgres
```

## Create the DB

You can either generate a db with a relevant schema to test things locally quickly, or load a previously taken DB dump file.

### Method 1: Generate a db with relevant schema using the db helper tool

```bash
cd tools/db/ && go run . \
  --postgres-host 127.0.0.1 \
  --postgres-db tfgrid-graphql \
  --postgres-password postgres \
  --postgres-user postgres \
  --reset
```

### Method 2: Fill the DB from a production db dump file

For example, if you have a `dump.sql` file, you can run:

```bash
psql -h 127.0.0.1 -U postgres -d tfgrid-graphql < dump.sql
```

diff --git a/collections/manual/documentation/developers/proxy/explorer.md b/collections/manual/documentation/developers/proxy/explorer.md

# The Grid Explorer


## Table of Contents

- [Introduction](#introduction)
- [Explorer Overview](#explorer-overview)
- [Explorer Endpoints](#explorer-endpoints)

***

## Introduction

The Grid Explorer is a REST API used to index various information from TFChain.

## Explorer Overview

- Due to limitations on indexing information from the blockchain, complex inter-table queries and filters can't be applied directly on the chain.
- Here comes the TFGridDB: a shadow database that contains all the data on the chain and is updated every 2 hours.
- The explorer can then apply raw SQL queries on the database, with all the filtering and limits needed.
- The technology used to extract the info from the blockchain is Subsquid; check the [repo](https://github.com/threefoldtech/tfchain_graphql).

## Explorer Endpoints

| HTTP Verb | Endpoint | Description |
| --------- | --------------------------- | ---------------------------------- |
| GET | `/contracts` | Show all contracts on the chain |
| GET | `/farms` | Show all farms on the chain |
| GET | `/gateways` | Show all gateway nodes on the grid |
| GET | `/gateways/:node_id` | Get a single gateway node's details |
| GET | `/gateways/:node_id/status` | Get a single node's status |
| GET | `/nodes` | Show all nodes on the grid |
| GET | `/nodes/:node_id` | Get a single node's details |
| GET | `/nodes/:node_id/status` | Get a single node's status |
| GET | `/nodes/:node_id/statistics`| Get a single node's ZOS statistics |
| GET | `/stats` | Show the grid statistics |
| GET | `/twins` | Show all the twins on the chain |

For the available filters on each endpoint, check the `/swagger/index.html` endpoint on the running instance.

diff --git a/collections/manual/documentation/developers/proxy/production.md b/collections/manual/documentation/developers/proxy/production.md

# Running Proxy in Production


## Table of Contents

- [Introduction](#introduction)
- [Production Run](#production-run)
- [To upgrade the machine](#to-upgrade-the-machine)
- [Dockerfile](#dockerfile)
- [Update helm package](#update-helm-package)
- [Install the chart using helm package](#install-the-chart-using-helm-package)

***

## Introduction

We show how to run the grid proxy in production.

## Production Run

- Download the latest binary [here](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/grid-client)
- Add execute permission to the binary and move it to the bin directory:

  ```bash
  chmod +x ./gridproxy-server
  mv ./gridproxy-server /usr/local/bin/gridproxy-server
  ```

- Add a new systemd service:

```bash
cat << EOF > /etc/systemd/system/gridproxy-server.service
[Unit]
Description=grid proxy server
After=network.target

[Service]
ExecStart=gridproxy-server --domain gridproxy.dev.grid.tf --email omar.elawady.alternative@gmail.com -ca https://acme-v02.api.letsencrypt.org/directory --postgres-host 127.0.0.1 --postgres-db db --postgres-password password --postgres-user postgres --mnemonics
Type=simple
Restart=always
User=root
Group=root

[Install]
WantedBy=multi-user.target
Alias=gridproxy.service
EOF
```

- Enable the service:

  ```bash
  systemctl enable gridproxy.service
  ```

- Start the service:

  ```bash
  systemctl start gridproxy.service
  ```

- Check the status:

  ```bash
  systemctl status gridproxy.service
  ```

- The command options:
  - domain: the host domain for which the SSL certificate will be generated.
  - email: the email address used to generate the SSL certificate.
  - ca: certificate authority server url, e.g.
    - let's encrypt staging: `https://acme-staging-v02.api.letsencrypt.org/directory`
    - let's encrypt production: `https://acme-v02.api.letsencrypt.org/directory`
  - postgres-\*: postgres connection info.
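Once the service is running, you can sanity-check it with a quick request (the domain here is whatever you passed with `--domain`; `gridproxy.dev.grid.tf` is only an example, and `/stats` is one of the documented explorer endpoints):

```bash
# Service state should be "active"
systemctl is-active gridproxy.service
# Should return grid statistics as JSON if the proxy is healthy
curl -s https://gridproxy.dev.grid.tf/stats
```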
## To upgrade the machine

- Replace the binary with the new one, then restart the service:

```bash
systemctl restart gridproxy-server.service
```

- If you have made changes in `/etc/systemd/system/gridproxy-server.service`, you have to run this command first:

```bash
systemctl daemon-reload
```

## Dockerfile

To build and run the Dockerfile:

```bash
docker build -t threefoldtech/gridproxy .
docker run --name gridproxy -e POSTGRES_HOST="127.0.0.1" -e POSTGRES_PORT="5432" -e POSTGRES_DB="db" -e POSTGRES_USER="postgres" -e POSTGRES_PASSWORD="password" -e MNEMONICS="" threefoldtech/gridproxy
```

## Update helm package

- Do `helm lint charts/gridproxy`
- Regenerate the packages: `helm package -u charts/gridproxy`
- Regenerate index.yaml: `helm repo index --url https://threefoldtech.github.io/tfgridclient_proxy/ .`
- Push your changes

## Install the chart using helm package

- Add the repo to your helm:

  ```bash
  helm repo add gridproxy https://threefoldtech.github.io/tfgridclient_proxy/
  ```

- Install the chart (Helm 3 requires a release name):

  ```bash
  helm install gridproxy gridproxy/gridproxy
  ```

diff --git a/collections/manual/documentation/developers/proxy/proxy.md b/collections/manual/documentation/developers/proxy/proxy.md

# Introducing Grid Proxy


## Table of Contents

- [About](#about)
- [How to Use the Project](#how-to-use-the-project)
- [Used Technologies \& Prerequisites](#used-technologies--prerequisites)
- [Start for Development](#start-for-development)
- [Setup for Production](#setup-for-production)
- [Get and Install the Binary](#get-and-install-the-binary)
- [Add as a Systemd Service](#add-as-a-systemd-service)

***

## About

The TFGrid Client Proxy acts as an interface to access information about the grid. It supports features such as filtering, limiting, and pagination to query the various entities on the grid, like nodes, contracts and farms. Additionally, the proxy can contact the required twin ID to retrieve stats about the relevant objects and perform ZOS calls.

The proxy is used as the backend of several ThreeFold projects, like:

- [Dashboard](../../dashboard/dashboard.md)

## How to Use the Project

If you don't want to bother with setting up your own instance, you can use one of the live instances. Each works against a different TFChain network.

- Dev network:
  - Swagger:
- QA network:
  - Swagger:
- Test network:
  - Swagger:
- Main network:
  - Swagger:

Or follow the [development guide](#start-for-development) to run your own.
By default, the instance runs against devnet; to change that, you will need to configure the network while running the server.

> Note: You may face some differences between the instances. That is normal, because each network is in a different stage of development and works correctly with the other parts of the Grid on the same network.

## Used Technologies & Prerequisites

1. **GoLang**: The two main parts of the project are written in `Go 1.17`; alternatively, you can just download the compiled binaries from the github [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases)
2. **Postgresql**: Used to load the TFGrid DB
3. **Docker**: Containerizes the running services such as Postgres and Redis.
4.
**Mnemonics**: Secret seeds for a dummy identity to use for the relay client.

For more about the prerequisites and how to set up and configure them, follow the [Setup guide](./setup.md).

## Start for Development

To start the services for development or testing, first make sure you have all the [Prerequisites](#used-technologies--prerequisites).

- Clone this repo:

  ```bash
  git clone https://github.com/threefoldtech/tfgrid-sdk-go.git
  cd tfgrid-sdk-go/grid-proxy
  ```

- The `Makefile` has all that you need to deal with the DB, the explorer, tests, and docs.

  ```bash
  make help # list all the available subcommands.
  ```

- For a quick test of the explorer server:

  ```bash
  make all-start e=
  ```

  Now you can access the server at `http://localhost:8080`
- Run the tests:

  ```bash
  make test-all
  ```

- Generate docs:

  ```bash
  make docs
  ```

To run in a development environment, see [here](./db_testing.md) how to generate a test db or load a db dump, then use:

```sh
go run cmds/proxy_server/main.go --address :8080 --log-level debug -no-cert --postgres-host 127.0.0.1 --postgres-db tfgrid-graphql --postgres-password postgres --postgres-user postgres --mnemonics
```

Then visit `http://localhost:8080/`

For more illustrations of the commands needed to work on the project, see the section [Commands](./commands.md). For more info about the project structure and contribution guidelines, check the section [Contributions](./contributions.md).

## Setup for Production

## Get and Install the Binary

- You can either build the project:

  ```bash
  make build
  chmod +x cmd/proxy_server/server \
    && mv cmd/proxy_server/server /usr/local/bin/gridproxy-server
  ```

- Or download a release:
  Check the [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) page and edit the next command with the chosen version.
  ```bash
  wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v1.6.7-rc2/tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \
    && tar -xzf tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \
    && chmod +x server \
    && mv server /usr/local/bin/gridproxy-server
  ```

## Add as a Systemd Service

- Create the service file:

  ```bash
  cat << EOF > /etc/systemd/system/gridproxy-server.service
  [Unit]
  Description=grid proxy server
  After=network.target

  [Service]
  ExecStart=gridproxy-server --domain gridproxy.dev.grid.tf --email omar.elawady.alternative@gmail.com -ca https://acme-v02.api.letsencrypt.org/directory --substrate wss://tfchain.dev.grid.tf/ws --postgres-host 127.0.0.1 --postgres-db db --postgres-password password --postgres-user postgres --mnemonics
  Type=simple
  Restart=always
  User=root
  Group=root

  [Install]
  WantedBy=multi-user.target
  Alias=gridproxy.service
  EOF
  ```

diff --git a/collections/manual/documentation/developers/proxy/proxy_readme.md b/collections/manual/documentation/developers/proxy/proxy_readme.md

# Grid Proxy

Welcome to the *Grid Proxy* section of the TFGrid Manual!

In this comprehensive guide, we delve into the intricacies of the ThreeFold Grid Proxy, a fundamental component that empowers the ThreeFold Grid ecosystem.

This section is designed to provide users, administrators, and developers with a detailed understanding of the TFGrid Proxy, offering step-by-step instructions for its setup, essential commands, and insights into its various functionalities.

The Grid Proxy plays a pivotal role in facilitating secure and efficient communication between nodes within the ThreeFold Grid, contributing to the decentralized and autonomous nature of the network.

Whether you are a seasoned ThreeFold enthusiast or a newcomer exploring the decentralized web, this manual aims to be your go-to resource for navigating the ThreeFold Grid Proxy landscape.

To assist you on your journey, we have organized the content into distinct chapters below, covering everything from initial setup procedures and database testing to practical commands, contributions, and insights into the ThreeFold Explorer and the Grid Proxy Database functionalities.

## Table of Contents

- [Introducing Grid Proxy](./proxy.md)
- [Setup](./setup.md)
- [DB Testing](./db_testing.md)
- [Commands](./commands.md)
- [Contributions](./contributions.md)
- [Explorer](./explorer.md)
- [Database](./database.md)
- [Production](./production.md)
- [Release](./release.md)

diff --git a/collections/manual/documentation/developers/proxy/release.md b/collections/manual/documentation/developers/proxy/release.md

# Release Grid-Proxy


## Table of Contents

- [Introduction](#introduction)
- [Steps](#steps)
- [Debugging](#debugging)

***

## Introduction

We show the steps to release a new version of the Grid Proxy.

## Steps

To release a new version of the Grid-Proxy component, follow these steps:

1. Update the `appVersion` field in the `charts/Chart.yaml` file. This field should reflect the new version number of the release.
2. The release process includes generating and pushing a Docker image with the latest GitHub tag. This step is automated through the `gridproxy-release.yml` workflow.
3. Trigger the `gridproxy-release.yml` workflow by pushing the desired tag to the repository. This will initiate the workflow, which will generate the Docker image based on the tag and push it to the appropriate registry.

## Debugging

In the event that the workflow does not run automatically after pushing the tag and making the release, you can execute it manually using the GitHub Actions interface. Follow these steps:

1. Go to the [GitHub Actions page](https://github.com/threefoldtech/tfgrid-sdk-go/actions/workflows/gridproxy-release.yml) for the Grid-Proxy repository.
2. Locate the workflow named `gridproxy-release.yml`.
3. Trigger the workflow manually by selecting the "Run workflow" option.

diff --git a/collections/manual/documentation/developers/proxy/setup.md b/collections/manual/documentation/developers/proxy/setup.md

# Setup


## Table of Contents

- [Introduction](#introduction)
- [Install Golang](#install-golang)
- [Docker](#docker)
- [Postgres](#postgres)
- [Get Mnemonics](#get-mnemonics)

***

## Introduction

We show how to set up the grid proxy.

## Install Golang

To install Golang, you can follow the official [guide](https://go.dev/doc/install).

## Docker

Docker is useful for running the TFGridDB in a container environment. Read this to [install Docker engine](../../system_administrators/computer_it_basics/docker_basics.md#install-docker-desktop-and-docker-engine).

Note: to run Docker without sudo, it is necessary to follow step #2 in the previous article. If you want to avoid that, edit the docker commands in the `Makefile` and add `sudo`.

## Postgres

If you have Docker installed, you can run postgres in a container with:

```bash
make db-start
```

Then you can either load a dump of the database, if you have one:

```bash
make db-dump p=~/dump.sql
```

Or, more easily, fill the database tables with randomly generated data using the script `tools/db/generate.go`. To do that, run:

```bash
make db-fill
```

## Get Mnemonics

1. Install the [polkadot extension](https://github.com/polkadot-js/extension) in your browser.
2. Create a new account from the extension. It is important to save the seeds.

diff --git a/collections/manual/documentation/developers/tfchain/dev_tfchain.md b/collections/manual/documentation/developers/tfchain/dev_tfchain.md

# ThreeFold Chain


## Table of Contents

- [Introduction](#introduction)
- [Twins](#twins)
- [Farms](#farms)
- [Nodes](#nodes)
- [Node Contract](#node-contract)
- [Rent Contract](#rent-contract)
- [Name Contract](#name-contract)
- [Contract billing](#contract-billing)
- [Contract locking](#contract-locking)
- [Contract grace period](#contract-grace-period)
- [DAO](#dao)
- [Farming Policies](#farming-policies)
- [Node Connection price](#node-connection-price)
- [Node Certifiers](#node-certifiers)

***

## Introduction

ThreeFold Chain (TFChain) is the base layer for everything that interacts with the grid. Nodes, farms and users are registered on the chain. It plays the central role in achieving decentralised consensus between a user and a node to deploy a certain workload. A contract can be created on the chain that is essentially an agreement between a node and a user.

## Twins

A twin is the central identity object that is used for every entity that lives on the grid. A twin optionally has an IPv6 planetary network address, which can be used for communication between twins regardless of their location. A twin is coupled to a private/public keypair on chain. This keypair can hold TFT on TFChain.

## Farms

A farm must be created before a node can be booted. Every farm needs a unique name and is linked to the twin that creates it. Once a farm is created, a unique ID is generated. This ID can be provided to the boot image of a node.

## Nodes

When a node is booted for the first time, it registers itself on the chain and a unique identity is generated for it.

## Node Contract

A node contract is a contract between a user and a node to deploy a certain workload.
The contract is specified as follows:

```
{
  "contract_id": auto generated,
  "node_id": unique id of the node,
  "deployment_data": some additional deployment data,
  "deployment_hash": hash of the deployment definition signed by the user,
  "public_ips": number of public ips to attach to the deployment contract
}
```

We don't save the raw workload definition on the chain, but only a hash of the definition. After the contract is created, the user must send the raw deployment to the node specified in the contract. The user can find where to send this data by looking up the node's twin and contacting that twin over the planetary network.

## Rent Contract

A rent contract is also a contract between a user and a node, but instead of reserving a part of the node's capacity, the full capacity is rented. Once a rent contract is created on a node by a user, only this user can deploy node contracts on that specific node. A discount of 50% is given if the user wishes to rent the full capacity of a node by creating a rent contract. All node contracts deployed on a node where a user has a rent contract are free of charge, except for the public IPs, which can be added on a node contract.

## Name Contract

A name contract is a contract that specifies a unique name to be used on the grid's web gateways. Once a name contract is created, this name can be used as an entrypoint for an application on the grid.

## Contract billing

Every contract is billed every hour on the chain; the amount that is due is deducted from the user's wallet every 24 hours or when the user cancels the contract.
The total amount accrued in those 24 hours is sent to the following destinations:

- 10% goes to the ThreeFold foundation
- 5% goes to a staking pool wallet (to be implemented in a later phase)
- 50% goes to a certified sales channel
- 35% of the TFT gets burned

See [pricing](../../../knowledge_base/cloud/pricing/pricing.md) for more information on how the cost for a contract is calculated.

## Contract locking

To avoid overloading the chain with transfer events and the like, the amount due for a contract is locked every hour, and after 24 hours the amount is unlocked and deducted in one go. This lock is saved on a user's account; if the user has multiple contracts, the locked amounts are stacked.

## Contract grace period

When the owner of a contract runs out of funds on his wallet to pay for his deployment, the contract goes into a Grace Period state. The deployment, whatever that might be, will be inaccessible to the user during this period. When the wallet is funded with TFT again, the contract goes back to a normal operating state. If the grace period runs out (by default 2 weeks), the user's deployment and data will be deleted from the node.

## DAO

See [DAO](../../dashboard/tfchain/tf_dao.md) for more information on the DAO on TFChain.

## Farming Policies

See [farming_policies](farming_policies.md) for more information on the farming policies on TFChain.

## Node Connection price

A connection price is set for every new node that boots on the grid. This connection price influences the amount of TFT farmed in a period. The connection price set on a node is permanent. The DAO can propose to increase or decrease the connection price. At the time of writing, the connection price is set to $0.08. When the DAO proposes a new connection price and the vote passes, new nodes will attach to the new connection price.

## Node Certifiers

Node certifiers are entities who are allowed to set a node's certification level to `Certified`.
The DAO can propose to add / remove entities that can certify nodes. This is useful for allowing approved resellers of ThreeFold nodes to mark nodes as Certified. A certified node farms 25% more tokens than a `Diy` node.

diff --git a/collections/manual/documentation/developers/tfchain/farming_policies.md b/collections/manual/documentation/developers/tfchain/farming_policies.md

# Farming Policies


## Table of Contents

- [Introduction](#introduction)
- [Farming Policy Fields](#farming-policy-fields)
- [Limits on linked policy](#limits-on-linked-policy)
- [Creating a Policy](#creating-a-policy)
- [Linking a policy to a Farm](#linking-a-policy-to-a-farm)

***

## Introduction

A farming policy defines how farming rewards are handed out for nodes. Every node has a farming policy attached. A farming policy is either linked to a farm, in which case new nodes are given the farming policy of the farm they are in once they register themselves, or it can be a "default". Defaults are not attached to a farm; instead, they are used for nodes registered in farms which don't have a farming policy. Multiple defaults can exist at the same time, and the most fitting one should be chosen.

## Farming Policy Fields

A farming policy has the following fields:

- id (used to link policies)
- name
- Default. This indicates if the policy can be used by any new node (if the parent farm does not have a dedicated attached policy). Essentially, a `Default` policy serves as a base which can be overridden per farm by linking a non-default policy to said farm.
- Reward in TFT per CU, SU, NU and IPv4
- Minimal uptime needed, as an integer (example: 995)
- Policy end (after this block number the policy can not be linked to new farms any more)
- Whether this policy is immutable or not. Immutable policies can never be changed again.

Additionally, we also use the following fields, though these are only useful for `Default` farming policies:

- Node needs to be certified
- Farm needs to be certified (with a certification level, which will be changed to an enum)

In case a farming policy is not attached to a farm, new nodes will pick the most appropriate farming policy from the default ones. To decide which one to pick, they should be considered in order, most restrictive first, until one matches.
That means: + +- First check for the policy with the highest farming certification (in the current case gold) and certified nodes +- Then check for a policy with the highest farming certification (in the current case gold) and non-certified nodes +- Check for a policy without farming certification but certified nodes +- Last, check for a policy without any kind of certification + +Important here is that certification of a node only happens after it comes live for the first time. As such, when a node gets certified, its farming policy needs to be re-evaluated, but only if the currently attached farming policy on the node is a `Default` policy (as specifically linked policies have priority over default ones). When evaluating again, we first consider if we are eligible for the farming policy linked to the farm, if any. + +## Limits on linked policy + +When a council member attaches a policy to a farm, limits can be set. These limits define how much a policy can be used for nodes before it becomes unusable and gets removed. The limits currently are: + +- Farming Policy ID: the ID of the farming policy which we want to limit to a farm. +- CU. Every time a node is added to the farm, its CU is calculated and deducted from this amount. If the amount drops below 0, the maximum amount of CU that can be attached to this policy is reached. +- SU. Every time a node is added to the farm, its SU is calculated and deducted from this amount. If the amount drops below 0, the maximum amount of SU that can be attached to this policy is reached. +- End date. After this date the policy is not effective anymore and can't be used. It is removed from the farm and a default policy is used. +- Certification. If set, only certified nodes can get this policy. Non-certified nodes get a default policy. + +Once a limit is reached, the farming policy is removed from the farm, so new nodes will get one of the default policies until a new policy is attached to the farm. 
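When nodes fall back to default policies, the matching order described above can be sketched in code. This is an illustrative sketch only (the real logic lives in the TFChain runtime); the policy dictionaries and the flag names `requires_farm_cert` and `requires_node_cert` are hypothetical:

```python
def pick_default_policy(policies, farm_certified: bool, node_certified: bool):
    """Return the most restrictive default policy the node is eligible for.

    Order: certified farm + certified node, certified farm only,
    certified node only, no certification at all.
    """
    # Sort most restrictive first, as described above.
    ordered = sorted(
        policies,
        key=lambda p: (p["requires_farm_cert"], p["requires_node_cert"]),
        reverse=True,
    )
    for p in ordered:
        if p["requires_farm_cert"] and not farm_certified:
            continue  # policy demands a certified farm, node's farm is not
        if p["requires_node_cert"] and not node_certified:
            continue  # policy demands a certified node
        return p
    return None

# A certified node in a non-certified farm falls through to the
# "certified nodes, no farm certification" policy.
policies = [
    {"name": "gold-certified", "requires_farm_cert": True, "requires_node_cert": True},
    {"name": "certified", "requires_farm_cert": False, "requires_node_cert": True},
    {"name": "default", "requires_farm_cert": False, "requires_node_cert": False},
]
print(pick_default_policy(policies, farm_certified=False, node_certified=True)["name"])  # certified
```

Since certification only happens after a node first comes live, the same selection would simply be run again at that point, as described above.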
+ +## Creating a Policy + +A council member can create a Farming Policy (DAO) in the following way: + +1: Open [PolkadotJS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) apps on the corresponding network and go to `Extrinsics` +2: Now select the account to propose from (should be an account that's a council member). +3: Select as action `dao` -> `propose` +4: Set a `threshold` (amount of farmers to vote) +5: Select the action `tfgridModule` -> `createFarmingPolicy` and fill in all the fields. +6: Create a forum post with the details of the farming policy and fill in the link of that post in the `link` field +7: Give it a good `description`. +8: Duration is optional (by default it's 7 days). A proposal cannot be closed before the duration has expired. If you wish to set a duration, it should be expressed in number of blocks from `now`. For example, 2 hours is equal to 1200 blocks (block time is 6 seconds); in this case, the duration should be filled in as `1200`. +9: If all the fields are filled in, click `Propose`; now farmers can vote. A proposal can be closed manually once there are enough votes AND the proposal has expired. To close, go to extrinsics -> `dao` -> `close` -> fill in the proposal hash and index (both can be found in chainstate). + +All (su, cu, nu, ipv4) values should be expressed in USD units. Minimal uptime should be expressed as an integer that represents a percentage (example: `95`). + +Policy end is optional (0 or some block number in the future). This is used for expiration. + +For reference: + +![image](./img/create_policy.png) + +## Linking a policy to a Farm + +First identify the policy ID to link to a farm. You can check for farming policies in [chainstate](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/chainstate) -> `tfgridModule` -> `farmingPoliciesMap`: start with ID 1 and increment by 1 until you find the farming policy which was created when the proposal expired and was closed. 
+ +1: Open [PolkadotJS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) apps on the corresponding network and go to `Extrinsics` +2: Now select the account to propose from (should be an account that's a council member). +3: Select as action `dao` -> `propose` +4: Set a `threshold` (amount of farmers to vote) +5: Select the action `tfgridModule` -> `attachPolicyToFarm` and fill in all the fields (FarmID and Limits). +6: Limits contains a `farming_policy_id` (required) and cu, su, end, node count (which are all optional). It also contains `node_certification`; if this is set to true, only certified nodes can have this policy. +7: Create a forum post with the details of why we want to link that farm to that policy and fill in the link of that post in the `link` field +8: Give it a good `description`. +9: Duration is optional (by default it's 7 days). A proposal cannot be closed before the duration has expired. If you wish to set a duration, it should be expressed in number of blocks from `now`. For example, 2 hours is equal to 1200 blocks (block time is 6 seconds); in this case, the duration should be filled in as `1200`. +10: If all the fields are filled in, click `Propose`; now farmers can vote. A proposal can be closed manually once there are enough votes AND the proposal has expired. To close, go to extrinsics -> `dao` -> `close` -> fill in the proposal hash and index (both can be found in chainstate). 
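The duration in step 9 is expressed in blocks rather than wall-clock time. Assuming the 6-second block time mentioned above, the conversion is simple arithmetic; a small illustrative helper:

```python
def duration_in_blocks(hours: float, block_time_seconds: int = 6) -> int:
    """Convert a proposal duration in hours to the block count the extrinsic expects."""
    return int(hours * 3600 // block_time_seconds)

# 2 hours at 6 seconds per block = 1200 blocks, matching the example in step 9
print(duration_in_blocks(2))  # 1200
```

The default 7-day duration works out to `duration_in_blocks(7 * 24)`, i.e. 100800 blocks.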
+ +For reference: + +![image](./img/attach.png) diff --git a/collections/manual/documentation/developers/tfchain/img/attach.png b/collections/manual/documentation/developers/tfchain/img/attach.png new file mode 100644 index 0000000..96e3c5f Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/attach.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/close_proposal.png b/collections/manual/documentation/developers/tfchain/img/close_proposal.png new file mode 100644 index 0000000..07e66a2 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/close_proposal.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/create_contract.png b/collections/manual/documentation/developers/tfchain/img/create_contract.png new file mode 100644 index 0000000..f082e80 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/create_contract.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/create_policy.png b/collections/manual/documentation/developers/tfchain/img/create_policy.png new file mode 100644 index 0000000..fa344e7 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/create_policy.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/create_provider.png b/collections/manual/documentation/developers/tfchain/img/create_provider.png new file mode 100644 index 0000000..e8668a2 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/create_provider.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/propose_approve.png b/collections/manual/documentation/developers/tfchain/img/propose_approve.png new file mode 100644 index 0000000..667f66f Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/propose_approve.png differ diff --git 
a/collections/manual/documentation/developers/tfchain/img/proposed_approve.png b/collections/manual/documentation/developers/tfchain/img/proposed_approve.png new file mode 100644 index 0000000..5202c4c Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/proposed_approve.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/query_provider.png b/collections/manual/documentation/developers/tfchain/img/query_provider.png new file mode 100644 index 0000000..de66d4c Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/query_provider.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_approve.png b/collections/manual/documentation/developers/tfchain/img/service_contract_approve.png new file mode 100644 index 0000000..1e0d034 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_approve.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_bill.png b/collections/manual/documentation/developers/tfchain/img/service_contract_bill.png new file mode 100644 index 0000000..55e84fe Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_bill.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_cancel.png b/collections/manual/documentation/developers/tfchain/img/service_contract_cancel.png new file mode 100644 index 0000000..7669510 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_cancel.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_create.png b/collections/manual/documentation/developers/tfchain/img/service_contract_create.png new file mode 100644 index 0000000..69ec62a Binary files /dev/null and 
b/collections/manual/documentation/developers/tfchain/img/service_contract_create.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_id.png b/collections/manual/documentation/developers/tfchain/img/service_contract_id.png new file mode 100644 index 0000000..e49c396 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_id.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_reject.png b/collections/manual/documentation/developers/tfchain/img/service_contract_reject.png new file mode 100644 index 0000000..6235530 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_reject.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_set_fees.png b/collections/manual/documentation/developers/tfchain/img/service_contract_set_fees.png new file mode 100644 index 0000000..6cfa91a Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_set_fees.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_set_metadata.png b/collections/manual/documentation/developers/tfchain/img/service_contract_set_metadata.png new file mode 100644 index 0000000..e472145 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_set_metadata.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_state.png b/collections/manual/documentation/developers/tfchain/img/service_contract_state.png new file mode 100644 index 0000000..e824552 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_state.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/service_contract_twin_from_account.png 
b/collections/manual/documentation/developers/tfchain/img/service_contract_twin_from_account.png new file mode 100644 index 0000000..293bad2 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/service_contract_twin_from_account.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/tf.png b/collections/manual/documentation/developers/tfchain/img/tf.png new file mode 100644 index 0000000..528b5d9 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/tf.png differ diff --git a/collections/manual/documentation/developers/tfchain/img/vote_proposal.png b/collections/manual/documentation/developers/tfchain/img/vote_proposal.png new file mode 100644 index 0000000..16111a0 Binary files /dev/null and b/collections/manual/documentation/developers/tfchain/img/vote_proposal.png differ diff --git a/collections/manual/documentation/developers/tfchain/introduction.md b/collections/manual/documentation/developers/tfchain/introduction.md new file mode 100644 index 0000000..a983b68 --- /dev/null +++ b/collections/manual/documentation/developers/tfchain/introduction.md @@ -0,0 +1,57 @@ +

ThreeFold Chain

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Deployed instances](#deployed-instances) +- [Create a TFChain twin](#create-a-tfchain-twin) +- [Get your twin ID](#get-your-twin-id) + +*** + +## Introduction + +ThreeFold blockchain (aka TFChain) serves as a registry for Nodes, Farms, Digital Twins and Smart Contracts. +It is the backbone of [ZOS](https://github.com/threefoldtech/zos) and other components. + +## Deployed instances + +- Development network (Devnet): + + - Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.dev.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.dev.grid.tf#/explorer) + - Websocket url: `wss://tfchain.dev.grid.tf` + - GraphQL UI: [https://graphql.dev.grid.tf/graphql](https://graphql.dev.grid.tf/graphql) + +- QA testing network (QAnet): + + - Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.qa.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.qa.grid.tf#/explorer) + - Websocket url: `wss://tfchain.qa.grid.tf` + - GraphQL UI: [https://graphql.qa.grid.tf/graphql](https://graphql.qa.grid.tf/graphql) + +- Test network (Testnet): + + - Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.test.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.test.grid.tf#/explorer) + - Websocket url: `wss://tfchain.test.grid.tf` + - GraphQL UI: [https://graphql.test.grid.tf/graphql](https://graphql.test.grid.tf/graphql) + +- Production network (Mainnet): + + - Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.grid.tf#/explorer) + - Websocket url: `wss://tfchain.grid.tf` + - GraphQL UI: [https://graphql.grid.tf/graphql](https://graphql.grid.tf/graphql) + +## Create a TFChain twin + +A twin is a unique identifier linked to a specific account on a given TFChain network. 
+There are two ways to create a twin: + +- With the [Dashboard](../../dashboard/wallet_connector.md) + - a twin is automatically generated while creating a TFChain account +- With the TFConnect app + - a twin is automatically generated while creating a farm (in this case the twin will be created on mainnet) + +## Get your twin ID + +You can retrieve the twin ID associated with your account by going to `Developer` -> `Chain state` -> `tfgridModule` -> `twinIdByAccountID()`. + +![service_contract_twin_from_account](img/service_contract_twin_from_account.png) diff --git a/collections/manual/documentation/developers/tfchain/tfchain_external_service_contract.md b/collections/manual/documentation/developers/tfchain/tfchain_external_service_contract.md new file mode 100644 index 0000000..992186a --- /dev/null +++ b/collections/manual/documentation/developers/tfchain/tfchain_external_service_contract.md @@ -0,0 +1,142 @@ +

External Service Contract: How to set and execute

+

Table of Contents

+ +- [Introduction](#introduction) +- [Step 1: Create the contract and get its unique ID](#step-1-create-contract--get-unique-id) +- [Step 2: Fill contract](#step-2-fill-contract) +- [Step 3: Both parties approve contract](#step-3-both-parties-approve-contract) +- [Step 4: Bill for the service](#step-4-bill-for-the-service) +- [Step 5: Cancel the contract](#step-5-cancel-the-contract) + +*** + + +# Introduction + +It is now possible to create a generic contract between two TFChain users (without restriction of account type) for some external service and bill for it. + +The initial scenario is when two parties, a service provider and a consumer of the service, want to use TFChain to automatically handle the billing/payment process for an agreement (in TFT) they want to make for a service which is external to the grid. +This is a more direct and generic feature compared to the initial rewarding model, where a service provider (or solution provider) receives TFT from a rewards distribution process, linked to a node contract and based on cloud capacity consumption, which follows specific billing rules. + +The initial requirements are: +- Both the service provider and the consumer need to have their respective twin created on TFChain (if not, see [here](tfchain.md#create-a-tfchain-twin) how to do it) +- The consumer account needs to be funded (lack of funds will simply result in the contract being canceled when billed) + +In the following steps we detail the sequence of extrinsics that need to be called in the TFChain Polkadot portal for setting up and executing such a contract. + +Make sure to use the right [links](tfchain.md#deployed-instances) depending on the targeted network. + + +# Step 1: Create contract / Get unique ID + +## Create service contract + +The contract creation can be initiated by either the service provider or the consumer. 
+In the TFChain Polkadot portal, the one who initiates the contract should go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractCreate()`, using the account they intend to use in the contract, and select the corresponding service and consumer accounts before submitting the transaction. + +![service_contract_create](img/service_contract_create.png) + +Once executed, the service contract is `Created` between the two parties and a unique ID is generated. + +## Last service contract ID + +To get the last generated service contract ID, go to `Developer` -> `Chain state` -> `smartContractModule` -> `serviceContractID()`. + +![service_contract_id](img/service_contract_id.png) + +## Parse service contract + +To get the corresponding contract details, go to `Developer` -> `Chain state` -> `smartContractModule` -> `serviceContracts()` and provide the contract ID. +You should see the following details: + +![service_contract_state](img/service_contract_state.png) + +Check that the contract fields are correct, especially the twin IDs of both the service provider and the consumer, to be sure you have the right contract ID, referenced as `serviceContractId`. + +## Wrong contract ID? + +If the twin IDs on the service contract fields are wrong ([how to get my twin ID?](tfchain.md#get-your-twin-id)), it means the contract does not correspond to the last created contract. +In this case, parse the last contracts on the stack by decreasing `serviceContractId` and try to identify the right one; or the contract was simply not created, in which case you should repeat the creation process and evaluate the error log. + + +# Step 2: Fill contract + +Once created, the service contract must be filled with its relative `per hour` fees: +- `baseFee` is the constant "per hour" price (in TFT) for the service. +- `variableFee` is the maximum "per hour" amount (in TFT) that can be billed extra. 
+ +To provide these values (only the service provider can set fees), go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractSetFees()` specifying `serviceContractId`. + +![service_contract_set_fees](img/service_contract_set_fees.png) + +Some metadata (the description of the service, for example) must be filled in a similar way (`Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractSetMetadata()`). +In this case, either the service provider or the consumer can set metadata. + +![service_contract_set_metadata](img/service_contract_set_metadata.png) + +The agreement will automatically be considered `Ready` when both metadata and fees are set (`metadata` not empty and `baseFee` greater than zero). +Note that as long as this condition is not reached, both extrinsics can still be called to modify the agreement. +You can check the contract status at each step of the flow by parsing it as shown [here](#parse-service-contract). + + +# Step 3: Both parties approve contract + +Now that the agreement is ready, the contract can be submitted for approval. +To approve the agreement, go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractApprove()` specifying `serviceContractId`. + +![service_contract_approve](img/service_contract_approve.png) + +To reject the agreement, go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractReject()` specifying `serviceContractId`. + +![service_contract_reject](img/service_contract_reject.png) + +The contract needs to be explicitly `Approved` by both the service provider and the consumer to be ready for billing. +Before reaching this state, if one of the parties decides to call the rejection extrinsic, it will instantly lead to the cancellation of the contract (and its permanent removal). + + +# Step 4: Bill for the service + +Once the contract is accepted by both parties, it can be billed. 
+ +## Send bill to consumer + +Only the service provider can bill the consumer, going to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractBill()` specifying `serviceContractId` and billing information such as `variableAmount` and some `metadata`. + +![service_contract_bill](img/service_contract_bill.png) + +## Billing frequency + +⚠️ Important: because a service should not charge the user if it doesn't work, it is required that bills be sent at intervals of less than 1 hour. +Any bigger interval will result in a bill bounded to 1 hour (in other words, extra time will not be billed). +It is the service provider's responsibility to bill at the right frequency! + +## Amount due calculation + +When the bill is received, the chain calculates the bill amount based on the agreement values as follows: + +~~~ +amount = baseFee * T / 3600 + variableAmount +~~~ + +where `T` is the elapsed time, in seconds and bounded by 3600 (see [above](#billing-frequency)), since the last effective billing operation occurred. + +## Protection against draining + +Note that if `variableAmount` is too high (i.e. `variableAmount > variableFee * T / 3600`) the billing extrinsic will fail. +The `variableFee` value in the contract is interpreted as being "per hour" and acts as a protection mechanism to avoid draining the consumer. +Indeed, while it is technically possible for the service provider to send a bill every second, there would be no gain in doing so (other than uselessly overloading the chain). +So it is also the service provider's responsibility to set a suitable `variableAmount` according to the billing frequency! + +## Billing considerations + +If all goes well and no error is dispatched after submitting the transaction, the consumer pays the due amount calculated from the bill (see calculation details [above](#amount-due-calculation)). +In practice, the amount is transferred from the consumer twin account to the service twin account. 
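The amount-due formula and the draining protection can be put together in a short sketch (illustrative only; the actual check and the TFT transfer happen on chain):

```python
def bill_amount(base_fee: float, variable_fee: float,
                variable_amount: float, elapsed_seconds: int) -> float:
    """Compute the amount due for one bill: amount = baseFee * T / 3600 + variableAmount."""
    t = min(elapsed_seconds, 3600)  # extra time beyond 1 hour is not billed
    if variable_amount > variable_fee * t / 3600:
        # mirrors the on-chain protection: the billing extrinsic would fail
        raise ValueError("variableAmount exceeds variableFee * T / 3600")
    return base_fee * t / 3600 + variable_amount

# a full hour at baseFee = 10 TFT with 2 TFT extra (variableFee cap = 5 TFT per hour)
print(bill_amount(10, 5, 2, 3600))  # 12.0
```

Note how an `elapsed_seconds` of, say, 7200 is clamped to 3600 (only one hour is billed), and how a `variableAmount` above the pro-rated `variableFee` cap is rejected.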
+Be aware that if the consumer is out of funds the billing will fail AND the contract will automatically be canceled. + + +# Step 5: Cancel the contract + +At any moment after the contract is created, it can be canceled (and definitively removed). +Only the service provider or the consumer can do this, going to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractCancel()` specifying `serviceContractId`. + +![service_contract_cancel](img/service_contract_cancel.png) diff --git a/collections/manual/documentation/developers/tfchain/tfchain_solution_provider.md b/collections/manual/documentation/developers/tfchain/tfchain_solution_provider.md new file mode 100644 index 0000000..c027d16 --- /dev/null +++ b/collections/manual/documentation/developers/tfchain/tfchain_solution_provider.md @@ -0,0 +1,81 @@ +

Solution Provider

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Changes to Contract Creation](#changes-to-contract-creation) +- [Creating a Provider](#creating-a-provider) +- [Council needs to approve a provider before it can be used](#council-needs-to-approve-a-provider-before-it-can-be-used) + +*** + +## Introduction + +> Note: While the solution provider program is still active, the plan is to discontinue the program in the near future. We will update the manual as we get more information. We currently do not accept new solution providers. + +A "solution" is something running on the grid, created by a community member. This can be brought forward to the council, who can vote on it to recognize it as a solution. On contract creation, a recognized solution can be referenced, in which case part of the payment goes toward the address coupled to the solution. On chain, a solution looks as follows: + +- Description (should be some text, limited in length. The limit should be rather low; if a longer one is desired, a link can be inserted. 160 characters should be enough.) +- Up to 5 payout addresses, each with a payout percentage. This is the percentage of the payout received by the associated address. The amount is deducted from the payout to the treasury and specified as a percentage of the total contract cost. As such, the sum of these percentages can never exceed 50%. If this value is not 50%, the remainder is paid to the treasury. Example: 10% payout percentage to addr 1, 5% payout to addr 2. This means 15% goes to the 2 listed addresses combined and 35% goes to the treasury (instead of the usual 50%). The rest remains as is. If the cost is 10 TFT, 1 TFT goes to address 1, 0.5 TFT goes to address 2, and 3.5 TFT goes to the treasury, instead of the default 5 TFT to the treasury +- A unique code. This code is used to link a solution to the contract (numeric ID). + +This means contracts need to carry an optional solution code. 
If the code is not specified (default), the 50% goes entirely to the treasury (as is always the case today). + +A solution can be created by calling the extrinsic `smartContractModule` -> `createSolutionProvider` with parameters: + +- description +- link (to website) +- list of providers + +Provider: + +- who (account id) +- take (the percentage this account should get), specified as an integer with a maximum of 50 (example: 25) + +A forum post should be created with the details of the created solution provider; the DAO can vote to approve this or not. If the solution provider gets approved, it can be referenced on contract creation. + +Note that a solution can be deleted. In this case, existing contracts should fall back to the default behavior (i.e. if the code is not found -> default). + +## Changes to Contract Creation + +When creating a contract, a `solution_provider_id` can be passed. An error will be returned if an invalid or non-approved solution provider id is passed. + +## Creating a Provider + +Creating a provider is as easy as going to the [polkadotJS UI](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/extrinsics) (currently only on devnet). + +Select module `SmartContractModule` -> `createSolutionProvider(..)` + +Fill in all the details; you can specify up to 5 target accounts which can have a take of the TFT generated from being a provider, up to a total maximum of 50%. `Take` should be specified as an integer, example (`25`). 
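The payout split described in the introduction can be sketched as follows (illustrative only; account addresses and the exact on-chain rounding are not shown):

```python
def payout_split(total_cost: float, takes: list[float]) -> tuple[list[float], float]:
    """Split a contract payment between solution providers and the treasury.

    takes are per-provider percentages of the total contract cost; their sum
    may not exceed 50, and whatever remains of the 50% share goes to the treasury.
    """
    if len(takes) > 5 or sum(takes) > 50:
        raise ValueError("at most 5 providers, with a combined take of at most 50%")
    payouts = [total_cost * take / 100 for take in takes]
    treasury = total_cost * 0.50 - sum(payouts)
    return payouts, treasury

# the example above: a 10 TFT contract with 10% to addr 1 and 5% to addr 2
payouts, treasury = payout_split(10, [10, 5])
print(payouts, treasury)  # [1.0, 0.5] 3.5
```

With no providers attached (or no solution code on the contract), the full 50% share, here 5 TFT, goes to the treasury, matching the default behavior described above.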
+ +Once this object is created, a forum post should be created here: + +![create](./img/create_provider.png) + +## Council needs to approve a provider before it can be used + +First propose the solution to be approved: + +![propose_approve](./img/propose_approve.png) + +After submission it should look like this: + +![proposed_approved](./img/proposed_approve.png) + +Now another member of the council needs to vote: + +![vote](./img/vote_proposal.png) + +After enough votes are reached, it can be closed: + +![close](./img/close_proposal.png) + +If the close was executed without error, the solution is approved and ready to be used. + +Query the solution: `chainstate` -> `SmartContractModule` -> `solutionProviders` + +![query](./img/query_provider.png) + +Now the solution provider can be referenced on contract creation: + +![create](./img/create_contract.png) diff --git a/collections/manual/documentation/developers/tfcmd/tfcmd.md b/collections/manual/documentation/developers/tfcmd/tfcmd.md new file mode 100644 index 0000000..daa502a --- /dev/null +++ b/collections/manual/documentation/developers/tfcmd/tfcmd.md @@ -0,0 +1,15 @@ +

TFCMD

+ +TFCMD (`tfcmd`) is a command line interface for interacting and developing on the ThreeFold Grid. + +Consult the [ThreeFoldTech TFCMD repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/grid-cli) for the latest updates. Make sure to read the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md). + +

Table of Contents

+ +- [Getting Started](./tfcmd_basics.md) +- [Deploy a VM](./tfcmd_vm.md) +- [Deploy Kubernetes](./tfcmd_kubernetes.md) +- [Deploy ZDB](./tfcmd_zdbs.md) +- [Gateway FQDN](./tfcmd_gateway_fqdn.md) +- [Gateway Name](./tfcmd_gateway_name.md) +- [Contracts](./tfcmd_contracts.md) \ No newline at end of file diff --git a/collections/manual/documentation/developers/tfcmd/tfcmd_basics.md b/collections/manual/documentation/developers/tfcmd/tfcmd_basics.md new file mode 100644 index 0000000..8816eea --- /dev/null +++ b/collections/manual/documentation/developers/tfcmd/tfcmd_basics.md @@ -0,0 +1,67 @@ +

TFCMD Getting Started

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Installation](#installation) +- [Login](#login) +- [Commands](#commands) +- [Using TFCMD](#using-tfcmd) + +*** + +## Introduction + +This section covers the basics on how to set up and use TFCMD (`tfcmd`). + +TFCMD is available as binaries. Make sure to download the latest release and to stay up to date with new releases. + +## Installation + +An easy way to use TFCMD is to download and extract the TFCMD binaries to your path. + +- Download the latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) + - ``` + wget + ``` +- Extract the binaries + - ``` + tar -xvf + ``` +- Move `tfcmd` to any `$PATH` directory: + ```bash + mv tfcmd /usr/local/bin + ``` + +## Login + +Before interacting with the ThreeFold Grid with `tfcmd`, you should log in with your mnemonics and specify the grid network: + +```console +$ tfcmd login +Please enter your mnemonics: +Please enter grid network (main,test): +``` + +This validates your mnemonics and stores them, along with the chosen network, in your default configuration directory. +Check [UserConfigDir()](https://pkg.go.dev/os#UserConfigDir) for your default configuration directory. + +## Commands + +You can run the command `tfcmd help` at any time to access the help section. This will also display the available commands. + +| Command | Description | +| ---------- | ---------------------------------------------------------- | +| cancel | Cancel resources on Threefold grid | +| completion | Generate the autocompletion script for the specified shell | +| deploy | Deploy resources to Threefold grid | +| get | Get a deployed resource from Threefold grid | +| help | Help about any command | +| login | Login with mnemonics to a grid network | +| version | Get latest build tag | + +## Using TFCMD + +Once you've logged in, you can use commands to deploy workloads on the TFGrid. Read the next sections for more information on the different types of workloads available with TFCMD. 
+ + diff --git a/collections/manual/documentation/developers/tfcmd/tfcmd_contracts.md b/collections/manual/documentation/developers/tfcmd/tfcmd_contracts.md new file mode 100644 index 0000000..bb14c5d --- /dev/null +++ b/collections/manual/documentation/developers/tfcmd/tfcmd_contracts.md @@ -0,0 +1,99 @@ +

Contracts

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Get](#get) + - [Get Contracts](#get-contracts) + - [Get Contract](#get-contract) +- [Cancel](#cancel) + - [Optional Flags](#optional-flags) + +*** + +## Introduction + +We explain how to handle contracts on the TFGrid with `tfcmd`. + +## Get + +### Get Contracts + +Get all contracts + +```bash +tfcmd get contracts +``` + +Example: + +```console +$ tfcmd get contracts +5:13PM INF starting peer session=tf-1184566 twin=81 +Node contracts: +ID Node ID Type Name Project Name +50977 21 network vm1network vm1 +50978 21 vm vm1 vm1 +50980 14 Gateway Name gatewaytest gatewaytest + +Name contracts: +ID Name +50979 gatewaytest +``` + +### Get Contract + +Get specific contract + +```bash +tfcmd get contract +``` + +Example: + +```console +$ tfcmd get contract 50977 +5:14PM INF starting peer session=tf-1185180 twin=81 +5:14PM INF contract: +{ + "contract_id": 50977, + "twin_id": 81, + "state": "Created", + "created_at": 1702480020, + "type": "node", + "details": { + "nodeId": 21, + "deployment_data": "{\"type\":\"network\",\"name\":\"vm1network\",\"projectName\":\"vm1\"}", + "deployment_hash": "21adc91ef6cdc915d5580b3f12732ac9", + "number_of_public_ips": 0 + } +} +``` + +## Cancel + +Cancel specified contracts or all contracts. + +```bash +tfcmd cancel contracts ... [Flags] +``` + +Example: + +```console +$ tfcmd cancel contracts 50856 50857 +5:17PM INF starting peer session=tf-1185964 twin=81 +5:17PM INF contracts canceled successfully +``` + +### Optional Flags + +- all: cancel all twin's contracts. 
+ +Example: + +```console +$ tfcmd cancel contracts --all +5:17PM INF starting peer session=tf-1185964 twin=81 +5:17PM INF contracts canceled successfully +``` \ No newline at end of file diff --git a/collections/manual/documentation/developers/tfcmd/tfcmd_gateway_fqdn.md b/collections/manual/documentation/developers/tfcmd/tfcmd_gateway_fqdn.md new file mode 100644 index 0000000..538438f --- /dev/null +++ b/collections/manual/documentation/developers/tfcmd/tfcmd_gateway_fqdn.md @@ -0,0 +1,87 @@ +

Gateway FQDN

+ +

Table of Contents

- [Introduction](#introduction)
- [Deploy](#deploy)
  - [Required Flags](#required-flags)
  - [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)

***

## Introduction

We explain how to use gateway fully qualified domain names on the TFGrid using `tfcmd`.

## Deploy

```bash
tfcmd deploy gateway fqdn [flags]
```

### Required Flags

- name: name for the gateway deployment, also used for canceling the deployment. Must be unique.
- node: node ID to deploy the gateway on.
- backends: list of backends the gateway will forward requests to.
- fqdn: FQDN pointing to the specified node.

### Optional Flags

- tls: add TLS passthrough option (default false).

Example:

```console
$ tfcmd deploy gateway fqdn -n gatewaytest --node 14 --backends http://93.184.216.34:80 --fqdn example.com
3:34PM INF deploying gateway fqdn
3:34PM INF gateway fqdn deployed
```

## Get

```bash
tfcmd get gateway fqdn <gateway>
```

`<gateway>` is the name used when deploying the gateway FQDN using tfcmd.

Example:

```console
$ tfcmd get gateway fqdn gatewaytest
2:05PM INF gateway fqdn:
{
  "NodeID": 14,
  "Backends": [
    "http://93.184.216.34:80"
  ],
  "FQDN": "awady.gridtesting.xyz",
  "Name": "gatewaytest",
  "TLSPassthrough": false,
  "Description": "",
  "NodeDeploymentID": {
    "14": 19653
  },
  "SolutionType": "gatewaytest",
  "ContractID": 19653
}
```

## Cancel

```bash
tfcmd cancel <deployment-name>
```

`<deployment-name>` is the name of the deployment specified while deploying using tfcmd.
+ +Example: + +```console +$ tfcmd cancel gatewaytest +3:37PM INF canceling contracts for project gatewaytest +3:37PM INF gatewaytest canceled +``` \ No newline at end of file diff --git a/collections/manual/documentation/developers/tfcmd/tfcmd_gateway_name.md b/collections/manual/documentation/developers/tfcmd/tfcmd_gateway_name.md new file mode 100644 index 0000000..a4c8191 --- /dev/null +++ b/collections/manual/documentation/developers/tfcmd/tfcmd_gateway_name.md @@ -0,0 +1,88 @@ +

Gateway Name

+ +

Table of Contents

- [Introduction](#introduction)
- [Deploy](#deploy)
  - [Required Flags](#required-flags)
  - [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)

***

## Introduction

We explain how to use gateway names on the TFGrid using `tfcmd`.

## Deploy

```bash
tfcmd deploy gateway name [flags]
```

### Required Flags

- name: name for the gateway deployment, also used for canceling the deployment. Must be unique.
- backends: list of backends the gateway will forward requests to.

### Optional Flags

- node: node ID the gateway should be deployed on.
- farm: farm ID the gateway should be deployed on; if set, an available node that fits the VM specs is chosen from that farm (default 1). Note: the node and farm flags cannot both be set.
- tls: add TLS passthrough option (default false).

Example:

```console
$ tfcmd deploy gateway name -n gatewaytest --node 14 --backends http://93.184.216.34:80
3:34PM INF deploying gateway name
3:34PM INF fqdn: gatewaytest.gent01.dev.grid.tf
```

## Get

```bash
tfcmd get gateway name <gateway>
```

`<gateway>` is the name used when deploying the gateway name using tfcmd.

Example:

```console
$ tfcmd get gateway name gatewaytest
1:56PM INF gateway name:
{
  "NodeID": 14,
  "Name": "gatewaytest",
  "Backends": [
    "http://93.184.216.34:80"
  ],
  "TLSPassthrough": false,
  "Description": "",
  "SolutionType": "gatewaytest",
  "NodeDeploymentID": {
    "14": 19644
  },
  "FQDN": "gatewaytest.gent01.dev.grid.tf",
  "NameContractID": 19643,
  "ContractID": 19644
}
```

## Cancel

```bash
tfcmd cancel <deployment-name>
```

`<deployment-name>` is the name of the deployment specified while deploying using tfcmd.
+ +Example: + +```console +$ tfcmd cancel gatewaytest +3:37PM INF canceling contracts for project gatewaytest +3:37PM INF gatewaytest canceled +``` \ No newline at end of file diff --git a/collections/manual/documentation/developers/tfcmd/tfcmd_kubernetes.md b/collections/manual/documentation/developers/tfcmd/tfcmd_kubernetes.md new file mode 100644 index 0000000..9a7c2b1 --- /dev/null +++ b/collections/manual/documentation/developers/tfcmd/tfcmd_kubernetes.md @@ -0,0 +1,147 @@ +

Kubernetes

+ +

Table of Contents

- [Introduction](#introduction)
- [Deploy](#deploy)
  - [Required Flags](#required-flags)
  - [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)

***

## Introduction

In this section, we explain how to deploy Kubernetes workloads on the TFGrid using `tfcmd`.

## Deploy

```bash
tfcmd deploy kubernetes [flags]
```

### Required Flags

- name: name for the master node deployment, also used for canceling the cluster deployment. Must be unique.
- ssh: path to the public ssh key to set in the master node.

### Optional Flags

- master-node: node ID the master should be deployed on.
- master-farm: farm ID the master should be deployed on; if set, an available node that fits the master specs is chosen from that farm (default 1). Note: the master-node and master-farm flags cannot both be set.
- workers-node: node ID the workers should be deployed on.
- workers-farm: farm ID the workers should be deployed on; if set, an available node that fits the worker specs is chosen from that farm (default 1). Note: the workers-node and workers-farm flags cannot both be set.
- ipv4: assign a public ipv4 for the master node (default false).
- ipv6: assign a public ipv6 for the master node (default false).
- ygg: assign a yggdrasil ip for the master node (default true).
- master-cpu: number of cpu units for the master node (default 1).
- master-memory: master node memory size in GB (default 1).
- master-disk: master node disk size in GB (default 2).
- workers-number: number of worker nodes (default 0).
- workers-ipv4: assign a public ipv4 for each worker node (default false).
- workers-ipv6: assign a public ipv6 for each worker node (default false).
- workers-ygg: assign a yggdrasil ip for each worker node (default true).
- workers-cpu: number of cpu units for each worker node (default 1).
- workers-memory: memory size for each worker node in GB (default 1).
- workers-disk: disk size in GB for each worker node (default 2).
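Taken together, the sizing flags above determine what a cluster deployment reserves. The helper below is a hedged sketch (not part of `tfcmd`) that totals the documented flag defaults; memory is converted from the GB flag values to the MB figures reported by `tfcmd get kubernetes`:

```python
def cluster_resources(workers_number=0, master_cpu=1, master_memory=1,
                      master_disk=2, workers_cpu=1, workers_memory=1,
                      workers_disk=2):
    """Return (total cpu units, total memory in MB, total disk in GB)
    for a cluster, using the flag defaults documented above."""
    cpu = master_cpu + workers_number * workers_cpu
    memory_mb = (master_memory + workers_number * workers_memory) * 1024
    disk_gb = master_disk + workers_number * workers_disk
    return cpu, memory_mb, disk_gb

# A master with two workers, all sizing flags left at their defaults:
print(cluster_resources(workers_number=2))  # → (3, 3072, 6)
```

This matches the per-node values shown later by `tfcmd get kubernetes`, where each default node reports `"CPU": 1` and `"Memory": 1024`.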
Example:

```console
$ tfcmd deploy kubernetes -n kube --ssh ~/.ssh/id_rsa.pub --master-node 14 --workers-number 2 --workers-node 14
4:21PM INF deploying network
4:22PM INF deploying cluster
4:22PM INF master yggdrasil ip: 300:e9c4:9048:57cf:504f:c86c:9014:d02d
```

## Get

```bash
tfcmd get kubernetes <kubernetes>
```

`<kubernetes>` is the name used when deploying the Kubernetes cluster using tfcmd.

Example:

```console
$ tfcmd get kubernetes kube
3:14PM INF k8s cluster:
{
  "Master": {
    "Name": "kube",
    "Node": 14,
    "DiskSize": 2,
    "PublicIP": false,
    "PublicIP6": false,
    "Planetary": true,
    "Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
    "FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
    "ComputedIP": "",
    "ComputedIP6": "",
    "YggIP": "300:e9c4:9048:57cf:e8a0:662b:4e66:8faa",
    "IP": "10.20.2.2",
    "CPU": 1,
    "Memory": 1024
  },
  "Workers": [
    {
      "Name": "worker1",
      "Node": 14,
      "DiskSize": 2,
      "PublicIP": false,
      "PublicIP6": false,
      "Planetary": true,
      "Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
      "FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
      "ComputedIP": "",
      "ComputedIP6": "",
      "YggIP": "300:e9c4:9048:57cf:66d0:3ee4:294e:d134",
      "IP": "10.20.2.2",
      "CPU": 1,
      "Memory": 1024
    },
    {
      "Name": "worker0",
      "Node": 14,
      "DiskSize": 2,
      "PublicIP": false,
      "PublicIP6": false,
      "Planetary": true,
      "Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
      "FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
      "ComputedIP": "",
      "ComputedIP6": "",
      "YggIP": "300:e9c4:9048:57cf:1ae5:cc51:3ffc:81e",
      "IP": "10.20.2.2",
      "CPU": 1,
      "Memory": 1024
    }
  ],
  "Token": "",
  "NetworkName": "",
  "SolutionType": "kube",
  "SSHKey": "",
  "NodesIPRange": null,
  "NodeDeploymentID": {
    "14": 22743
  }
}
```

## Cancel

```bash
tfcmd cancel <deployment-name>
```

`<deployment-name>` is the name of the deployment specified while
deploying using tfcmd. + +Example: + +```console +$ tfcmd cancel kube +3:37PM INF canceling contracts for project kube +3:37PM INF kube canceled +``` \ No newline at end of file diff --git a/collections/manual/documentation/developers/tfcmd/tfcmd_vm.md b/collections/manual/documentation/developers/tfcmd/tfcmd_vm.md new file mode 100644 index 0000000..21e1471 --- /dev/null +++ b/collections/manual/documentation/developers/tfcmd/tfcmd_vm.md @@ -0,0 +1,171 @@ + +

Deploy a VM

+ +

Table of Contents

- [Introduction](#introduction)
- [Deploy](#deploy)
  - [Flags](#flags)
    - [Required Flags](#required-flags)
    - [Optional Flags](#optional-flags)
  - [Examples](#examples)
    - [Deploy a VM without GPU](#deploy-a-vm-without-gpu)
    - [Deploy a VM with GPU](#deploy-a-vm-with-gpu)
- [Get](#get)
  - [Get Example](#get-example)
- [Cancel](#cancel)
  - [Cancel Example](#cancel-example)
- [Questions and Feedback](#questions-and-feedback)

***

# Introduction

In this section, we explore how to deploy a virtual machine (VM) on the ThreeFold Grid using `tfcmd`.

# Deploy

You can deploy a VM with `tfcmd` using the following template accompanied by required and optional flags:

```bash
tfcmd deploy vm [flags]
```

## Flags

When you use `tfcmd`, there are two required flags (`name` and `ssh`), while the remaining flags are optional. Optional flags can be used, for example, to deploy a VM with a GPU or to set an IPv6 address, and much more.

### Required Flags

- **name**: name for the VM deployment, also used for canceling the deployment. The name must be unique.
- **ssh**: path to the public ssh key to set in the VM.

### Optional Flags

- **node**: node ID the VM should be deployed on.
- **farm**: farm ID the VM should be deployed on; if set, an available node that fits the VM specs is chosen from that farm (default `1`). Note: the node and farm flags cannot both be set.
- **cpu**: number of cpu units (default `1`).
- **disk**: size of disk in GB mounted on `/data`. If not set, no disk workload is made.
- **entrypoint**: entrypoint for the VM FList (default `/sbin/zinit init`). Note: setting this without the flist option will fail.
- **flist**: FList used in the VM (default `https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist`). Note: setting this without the entrypoint option will fail.
- **ipv4**: assign public ipv4 for the VM (default `false`).
- **ipv6**: assign public ipv6 for the VM (default `false`).
- **memory**: memory size in GB (default `1`).
- **rootfs**: root filesystem size in GB (default `2`).
- **ygg**: assign yggdrasil ip for the VM (default `true`).
- **gpus**: assign a list of GPU IDs to the VM. Note: setting this without the node option will fail.

## Examples

We present simple examples of how to deploy a virtual machine with or without a GPU using `tfcmd`.

### Deploy a VM without GPU

```console
$ tfcmd deploy vm --name examplevm --ssh ~/.ssh/id_rsa.pub --cpu 2 --memory 4 --disk 10
12:06PM INF deploying network
12:06PM INF deploying vm
12:07PM INF vm yggdrasil ip: 300:e9c4:9048:57cf:7da2:ac99:99db:8821
```

### Deploy a VM with GPU

```console
$ tfcmd deploy vm --name examplevm --ssh ~/.ssh/id_rsa.pub --cpu 2 --memory 4 --disk 10 --gpus '0000:0e:00.0/1882/543f' --gpus '0000:0e:00.0/1887/593f' --node 12
12:06PM INF deploying network
12:06PM INF deploying vm
12:07PM INF vm yggdrasil ip: 300:e9c4:9048:57cf:7da2:ac99:99db:8821
```

# Get

To get the VM, use the following template:

```bash
tfcmd get vm <deployment-name>
```

Make sure to replace `<deployment-name>` with the name of the VM specified using `tfcmd`.

## Get Example

In the following example, the name of the deployment to get is `examplevm`.
```console
$ tfcmd get vm examplevm
3:20PM INF vm:
{
  "Name": "examplevm",
  "NodeID": 15,
  "SolutionType": "examplevm",
  "SolutionProvider": null,
  "NetworkName": "examplevmnetwork",
  "Disks": [
    {
      "Name": "examplevmdisk",
      "SizeGB": 10,
      "Description": ""
    }
  ],
  "Zdbs": [],
  "Vms": [
    {
      "Name": "examplevm",
      "Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist",
      "FlistChecksum": "",
      "PublicIP": false,
      "PublicIP6": false,
      "Planetary": true,
      "Corex": false,
      "ComputedIP": "",
      "ComputedIP6": "",
      "YggIP": "301:ad3a:9c52:98d1:cd05:1595:9abb:e2f1",
      "IP": "10.20.2.2",
      "Description": "",
      "CPU": 2,
      "Memory": 4096,
      "RootfsSize": 2048,
      "Entrypoint": "/sbin/zinit init",
      "Mounts": [
        {
          "DiskName": "examplevmdisk",
          "MountPoint": "/data"
        }
      ],
      "Zlogs": null,
      "EnvVars": {
        "SSH_KEY": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcGrS1RT36rHAGLK3/4FMazGXjIYgWVnZ4bCvxxg8KosEEbs/DeUKT2T2LYV91jUq3yibTWwK0nc6O+K5kdShV4qsQlPmIbdur6x2zWHPeaGXqejbbACEJcQMCj8szSbG8aKwH8Nbi8BNytgzJ20Ysaaj2QpjObCZ4Ncp+89pFahzDEIJx2HjXe6njbp6eCduoA+IE2H9vgwbIDVMQz6y/TzjdQjgbMOJRTlP+CzfbDBb6Ux+ed8F184bMPwkFrpHs9MSfQVbqfIz8wuq/wjewcnb3wK9dmIot6CxV2f2xuOZHgNQmVGratK8TyBnOd5x4oZKLIh3qM9Bi7r81xCkXyxAZbWYu3gGdvo3h85zeCPGK8OEPdYWMmIAIiANE42xPmY9HslPz8PAYq6v0WwdkBlDWrG3DD3GX6qTt9lbSHEgpUP2UOnqGL4O1+g5Rm9x16HWefZWMjJsP6OV70PnMjo9MPnH+yrBkXISw4CGEEXryTvupfaO5sL01mn+UOyE= abdulrahman@AElawady-PC\n"
      },
      "NetworkName": "examplevmnetwork"
    }
  ],
  "QSFS": [],
  "NodeDeploymentID": {
    "15": 22748
  },
  "ContractID": 22748
}
```

# Cancel

To cancel your VM deployment, use the following template:

```bash
tfcmd cancel <deployment-name>
```

Make sure to replace `<deployment-name>` with the name of the deployment specified using `tfcmd`.

## Cancel Example

In the following example, the name of the deployment to cancel is `examplevm`.
+ +```console +$ tfcmd cancel examplevm +3:37PM INF canceling contracts for project examplevm +3:37PM INF examplevm canceled +``` + +# Questions and Feedback + +If you have any questions or feedback, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/manual/documentation/developers/tfcmd/tfcmd_zdbs.md b/collections/manual/documentation/developers/tfcmd/tfcmd_zdbs.md new file mode 100644 index 0000000..b9c01d7 --- /dev/null +++ b/collections/manual/documentation/developers/tfcmd/tfcmd_zdbs.md @@ -0,0 +1,125 @@ +

ZDBs

+ +

Table of Contents

- [Introduction](#introduction)
- [Deploy](#deploy)
  - [Required Flags](#required-flags)
  - [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)

***

## Introduction

In this section, we explore how to use ZDB-related commands with `tfcmd` to interact with the TFGrid.

## Deploy

```bash
tfcmd deploy zdb [flags]
```

### Required Flags

- project_name: project name for the ZDB deployment, also used for canceling the deployment. Must be unique.
- size: HDD size of the ZDB in GB.

### Optional Flags

- node: node ID the ZDBs should be deployed on.
- farm: farm ID the ZDBs should be deployed on; if set, an available node that fits the ZDB deployment specs is chosen from that farm (default 1). Note: the node and farm flags cannot both be set.
- count: count of ZDBs to be deployed (default 1).
- names: a slice of names for the number of ZDBs.
- password: password for the deployed ZDBs.
- description: description for your ZDBs, it's optional.
- mode: the mode 0-db should operate in (default user).
- public: whether the ZDB gets a public ip6 (default false).

Example:

- Deploying ZDBs

```console
$ tfcmd deploy zdb --project_name examplezdb --size=10 --count=2 --password=password
12:06PM INF deploying zdbs
12:06PM INF zdb 'examplezdb0' is deployed
12:06PM INF zdb 'examplezdb1' is deployed
```

## Get

```bash
tfcmd get zdb <zdb-project-name>
```

`<zdb-project-name>` is the name of the deployment specified while deploying using tfcmd.
Example:

```console
$ tfcmd get zdb examplezdb
3:20PM INF zdb:
{
  "Name": "examplezdb",
  "NodeID": 11,
  "SolutionType": "examplezdb",
  "SolutionProvider": null,
  "NetworkName": "",
  "Disks": [],
  "Zdbs": [
    {
      "name": "examplezdb1",
      "password": "password",
      "public": false,
      "size": 10,
      "description": "",
      "mode": "user",
      "ips": [
        "2a10:b600:1:0:c4be:94ff:feb1:8b3f",
        "302:9e63:7d43:b742:469d:3ec2:ab15:f75e"
      ],
      "port": 9900,
      "namespace": "81-36155-examplezdb1"
    },
    {
      "name": "examplezdb0",
      "password": "password",
      "public": false,
      "size": 10,
      "description": "",
      "mode": "user",
      "ips": [
        "2a10:b600:1:0:c4be:94ff:feb1:8b3f",
        "302:9e63:7d43:b742:469d:3ec2:ab15:f75e"
      ],
      "port": 9900,
      "namespace": "81-36155-examplezdb0"
    }
  ],
  "Vms": [],
  "QSFS": [],
  "NodeDeploymentID": {
    "11": 36155
  },
  "ContractID": 36155,
  "IPrange": ""
}
```

## Cancel

```bash
tfcmd cancel <zdb-project-name>
```

`<zdb-project-name>` is the name of the deployment specified while deploying using tfcmd.

Example:

```console
$ tfcmd cancel examplezdb
3:37PM INF canceling contracts for project examplezdb
3:37PM INF examplezdb canceled
```
\ No newline at end of file
diff --git a/collections/manual/documentation/developers/tfrobot/tfrobot.md b/collections/manual/documentation/developers/tfrobot/tfrobot.md
new file mode 100644
index 0000000..c8b2d5f
--- /dev/null
+++ b/collections/manual/documentation/developers/tfrobot/tfrobot.md
@@ -0,0 +1,13 @@

TFROBOT

TFROBOT (`tfrobot`) is a command line interface tool that offers simultaneous mass deployment of groups of VMs on the ThreeFold Grid, with support for multiple retries for failed deployments and customizable configurations: you can define node groups, VM groups and other settings through a YAML or JSON file.

Consult the [ThreeFoldTech TFROBOT repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/tfrobot) for the latest updates and read the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) to get up to speed if needed.

Table of Contents

+ +- [Installation](./tfrobot_installation.md) +- [Configuration File](./tfrobot_config.md) +- [Deployment](./tfrobot_deploy.md) +- [Commands and Flags](./tfrobot_commands_flags.md) +- [Supported Configurations](./tfrobot_configurations.md) \ No newline at end of file diff --git a/collections/manual/documentation/developers/tfrobot/tfrobot_commands_flags.md b/collections/manual/documentation/developers/tfrobot/tfrobot_commands_flags.md new file mode 100644 index 0000000..f33c59d --- /dev/null +++ b/collections/manual/documentation/developers/tfrobot/tfrobot_commands_flags.md @@ -0,0 +1,57 @@ +

Commands and Flags

+ +

Table of Contents

- [Introduction](#introduction)
- [Commands](#commands)
- [Subcommands](#subcommands)
- [Flags](#flags)

***

## Introduction

We present the various commands, subcommands and flags available with TFROBOT.

## Commands

You can run the command `tfrobot help` at any time to access the help section. This will also display the available commands.

| Command    | Description                                                |
| ---------- | ---------------------------------------------------------- |
| completion | Generate the autocompletion script for the specified shell |
| help       | Help about any command                                     |
| version    | Get latest build tag                                       |

Use `tfrobot [command] --help` for more information about a command.

## Subcommands

You can use subcommands to deploy and cancel workloads on the TFGrid.

- **deploy:** used to mass deploy groups of VMs with specific configurations
  ```bash
  tfrobot deploy -c path/to/your/config.yaml
  ```
- **cancel:** used to cancel all VMs deployed using specific configurations
  ```bash
  tfrobot cancel -c path/to/your/config.yaml
  ```
- **load:** used to load all VMs deployed using specific configurations
  ```bash
  tfrobot load -c path/to/your/config.yaml
  ```

## Flags

You can use different flags to configure your deployment.

| Flag | Usage |
| :---: | :---: |
| -c | used to specify the path to the configuration file |
| -o | used to specify the path to the output file where the output info is stored |
| -d | allow debug logs to appear in the output logs |
| -h | help |

> **Note:** Make sure to use each flag only once. If a flag is repeated, all of its values except the last are ignored.
\ No newline at end of file
diff --git a/collections/manual/documentation/developers/tfrobot/tfrobot_config.md b/collections/manual/documentation/developers/tfrobot/tfrobot_config.md
new file mode 100644
index 0000000..55c2850
--- /dev/null
+++ b/collections/manual/documentation/developers/tfrobot/tfrobot_config.md
@@ -0,0 +1,131 @@

Configuration File

+ +

Table of Contents

- [Introduction](#introduction)
- [Examples](#examples)
  - [YAML Example](#yaml-example)
  - [JSON Example](#json-example)
- [Create a Configuration File](#create-a-configuration-file)

***

## Introduction

To use TFROBOT, the user needs to create a YAML or JSON configuration file containing the mass deployment information, such as the groups information, the number of VMs to deploy, the compute, storage and network resources needed, as well as the user's credentials, such as the SSH public key, the network (main, test, dev, qa) and the TFChain mnemonics.

## Examples

We present here a configuration file example that deploys 3 nodes with 2 vcores, 16GB of RAM, 100GB of SSD, 50GB of HDD and an IPv4 address. The same deployment is shown with a YAML file and with a JSON file. Parsing is based on the file extension: TFROBOT uses the JSON format if the file has a JSON extension and the YAML format otherwise.

You can use this example for guidance, and make sure to replace placeholders and adapt the groups based on your actual project details. At minimum, `ssh_key1` should be replaced by the user SSH public key and `example-mnemonic` should be replaced by the user mnemonics.

Note that if no IPs are specified as true (IPv4 or IPv6), a Yggdrasil IP address will automatically be assigned to the VM, as at least one IP should be set to allow an SSH connection to the VM.
### YAML Example

```yaml
node_groups:
  - name: group_a
    nodes_count: 3
    free_cpu: 2
    free_mru: 16
    free_ssd: 100
    free_hdd: 50
    dedicated: false
    public_ip4: true
    public_ip6: false
    certified: false
    region: europe
vms:
  - name: examplevm123
    vms_count: 5
    node_group: group_a
    cpu: 1
    mem: 0.25
    public_ip4: true
    public_ip6: false
    ssd:
      - size: 15
        mount_point: /mnt/ssd
    flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
    entry_point: /sbin/zinit init
    root_size: 0
    ssh_key: example1
    env_vars:
      user: user1
      pwd: 1234
ssh_keys:
  example1: ssh_key1
mnemonic: example-mnemonic
network: dev
max_retries: 5
```

### JSON Example

```json
{
  "node_groups": [
    {
      "name": "group_a",
      "nodes_count": 3,
      "free_cpu": 2,
      "free_mru": 16,
      "free_ssd": 100,
      "free_hdd": 50,
      "dedicated": false,
      "public_ip4": true,
      "public_ip6": false,
      "certified": false,
      "region": "europe"
    }
  ],
  "vms": [
    {
      "name": "examplevm123",
      "vms_count": 5,
      "node_group": "group_a",
      "cpu": 1,
      "mem": 0.25,
      "public_ip4": true,
      "public_ip6": false,
      "ssd": [
        {
          "size": 15,
          "mount_point": "/mnt/ssd"
        }
      ],
      "flist": "https://hub.grid.tf/tf-official-apps/base:latest.flist",
      "entry_point": "/sbin/zinit init",
      "root_size": 0,
      "ssh_key": "example1",
      "env_vars": {
        "user": "user1",
        "pwd": "1234"
      }
    }
  ],
  "ssh_keys": {
    "example1": "ssh_key1"
  },
  "mnemonic": "example-mnemonic",
  "network": "dev",
  "max_retries": 5
}
```

## Create a Configuration File

You can start with the example above and adjust for your specific deployment needs.

- Create directory
  ```
  mkdir tfrobot_deployments && cd $_
  ```
- Create configuration file and adjust with the provided example above
  ```
  nano config.yaml
  ```

Once you've set your configuration file, all that's left is to deploy on the TFGrid. Read the next section for more information on how to deploy with TFROBOT.
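As a sanity check before deploying, the extension-based parsing rule described in the introduction (JSON for a `.json` file, YAML otherwise) can be sketched as follows. This is an illustration, not TFROBOT's actual API; the YAML branch assumes the third-party PyYAML package is installed:

```python
import json
import pathlib

def load_config(path: str) -> dict:
    """Parse a TFROBOT-style config file, picking the format by extension."""
    text = pathlib.Path(path).read_text()
    if path.endswith(".json"):
        return json.loads(text)
    # Any other extension is treated as YAML (requires PyYAML).
    import yaml
    return yaml.safe_load(text)
```

For example, `load_config("config.json")` parses the file as JSON, while `load_config("config.yaml")` goes through the YAML branch; a file that fails to parse here would also be rejected by the deployer.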
\ No newline at end of file diff --git a/collections/manual/documentation/developers/tfrobot/tfrobot_configurations.md b/collections/manual/documentation/developers/tfrobot/tfrobot_configurations.md new file mode 100644 index 0000000..7ceb867 --- /dev/null +++ b/collections/manual/documentation/developers/tfrobot/tfrobot_configurations.md @@ -0,0 +1,68 @@ +

Supported Configurations

+ +

Table of Contents

- [Introduction](#introduction)
- [Config File](#config-file)
- [Node Group](#node-group)
- [Vms Groups](#vms-groups)
- [Disk](#disk)

***

## Introduction

When deploying with TFROBOT, you can set different configurations allowing for personalized deployments.

## Config File

| Field | Description | Supported Values |
| :---: | :---: | :---: |
| [node_group](#node-group) | description of all resources needed for each node_group | list of structs of type node_group |
| [vms](#vms-groups) | description of resources needed for deploying groups of vms belonging to a node_group | list of structs of type vms |
| ssh_keys | map of ssh keys with key=name and value=the actual ssh key | map of string to string |
| mnemonic | mnemonic of the user | should be valid mnemonic |
| network | valid network of ThreeFold Grid networks | main, test, qa, dev |
| max_retries | times of retries of failed node groups | positive integer |

## Node Group

| Field | Description | Supported Values |
| :---: | :---: | :---: |
| name | name of node_group | node group name should be unique |
| nodes_count | number of nodes in node group | nonzero positive integer |
| free_cpu | number of cpu of node | nonzero positive integer, max = 32 |
| free_mru | free memory in the node in GB | min = 0.25, max = 256 |
| free_ssd | free ssd storage in the node in GB | positive integer value |
| free_hdd | free hdd storage in the node in GB | positive integer value |
| dedicated | are nodes dedicated | `true` or `false` |
| public_ip4 | should the nodes have free ip v4 | `true` or `false` |
| public_ip6 | should the nodes have free ip v6 | `true` or `false` |
| certified | should the nodes be certified (if false, the nodes could be certified or DIY) | `true` or `false` |
| region | region could be the name of the continents the nodes are located in | africa, americas, antarctic, antarctic ocean, asia, europe, oceania, polar |

## Vms Groups

| Field | Description | Supported Values |
| :---: | :---: | :---: |
| name | name of vm group | string value with no special characters |
| vms_count | number of vms in vm group | nonzero positive integer |
| node_group | name of node_group the vm belongs to | should be defined in node_groups |
| cpu | number of cpu for vm | nonzero positive integer, max = 32 |
| mem | free memory in the vm in GB | min = 0.25, max = 256 |
| planetary | should the vm have yggdrasil ip | `true` or `false` |
| public_ip4 | should the vm have free ip v4 | `true` or `false` |
| public_ip6 | should the vm have free ip v6 | `true` or `false` |
| flist | should be a link to valid flist | valid flist url with `.flist` or `.fl` extension |
| entry_point | entry point of the flist | path to the entry point in the flist |
| ssh_key | key of ssh key defined in the ssh_keys map | should be valid ssh_key defined in the ssh_keys map |
| env_vars | map of env vars | map of type string to string |
| ssd | list of disks | should be of type disk |
| root_size | root size in GB | 0 for default root size, max 10TB |

## Disk

| Field | Description | Supported Values |
| :---: | :---: | :---: |
| size | disk size in GB | positive integer, min = 15 |
| mount_point | disk mount point | path to mountpoint |
diff --git a/collections/manual/documentation/developers/tfrobot/tfrobot_deploy.md b/collections/manual/documentation/developers/tfrobot/tfrobot_deploy.md
new file mode 100644
index 0000000..7e16d12
--- /dev/null
+++ b/collections/manual/documentation/developers/tfrobot/tfrobot_deploy.md
@@ -0,0 +1,59 @@

Deployment

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy Workloads](#deploy-workloads) +- [Delete Workloads](#delete-workloads) +- [Logs](#logs) +- [Using TFCMD with TFROBOT](#using-tfcmd-with-tfrobot) + - [Get Contracts](#get-contracts) + +*** + +## Introduction + +We present how to deploy workloads on the ThreeFold Grid using TFROBOT. + +## Prerequisites + +To deploy workloads on the TFGrid with TFROBOT, you first need to [install TFROBOT](./tfrobot_installation.md) on your machine and create a [configuration file](./tfrobot_config.md). + +## Deploy Workloads + +Once you've installed TFROBOT and created a configuration file, you can deploy on the TFGrid with the following command. Make sure to indicate the path to your configuration file. + +```bash +tfrobot deploy -c ./config.yaml +``` + +## Delete Workloads + +To delete the contracts, you can use the following line. Make sure to indicate the path to your configuration file. + +```bash +tfrobot cancel -c ./config.yaml +``` + +## Logs + +To ensure a complete log history, append `2>&1 | tee path/to/log/file` to the command being executed. + +```bash +tfrobot deploy -c ./config.yaml 2>&1 | tee path/to/log/file +``` + +## Using TFCMD with TFROBOT + +### Get Contracts + +The TFCMD tool works well with TFROBOT, as it can be used to query the TFGrid, for example you can see the contracts created by TFROBOT by running the TFCMD command, taking into consideration that you are using the same mnemonics and are on the same network: + +```bash +tfcmd get contracts +``` + +For more information on TFCMD, [read the documentation](../tfcmd/tfcmd.md). \ No newline at end of file diff --git a/collections/manual/documentation/developers/tfrobot/tfrobot_installation.md b/collections/manual/documentation/developers/tfrobot/tfrobot_installation.md new file mode 100644 index 0000000..deec2b8 --- /dev/null +++ b/collections/manual/documentation/developers/tfrobot/tfrobot_installation.md @@ -0,0 +1,36 @@ +

Installation

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Installation](#installation) + +*** + +## Introduction + +This section covers the basics on how to install TFROBOT (`tfrobot`). + +TFROBOT is available as binaries. Make sure to download the latest release and to stay up to date with new releases. + +## Installation + +To install TFROBOT, simply download and extract the TFROBOT binaries to your path. + +- Create a new directory for `tfgrid-sdk-go` + ``` + mkdir tfgrid-sdk-go + cd tfgrid-sdk-go + ``` +- Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) + - ``` + wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v0.14.4/tfgrid-sdk-go_Linux_x86_64.tar.gz + ``` +- Extract the binaries + - ``` + tar -xvf tfgrid-sdk-go_Linux_x86_64.tar.gz + ``` +- Move `tfrobot` to any `$PATH` directory: + ```bash + mv tfrobot /usr/local/bin + ``` \ No newline at end of file diff --git a/collections/manual/documentation/farmers/3node_building/1_create_farm.md b/collections/manual/documentation/farmers/3node_building/1_create_farm.md new file mode 100644 index 0000000..460fc32 --- /dev/null +++ b/collections/manual/documentation/farmers/3node_building/1_create_farm.md @@ -0,0 +1,88 @@ +

1. Create a Farm


## Table of Contents

+ +- [Introduction](#introduction) +- [Create a TFChain Account](#create-a-tfchain-account) +- [Create a Farm](#create-a-farm) +- [Create a ThreeFold Connect Wallet](#create-a-threefold-connect-wallet) +- [Add a Stellar Address for Payout](#add-a-stellar-address-for-payout) + - [Farming Rewards Distribution](#farming-rewards-distribution) +- [More Information](#more-information) + +*** + +## Introduction + +We cover the basic steps to create a farm with the ThreeFold Dashboard. We also create a TFConnect app wallet to receive the farming rewards. + +## Create a TFChain Account + +We create a TFChain account using the ThreeFold Dashboard. + +Go to the [ThreeFold Dashboard](https://dashboard.grid.tf/), click on **Create Account**, choose a password and click **Connect**. + +![tfchain_create_account](./img/dashboard_tfchain_create_account.png) + +Once your profile gets activated, you should find your Twin ID and Address generated under your Mnemonics for verification. Also, your Account Balance will be available at the top right corner under your profile name. + +![tf_mnemonics](./img/dashboard_tf_mnemonics.png) + +## Create a Farm + +We create a farm using the dashboard. + +In the left-side menu, select **Farms** -> **Your Farms**. + +![your_farms](./img/dashboard_your_farms.png) + +Click on **Create Farm**, choose a farm name and then click **Create**. + +![create_farm](./img/dashboard_create_farm.png) + +![farm_name](./img/dashboard_farm_name.png) + +## Create a ThreeFold Connect Wallet + +Your farming rewards should be sent to a Stellar wallet with a TFT trustline enabled. The simplest way to proceed is to create a TF Connect app wallet as the TFT trustline is enabled by default on this wallet. For more information on TF Connect, read [this section](../../threefold_token/storing_tft/tf_connect_app.md). + +Let's create a TFConnect Wallet and take note of the wallet address. First, download the app. 
+ +This app is available for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin&hl=en&gl=US) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885). + +- Note that for Android phones, you need at minimum Android Nougat, the 8.0 software version. +- Note that for iOS phones, you need at minimum iOS 14.5. It will be soon available to iOS 13. + +Open the app, click **SIGN UP**, choose a ThreeFold Connect Id, write your email address, take note of the seed phrase and choose a pin. Once this is done, you will have to verify your email address. Check your email inbox. + +In the app menu, click on **Wallet** and then click on **Create Initial Wallet**. + +To find your wallet address, click on the **circled i** icon at the bottom of the screen. + +![dashboard_tfconnect_wallet_1](./img/dashboard_tfconnect_wallet_1.png) + +Click on the button next to your Stellar address to copy the address. + +![dashboard_tfconnect_wallet_2](./img/dashboard_tfconnect_wallet_2.png) + +You will need the TF Connect wallet address for the next section. + +> Note: Make sure to keep your TF Connect Id and seed phrase in a secure place offline. You will need these two components to recover your account if you lose access. + +## Add a Stellar Address for Payout + +In the **Your Farms** section of the dashboard, click on **Add/Edit Stellar Payout Address**. + +![dashboard_walletaddress_1](./img/dashboard_walletaddress_1.png) + +Paste your Stellar wallet address and click **Submit**. + +![dashboard_walletaddress_2](./img/dashboard_walletaddress_2.png) + +### Farming Rewards Distribution + +Farming rewards will be sent to your farming wallet around the 8th of each month. This can vary depending on the situation. The minting is done automatically by code and verified by humans as a double check. + +## More Information + +For more information, such as setting IP addresses, you can consult the [Dashboard Farms section](../../dashboard/farms/farms.md). 

# 2. Create a Zero-OS Bootstrap Image


## Table of Contents

+ +- [Introduction](#introduction) +- [Download the Zero-OS Bootstrap Image](#download-the-zero-os-bootstrap-image) +- [Burn the Zero-OS Bootstrap Image](#burn-the-zero-os-bootstrap-image) + - [CD/DVD BIOS](#cddvd-bios) + - [USB Key BIOS+UEFI](#usb-key-biosuefi) + - [BalenaEtcher (MAC, Linux, Windows)](#balenaetcher-mac-linux-windows) + - [CLI (Linux)](#cli-linux) + - [Rufus (Windows)](#rufus-windows) +- [Additional Information (Optional)](#additional-information-optional) + - [Expert Mode](#expert-mode) + - [Use a Specific Kernel](#use-a-specific-kernel) + - [Disable GPU](#disable-gpu) + - [Bootstrap Image URL](#bootstrap-image-url) + - [Zeros-OS Bootstrapping](#zeros-os-bootstrapping) + - [Zeros-OS Expert Bootstrap](#zeros-os-expert-bootstrap) + +*** + +## Introduction + +We will now learn how to create a Zero-OS bootstrap image in order to boot a DIY 3Node. + +## Download the Zero-OS Bootstrap Image + +Let's download the Zero-OS bootstrap image. + +In the Farms section of the Dashboard, click on **Bootstrap Node Image** + +![dashboard_bootstrap_farm](./img/dashboard_bootstrap_farm.png) + +or use the direct link [https://v3.bootstrap.grid.tf](https://v3.bootstrap.grid.tf): + +``` +https://v3.bootstrap.grid.tf +``` + +![Farming_Create_Farm_21](./img/farming_createfarm_21.png) + +This is the Zero-OS v3 Bootstrapping page. + +![Farming_Create_Farm_22](./img/farming_createfarm_22.png) + +Write your farm ID and choose production mode. + +![Farming_Create_Farm_23](./img/farming_createfarm_23.png) + +If your system is new, you might be able to run the bootstrap in UEFI mode. + +![Farming_Create_Farm_24](./img/farming_createfarm_24.png) + +For older systems, run the bootstrap in BIOS mode. For BIOS CD/DVD, choose **ISO**. For BIOS USB, choose **USB** + +Download the bootstrap image. Next, we will burn the bootstrap image. + + + +## Burn the Zero-OS Bootstrap Image + +We show how to burn the Zero-OS bootstrap image. 
A quick and modern way is to burn the bootstrap image on a USB key.

### CD/DVD BIOS

For the BIOS **ISO** image, download the file and burn it on a DVD.

### USB Key BIOS+UEFI

There are many ways to burn the bootstrap image on a USB key. The easiest way that works for all operating systems is to use BalenaEtcher. We also provide other methods.

#### BalenaEtcher (MAC, Linux, Windows)

For **MAC**, **Linux** and **Windows**, you can use [BalenaEtcher](https://www.balena.io/etcher/) to load/flash the image on a USB stick. This program also formats the USB in the process. This will work for the option **EFI IMG** for UEFI boot, and with the option **USB** for BIOS boot. Simply follow the steps presented to you and make sure you select the bootstrap image file you downloaded previously.

> Note: There are alternatives to BalenaEtcher (e.g. [usbimager](https://gitlab.com/bztsrc/usbimager/)).

**General Steps with BalenaEtcher:**

1. Download BalenaEtcher
2. Open BalenaEtcher
3. Select **Flash from file**
4. Find and select the bootstrap image (with your correct farm ID)
5. Select **Target** (your USB key)
6. Select **Flash**

That's it. Now you have a Zero-OS bootstrap image on a bootable removable media device.

#### CLI (Linux)

For the BIOS **USB** and the UEFI **EFI IMG** images, you can do the following on Linux:

```
sudo dd status=progress if=FILELOCATION.ISO(or .IMG) of=/dev/sd*
```

Here the * indicates that you must adjust according to your disk. To see your disks, write `lsblk` in the command window. Make sure you select the proper disk!

*If your USB key is not new, make sure that you format it before burning the Zero-OS image.

#### Rufus (Windows)

For Windows, if you are using the "dd"-able image, you can use the free USB flashing program [Rufus](https://sourceforge.net/projects/rufus.mirror/) instead of the command line.
Rufus also formats the boot media in the process.

## Additional Information (Optional)

We cover some additional information. Note that the following information is not needed for a basic farm setup.

### Expert Mode

You can use the [expert mode](https://v3.bootstrap.grid.tf/expert) to generate specific Zero-OS bootstrap images.

Alongside the basic options of the normal bootstrap mode, the expert mode allows farmers to add extra kernel arguments and decide which kernel to use from a vast list of Zero-OS kernels.

#### Use a Specific Kernel

You can use the expert mode to choose a specific kernel. Simply set the information you normally use and then select the proper kernel you need in the **Kernel** drop-down list.

![](./img/bootstrap_kernel_list.png)

#### Disable GPU

You can use the expert mode to disable GPU on your 3Node.

![](./img/bootstrap_disable-gpu.png)

In the expert mode of the Zero-OS Bootstrap generator, fill in the following information:

- Farmer ID
  - Your current farm ID
- Network
  - The network of your farm
- Extra kernel arguments
  - ```
    disable-gpu
    ```
- Kernel
  - Leave the default kernel
- Format
  - Choose a bootstrap image format
- Click on **Generate**
- Click on **Download**

### Bootstrap Image URL

In both normal and expert mode, you can use the generated URL to quickly download a Zero-OS bootstrap image based on your farm-specific setup.

Using URLs can be a very quick and efficient way to create new bootstrap images once you're familiar with the Zero-OS bootstrap URL template and some potential variations:

```
https://v3.bootstrap.grid.tf/<image-format>/<network>/<farm-id>/<extra-kernel-arguments>/.../<kernel>
```

Note that the arguments and the kernel are optional.

The following content will provide some examples.

#### Zeros-OS Bootstrapping

On the [main page](https://v3.bootstrap.grid.tf/), once you've written your farm ID and selected a network, you can copy the generated URL of any given image format.
For example, the following URL is a download link to an **EFI IMG** of the Zero-OS bootstrap image of farm 1 on the main TFGrid v3 network:

```
https://v3.bootstrap.grid.tf/uefimg/prod/1
```

#### Zeros-OS Expert Bootstrap

You can use the generated sublink at the **Generate step** of the expert mode to get a quick URL to download your bootstrap image.

- After setting the parameters and arguments, click on **Generate**
- Add the **Target** content to the following URL `https://v3.bootstrap.grid.tf`
  - For example, the following URL sets an **ipxe** script of the Zero-OS bootstrap of farm 1 on the TFGrid v3 test network, with the **disable-gpu** function enabled as an extra kernel argument and a specific kernel:
  - ```
    https://v3.bootstrap.grid.tf/ipxe/test/1/disable-gpu/zero-os-development-zos-v3-generic-b8706d390d.efi
    ```
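Putting the pattern together, a small helper can assemble the download link for any farm. This is a sketch of ours, not part of any ThreeFold tooling, and the path layout (image format, then network, then farm ID) is inferred from the examples in this section:

```bash
# Build a Zero-OS bootstrap download URL from its parts, following the
# examples in this section: image format, then network, then farm ID.
bootstrap_url() {
    local format="$1" network="$2" farm_id="$3"
    echo "https://v3.bootstrap.grid.tf/${format}/${network}/${farm_id}"
}

# EFI IMG for farm 1 on the production network
bootstrap_url uefimg prod 1    # prints https://v3.bootstrap.grid.tf/uefimg/prod/1
```

Extra kernel arguments and a kernel name, when needed, are appended as further path segments, as in the expert mode example above.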

# 3. Set the Hardware


## Table of Contents

- [Introduction](#introduction)
- [Hardware Requirements](#hardware-requirements)
  - [3Node Requirements Summary](#3node-requirements-summary)
- [Bandwidth Requirements](#bandwidth-requirements)
- [Link to Share Farming Setup](#link-to-share-farming-setup)
- [Powering the 3Node](#powering-the-3node)
  - [Surge Protector](#surge-protector)
  - [Power Distribution Unit (PDU)](#power-distribution-unit-pdu)
  - [Uninterrupted Power Supply (UPS)](#uninterrupted-power-supply-ups)
  - [Generator](#generator)
- [Connecting the 3Node to the Internet](#connecting-the-3node-to-the-internet)
  - [Z-OS and Switches](#z-os-and-switches)
- [Using Onboard Storage (3Node Servers)](#using-onboard-storage-3node-servers)
- [Upgrading a DIY 3Node](#upgrading-a-diy-3node)

***

## Introduction

In this section of the ThreeFold Farmers book, we cover the essential farming requirements when it comes to ThreeFold 3Node hardware.

The essential information is available in the section [3Node Requirements Summary](#3node-requirements-summary).

## Hardware Requirements

You need a theoretical minimum of 500 GB of SSD and 2 GB of RAM on a mini PC, desktop or server. In short, for peak optimization, aim for 100 GB of SSD and 8 GB of RAM per thread. (A thread is equivalent to a virtual core or logical core.)

Also, the TFDAO might implement a farming parameter based on [passmark](https://www.cpubenchmark.net/cpu_list.php). From the ongoing discussion on the Forum, you should aim at a CPU mark of 1000 and above per core.

> 3Node optimal farming hardware ratio -> 100 GB of SSD + 8 GB of RAM per Virtual Core

Note that you can run Zero-OS on a Virtual Machine (VM), but you won't farm any TFT from this process. To farm TFT, Zero-OS needs to be on bare metal.

Also, note that ThreeFold runs its own OS, which is Zero-OS. You thus need to start with completely wiped disks. You cannot farm TFT with Windows, Linux or MAC OS installed on your disks.
If you need to use such an OS temporarily, boot it in Try mode from a removable media (USB key).

Note: Once you have the necessary hardware, you need to [create a farm](./1_create_farm.md), [create a Zero-OS bootstrap image](./2_bootstrap_image.md), [wipe your disks](./4_wipe_all_disks.md) and [set the BIOS/UEFI](./5_set_bios_uefi.md). Then you can [boot your 3Node](./6_boot_3node.md). If you are planning on building a farm in a data center, [read this section](../advanced_networking/advanced_networking_toc.md).

### 3Node Requirements Summary

Any computer with the following specifications can be used as a DIY 3Node.

- Any 64-bit hardware with an Intel or AMD processor chip.
- Servers, desktops and mini computers type hardware are compatible.
- A minimum of 500 GB of SSD and a bare minimum of 2 GB of RAM is required.
- A ratio of 100 GB of SSD and 8 GB of RAM per thread is recommended.
- A wired ethernet connection is highly recommended to maximize reliability and the ability to farm TFT.
- A [passmark](https://www.passmark.com/) of 1000 per core is recommended and will probably be a minimum requirement in the future.

*A passmark of 1000 per core is recommended and will be a minimum requirement in the future. This is not yet an official requirement. A 3Node with less than 1000 passmark per core of CPU would not be penalized if it is registered before the DAO settles the [Passmark Question](https://forum.threefold.io/t/cpu-benchmarking-for-reward-calculations/2479).

## Bandwidth Requirements

A 3Node connects to the ThreeFold Grid and transfers information, whether it is in the form of compute, storage or network units (CU, SU, NU respectively). The more resources your 3Nodes offer to the Grid, the more bandwidth will be needed to transfer the additional information. In this section, we cover general guidelines to make sure you have enough bandwidth on the ThreeFold Grid when utilization happens.
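These guidelines can be checked with a quick calculation. The sketch below — the function name is ours, not part of any ThreeFold tooling — implements the per-3Node equation quoted later in this section, 10 mbps per max(SSD TB, threads/8, RAM GB/64) plus 10 mbps per 2 TB of HDD:

```bash
# Minimum bandwidth (mbps) for a 3Node, per the equation in this section:
#   10 * max(SSD TB, threads / 8, RAM GB / 64) + 10 * (HDD TB / 2)
min_bandwidth() {
    local ssd_tb="$1" threads="$2" ram_gb="$3" hdd_tb="$4"
    awk -v s="$ssd_tb" -v t="$threads" -v r="$ram_gb" -v h="$hdd_tb" '
        BEGIN {
            m = s
            if (t / 8 > m) m = t / 8
            if (r / 64 > m) m = r / 64
            print 10 * m + 10 * h / 2
        }'
}

# A Titan-like 3Node: 1 TB SSD, 8 threads, 64 GB RAM, no HDD
min_bandwidth 1 8 64 0    # prints 10
```

The example reproduces the "proper bandwidth for a Titan would be 10 mbps" figure given below.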
+ +Note that the TFDAO will need to discuss and settle on clearer guidelines in the near future. For now, we propose those general guidelines. Being aware of these numbers as you build and scale your ThreeFold farm will set you in the proper direction. + +> **The strict minimum for one Titan is 1 mbps of bandwidth**. + +If you want to expand your ThreeFold farm, you should check the following to make sure your bandwidth will be sufficient when there will be Grid utilization. + +**Bandwidth per 3Node Equation** + +> min Bandwidth per 3Node (mbps) = 10 * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + 10 * (Total HDD TB / 2) + +This equation means that for each TB of HDD you need 5 mbps of bandwidth, and for each TB of SSD, 8 Threads and 64GB of RAM (whichever is higher), you need 10 mbps of bandwidth. + +This means a proper bandwidth for a Titan would be 10 mbps. As stated, 1 mbps is the strict minimum for one Titan. + + + +## Link to Share Farming Setup + + +If you want ideas and suggestions when it comes to building DIY 3Nodes, a good place to start is by checking what other farmers have built. [This post on the Forum](https://forum.threefold.io/t/lets-share-our-farming-setup/286) is a great start. The following section also contains great DIY 3Node ideas. + +## Powering the 3Node + +### Surge Protector + +A surge protector is highly recommended for your farm and your 3Nodes. This ensures your 3Nodes will not overcharge if a power surge happens. Whole-house surge protectors are also an option. + +### Power Distribution Unit (PDU) + +A PDU (power distribution unit) is useful in big server settings in order to manage your wattage and keep track of your power consumption. + + +### Uninterrupted Power Supply (UPS) + + +A UPS (uninterrupted power supply) is great for a 3Node if your power goes on and off frequently for short periods of time. This ensures your 3Node does not need to constantly reboot. 
If your electricity provider is very reliable, a UPS might not be needed, as the small downtime resulting from rare power outages will not exceed the DIY downtime limit*. (95% uptime, 5% downtime = 36 hours per month.) Of course, for a greater Grid utilization experience, adding a UPS to your ThreeFold farm can be highly beneficial.

Note: Make sure to have AC Recovery set properly so your 3Node goes back online if power shuts down momentarily. UPS are generally used in data centers to make sure people have enough time to do a "graceful" shutdown of the units when power goes off. In the case of 3Nodes, they do not need graceful shutdowns as Zero-OS cannot lose data while functioning. The only way to power down a 3Node is simply to turn it off directly on the machine.

### Generator

A generator will be needed for very large installations, with or without an unsteady main power supply.

## Connecting the 3Node to the Internet

As a general consideration, to connect a 3Node to the Internet, you must use an Ethernet cable and set DHCP as the network management protocol. Note that WiFi is not supported with ThreeFold farming.

The general route from the 3Node to the Internet is the following:

> 3Node -> Switch (optional) -> Router -> Modem

Note that most home routers come with a built-in switch to provide multiple Ethernet ports. Using a stand-alone switch is optional, but can come in quite handy when farmers have many 3Nodes.

### Z-OS and Switches

Switches can be managed or unmanaged. Managed switches come with managed features made available to the user (typically more of such features on premium models).

Z-OS can work with both types of switches. As long as there's a router reachable on the other end offering DHCP and a route to the public internet, it's not important what's in between. Generally speaking, switches are more like cables, just part of the pipes that connect devices in a network.
We present a general overview of the two types of switches.

**Unmanaged Switches**

Unmanaged switches are the most common type, and if someone just says "switch", this is probably what they mean. These switches just forward traffic along to its destination in a plug-and-play manner with no configuration. When a switch is connected to a router, you can think of the additional free ports on the switch as essentially just extra ports on the router. It's a way to expand the available ports and sometimes also avoid running multiple long cables. For example, if your nodes are far from your router, you can run a single long Ethernet cable to a switch next to the nodes and then use multiple shorter cables to connect from the switch to the nodes.

**Managed Switches**

Managed switches have more capabilities than unmanaged switches and they are not very common in home settings (at least not as standalone units). Some of our farmers do use managed switches. These switches offer much more control and also require configuration. They can enable advanced features like virtual LANs to segment the network.
For your 3Nodes, you want to bypass RAID in order for Zero-OS to have bare metal access to the disks.

To use onboard storage on a server without RAID, you can:

1. [Re-flash](https://fohdeesha.com/docs/perc.html) the RAID card
2. Turn on HBA/non-RAID mode
3. Install a different card

For HP servers, you simply turn on the HBA mode (Host Bus Adapter).

For Dell servers, you can either cross-flash or [re-flash](https://fohdeesha.com/docs/perc.html) the RAID controller with an "IT-mode-Firmware" (see this [video](https://www.youtube.com/watch?v=h5nb09VksYw)) or get a DELL H310-controller (which has the non-RAID option). Otherwise, you can install a NVME SSD with a PCIe adaptor, and turn off the RAID controller.

Once the disks are wiped, you can shut down your 3Node and remove the Linux bootstrap image (USB key). Usually, there will be a message telling you when to do so.

## Upgrading a DIY 3Node

It is sometimes necessary, and often useful, to upgrade your hardware.

**Types of upgrades possible**

- Add TBs of SSD/HDD
- Add RAM
- Change CPU
- Change BIOS battery
- Change fans

For some DIY 3Nodes, no upgrades are required and this constitutes a good start if you want to explore DIY building without going through too many additional steps.

For in-depth videos on how to upgrade mini PCs and rack servers, watch these great [DIY videos](https://www.youtube.com/user/floridanelson).

# Building a DIY 3Node

This section of the ThreeFold Farmers book presents the necessary and basic steps to build a DIY 3Node.

For advanced farming information, such as GPU farming and room parameters, refer to the section [Farming Optimization](../farming_optimization/farming_optimization.md).

## Table of Contents

- [1. Create a Farm](./1_create_farm.md)
- [2. Create a Zero-OS Bootstrap Image](./2_bootstrap_image.md)
- [3. Set the Hardware](./3_set_hardware.md)
- [4. Wipe All the Disks](./4_wipe_all_disks.md)
- [5. Set the BIOS/UEFI](./5_set_bios_uefi.md)
- [6. Boot the 3Node](./6_boot_3node.md)

# 4. Wipe All the Disks


## Table of Contents

- [Introduction](#introduction)
- [Main Steps](#main-steps)
- [1. Create a Linux Bootstrap Image](#1-create-a-linux-bootstrap-image)
- [2. Boot Linux in *Try Mode*](#2-boot-linux-in-try-mode)
- [3. Use wipefs to Wipe All the Disks](#3-use-wipefs-to-wipe-all-the-disks)
- [Troubleshooting](#troubleshooting)

***

## Introduction

In this section of the ThreeFold Farmers book, we explain how to wipe all the disks of your 3Node.

## Main Steps

It only takes a few steps to wipe all the disks of a 3Node.

1. Create a Linux Bootstrap Image
2. Boot Linux in *Try Mode*
3. Wipe All the Disks

ThreeFold runs its own OS, which is Zero-OS. You thus need to start with completely wiped disks. Note that ALL disks must be wiped. Otherwise, Zero-OS won't boot.

An easy method is to simply download a Linux distribution and wipe the disks with the proper command line in the terminal.

We will show how to do this with Ubuntu 20.04 LTS. This distribution is easy to use and it is thus a good introduction to Linux, in case you haven't yet explored this great operating system.

## 1. Create a Linux Bootstrap Image

Download the Ubuntu 20.04 ISO file [here](https://releases.ubuntu.com/20.04/) and burn the ISO image on a USB key. Make sure you have enough space on your USB key. You can also use another Linux distro such as [GRML](https://grml.org/download/), if you want a lighter ISO image.

The process here is the same as in the section [Burning the Bootstrap Image](./2_bootstrap_image.md#burn-the-zero-os-bootstrap-image), but with the Linux ISO instead of the Zero-OS ISO. [BalenaEtcher](https://www.balena.io/etcher/) is recommended as it formats your USB in the process, and it is available for MAC, Windows and Linux.

## 2. Boot Linux in *Try Mode*

When you boot the Linux ISO image, make sure to choose *Try Mode*. Otherwise, it will install Linux on your computer. You do not want this.

## 3. Use wipefs to Wipe All the Disks

When you use wipefs, you are removing all the data on your disk. Make sure you have no important data on your disks, or make sure you have copies of your disks before doing this operation, if needed.

Once Linux is booted, go into the terminal and write the following command lines.

First, you can check the available disks by writing in a terminal or in a shell:

```
lsblk
```

To see what disks are connected, write this command:

```
fdisk -l
```

If you want to wipe one specific disk, here we use *sda* as an example, write this command:

```
sudo wipefs -a /dev/sda
```

And replace the "a" in sda by the letter of your disk, as shown when you did *lsblk*. The term *sudo* gives you the correct permission to do this.

To wipe all the disks in your 3Node, write the command:

```
sudo bash -c 'for i in /dev/sd*; do wipefs -a $i; done'
```

Note that the loop is wrapped in `bash -c` because `sudo` can only run a program, not a shell keyword like `for`.

If you have any `fdisk` entries that look like `/dev/nvme`, you'll need to adjust the command line.

For a nvme disk, here we use *nvme0* as an example, write:

```
sudo wipefs -a /dev/nvme0
```

And replace the "0" in nvme0 by the number corresponding to your disk, as shown when you did *lsblk*.

To wipe all the nvme disks, write this command line:

```
sudo bash -c 'for i in /dev/nvme*; do wipefs -a $i; done'
```

## Troubleshooting

If you're having issues wiping the disks, you might need to use **--force** or **-f** with wipefs (e.g. **sudo wipefs -af /dev/sda**).

If you're having trouble getting your disks recognized by Zero-OS, some farmers have had success enabling AHCI mode for SATA in their BIOS.

If you are using a server with onboard storage, you might need to [re-flash the RAID card](../../faq/faq.md#is-there-a-way-to-bypass-raid-in-order-for-zero-os-to-have-bare-metals-on-the-system-no-raid-controller-in-between-storage-and-the-grid).
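Since wiping every disk is irreversible, it can help to preview exactly which devices a glob will match before running wipefs. The sketch below is ours (the function name is not from any ThreeFold tooling): it only prints the commands, so you can verify the device list before removing the `echo` and running it as root.

```bash
# Print (rather than run) the wipefs command for every existing device
# passed in. Remove the "echo" and run as root once you have
# double-checked the list of devices.
wipe_all() {
    local dev
    for dev in "$@"; do
        [ -e "$dev" ] || continue    # skip globs that matched nothing
        echo wipefs -a "$dev"
    done
}

# Preview what would be wiped: whole SATA disks and NVMe namespaces.
wipe_all /dev/sd? /dev/nvme?n?
```

Using a function (or `bash -c`) also sidesteps a common pitfall: `sudo` can only execute a program, not a shell keyword such as `for`.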

# 5. Set the BIOS/UEFI


## Table of Contents

+ +- [Introduction](#introduction) +- [Z-OS and DHCP](#z-os-and-dhcp) + - [Regular Computer and 3Node Network Differences](#regular-computer-and-3node-network-differences) + - [Static IP Addresses](#static-ip-addresses) +- [The Essential Features of BIOS/UEFI for a 3Node](#the-essential-features-of-biosuefi-for-a-3node) +- [Setting the Remote Management of a Server with a Static IP Address (Optional)](#setting-the-remote-management-of-a-server-with-a-static-ip-address-optional) +- [Update the BIOS/UEFI firmware (Optional)](#update-the-biosuefi-firmware-optional) + - [Check the BIOS/UEFI version on Windows](#check-the-biosuefi-version-on-windows) + - [Check the BIOS/UEFI version on Linux](#check-the-biosuefi-version-on-linux) + - [Update the BIOS firmware](#update-the-bios-firmware) +- [Additional Information](#additional-information) + - [BIOS/UEFI and Zero-OS Bootstrap Image Combinations](#biosuefi-and-zero-os-bootstrap-image-combinations) + - [Troubleshoot](#troubleshoot) + + +*** + +## Introduction + +In this section of the ThreeFold Farmers book, we explain how to properly set the BIOS/UEFI of your 3Node. + +Note that the BIOS mode is usually needed for older hardware while the UEFI mode is usually needed for newer hardware, when it comes to booting properly Zero-OS on your DIY 3Node. + +If it doubt, start with UEFI and if it doesn't work as expected, try with BIOS. + +Before diving into the BIOS/UEFI settings, we will present some general considerations on Z-OS and DHCP. + +## Z-OS and DHCP + +The operating system running on the 3Nodes is called Zero-OS (Z-OS). When it comes to setting the proper network for your 3Node farm, you must use DHCP since Z-OS is going to request an IP from the DHCP server if there's one present, and it won't get network connectivity if there's no DHCP. + +The Z-OS philosophy is to minimize configuration wherever possible, so there's nowhere to supply a static config when setting your 3Node network. 
Instead, the farmer is expected to provide DHCP.

While it is possible to set fixed IP addresses with the DHCP for the 3Nodes, it is recommended to avoid this and just set the DHCP normally without fixed IP addresses.

By setting DHCP in BIOS/UEFI, an IP address is automatically assigned by your router to your 3Node every time you boot it.

### Regular Computer and 3Node Network Differences

For a regular computer (not a 3Node), if you want to use a static IP in a network with DHCP, you'd first turn off DHCP and then set the static IP to an IP address outside the DHCP range. That being said, with Z-OS, there's no option to turn off DHCP and there's nowhere to set a static IP, besides public config and remote management. In brief, the farmer must provide DHCP, either on a private or a public range, for the 3Node to boot.

### Static IP Addresses

In the ThreeFold ecosystem, there are only two situations where you would work with static IP addresses: to set a public config to a 3Node or a farm, and to remotely manage your 3Nodes.

**Static IP and Public Config**

You can [set a static IP for the public config of a 3Node or a farm](./1_create_farm.md#optional-add-public-ip-addresses). In this case, the 3Node takes information from TF Chain and uses it to set a static configuration on a NIC (or on a virtual NIC in the case of single NIC systems).

**Static IP and Remote Management**

You can [set a static IP address to remotely manage a 3Node](#setting-the-remote-management-of-a-server-with-a-static-ip-address-optional).

## The Essential Features of BIOS/UEFI for a 3Node

There are certain things that you should make sure are set properly on your 3Node.

As general advice, you can Load Defaults (Settings) on your BIOS, then make sure the options below are set properly.
+ +* Choose the correct combination of BIOS/UEFI and bootstrap image on [https://bootstrap.grid.tf/](https://bootstrap.grid.tf/) + * Newer systems will use UEFI + * Older systems will use BIOS + * Hint: If your 3Node boot stops at *Initializing Network Devices*, try the other method (BIOS or UEFI) +* Set Multi-Processor and Hyperthreading at Enabled + * Sometimes, it will be labeled Virtual Cores or Logical Cores. +* Set Virtualization at Enabled + * On Intel, it is denoted as CPU virtualization and on ASUS, it is denoted as SVM. + * Make sure virtualization is enabled and look for the precise terms in your specific BIOS/UEFI. +* Set AC Recovery at Last Power State + * This will make sure your 3Node restarts after losing power momentarily. +* Select the proper Boot Sequence for the 3Node to boot Zero-OS from your bootstrap image + * e.g., if you have a USB key as a bootstrap image, select it in Boot Sequence +* Set Server Lookup Method (or the equivalent) at DNS. Only use Static IP if you know what you are doing. + * Your router will assign a dynamic IP address to your 3Node when it connects to the Internet. +* Set Client Address Method (or the equivalent) at DHCP. Only use Static IP if you know what you are doing. + * Your router will assign a dynamic IP address to your 3Node when it connects to the Internet. +* Secure Boot should be left disabled + * Enable it only if you know what you are doing. + + + + +## Setting the Remote Management of a Server with a Static IP Address (Optional) + + +Note from the list above that by enabling DHCP and DNS in the BIOS, dynamic IP addresses will be assigned to 3Nodes. This way, you do not need any specific port configuration when booting a 3Node. + +As long as the 3Node is connected to the Internet via an ethernet cable (WiFi is not supported), Zero-OS will be able to boot. By setting DHCP in BIOS, an IP address is automatically assigned to your 3Node every time you boot it. 
This section concerns 3Node servers with remote management functions and interfaces. + +You can set up a node through static routing at the router without DHCP by assigning the MAC address of the NIC to an IP address within your private subnet. This will give a static IP address to your 3Node. + +With a static IP address, you can then configure remote management on servers. For Dell, [iDRAC](https://www.dell.com/support/kbdoc/en-us/000134243/how-to-setup-and-manage-your-idrac-or-cmc-for-dell-poweredge-servers-and-blades) is used, and for HP, [ILO](https://support.hpe.com/hpesc/public/docDisplay?docId=a00045463en_us&docLocale=en_US) is used. + + + +## Update the BIOS/UEFI firmware (Optional) + + +Updating the BIOS firmware is not always necessary, but doing so can help prevent future errors and troubleshooting. Making sure the Date and Time are set correctly can also help the booting process. + +Note: updating the BIOS/UEFI firmware is optional, but recommended. + + +### Check the BIOS/UEFI version on Windows + +Hit *Start*, type in *cmd* in the search box and click on *Command Prompt*. Write the line + +> wmic bios get smbiosbiosversion + +This will give you the BIOS or UEFI firmware version of your PC. + +### Check the BIOS/UEFI version on Linux + +Simply type the following command + +> sudo dmidecode | less + +or this line: + +> sudo dmidecode -s bios-version + +### Update the BIOS firmware + +1. On the manufacturer's website, download the latest BIOS/UEFI firmware +2. Put the file on a USB flash drive (unzip it if necessary) +3. Restart your hardware and enter the BIOS/UEFI settings +4. Navigate the menus to update the BIOS/UEFI + +## Additional Information + +### BIOS/UEFI and Zero-OS Bootstrap Image Combinations + +To properly boot the Zero-OS image, you can either use an image made for a BIOS system or a UEFI system, depending on your system. + +BIOS is older technology. It means *Basic Input/Output System*. + +UEFI is newer technology. 
It means *Unified Extensible Firmware Interface*. BIOS/UEFI is, in a way, the link between the hardware and the software of your computer. + +In general, setting a 3Node is similar whether it is with a BIOS or UEFI system. The important thing is to choose the correct combination of boot media and boot mode (BIOS/UEFI). + +The bootstrap images are available [here](https://bootstrap.grid.tf/). + +The choices are: + +1. EFI IMG - UEFI +2. EFI FILE - UEFI +3. iPXE - Boot from network +4. ISO - BIOS +5. USB - BIOS +6. LKRN - Boot from network + +Choices 1 and 2 are for UEFI (newer models). +Choices 4 and 5 are for BIOS (older models). +Choices 3 and 6 are mainly for network boot. + +Refer to [this previous section](./2_bootstrap_image.md) for more information on creating a Zero-OS bootstrap image. + +For information on how to boot Zero-OS with iPXE, read [this section](./6_boot_3node.md#advanced-booting-methods-optional). + +### Troubleshoot + +You might have to try UEFI first and if it doesn't work, try BIOS. Usually when this is the case (UEFI doesn't work with your current computer), the following message will be shown: + +> Initializing Network Devices... + +And then... nothing. This means that you are still in the BIOS of the hardware and booting has not even started yet. When this happens, try the BIOS mode of your computer. \ No newline at end of file diff --git a/collections/manual/documentation/farmers/3node_building/6_boot_3node.md b/collections/manual/documentation/farmers/3node_building/6_boot_3node.md new file mode 100644 index 0000000..404e2e9 --- /dev/null +++ b/collections/manual/documentation/farmers/3node_building/6_boot_3node.md @@ -0,0 +1,169 @@ +

<h1>6. Boot the 3Node</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [1. Booting the 3Node with Zero-OS](#1-booting-the-3node-with-zero-os) +- [2. Check the 3Node Status Online](#2-check-the-3node-status-online) +- [3. Receive the Farming Rewards](#3-receive-the-farming-rewards) +- [Advanced Booting Methods (Optional)](#advanced-booting-methods-optional) + - [PXE Booting with OPNsense](#pxe-booting-with-opnsense) + - [PXE Booting with pfSense](#pxe-booting-with-pfsense) +- [Booting Issues](#booting-issues) + - [Multiple nodes can run with the same node ID](#multiple-nodes-can-run-with-the-same-node-id) + +*** + + +## Introduction + +We explain how to boot the 3Node with the Zero-OS bootstrap image on a USB key. We also include optional advanced booting methods using OPNsense and pfSense. + +One of the great features of Zero-OS is that it can be completely run within the cache of your 3Node. Indeed, the booting device that contains your farm ID will connect to the ThreeFold Grid and download everything needed to run smoothly. There are many benefits in terms of security and protection of data that come with this. + +## 1. Booting the 3Node with Zero-OS + +To boot Zero-OS, insert your Zero-OS bootstrap image USB key, power on your computer and choose the right booting sequence and parameters ([BIOS or UEFI](./5_set_bios_uefi.md)) in your BIOS/UEFI settings. Then, restart the 3Node. Zero-OS should boot automatically. + +Note that you need an ethernet cable connected to your router or switch. You cannot farm on the ThreeFold Grid with Wifi. + +The first time you boot a 3Node, the screen will display: “This node is not registered (farmer: NameOfFarm)”. This is normal. The Grid will create a node ID and you will be able to see it on screen. This can take a couple of minutes. + +If an hour or more passes and the node does not get registered, [wiping the disks](./4_wipe_all_disks.md) and trying another reboot usually resolves the issue. 
+ +Once you have your node ID, you can also go on the ThreeFold Dashboard to see your 3Node and verify that your 3Node is online. + +## 2. Check the 3Node Status Online + +You can use the ThreeFold [Node Finder](../../dashboard/deploy/node_finder.md) to verify that your 3Node is online. + +* [ThreeFold Main Net Dashboard](https://dashboard.grid.tf/) +* [ThreeFold Test Net Dashboard](https://dashboard.test.grid.tf/) +* [ThreeFold Dev Net Dashboard](https://dashboard.dev.grid.tf/) +* [ThreeFold QA Net Dashboard](https://dashboard.qa.grid.tf/) + + +## 3. Receive the Farming Rewards + +The farming reward will be sent once per month to the address you gave when you set up your farm. You can review this process [here](./1_create_farm.md#add-a-stellar-address-for-payout). + +That's it. You've now completed the necessary steps to build a DIY 3Node and to connect it to the Grid. + +## Advanced Booting Methods (Optional) + +### PXE Booting with OPNsense + +> This documentation comes from the [amazing Network Booting Guide](https://forum.ThreeFold.io/t/network-booting-tutorial/2688) by @Fnelson on the ThreeFold Forum. + +Network booting replaces your standard boot USB with a local server. This TFTP server delivers your boot files to your 3Nodes. This can be useful in bigger home farms, but is all but mandatory in a data center setup. + +Network boot setup is quite easy and is centered on configuring a TFTP server. There are essentially two options for this: a small dedicated server such as a Raspberry Pi, or piggybacking on your pfSense or OPNsense router. I would recommend the latter as it eliminates another piece of equipment and is probably more reliable. + +**Setting Up Your Router to Allow Network Booting** + +These steps are for OPNsense; pfSense may differ. These steps are required regardless of where you have your TFTP server. 
+ +> Services>DHCPv4>LAN>Network Booting + +Check “Enable Network Booting” + +Enter the IP address of your TFTP server under “Set next-server IP”. This may be the router’s IP or whatever device you are booting from. + +Enter “pxelinux.0” under Set default bios filename. + +Ignore the TFTP Server settings. + + +**TFTP server setup on a Debian-based machine such as Ubuntu or a Raspberry Pi** + +> apt-get update +> +> apt-get install tftpd-hpa +> +> cd /srv/tftp/ +> +> wget http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/netboot.tar.gz +> +> wget http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/pxelinux.0 +> +> wget https://bootstrap.grid.tf/krn/prod/ --no-check-certificate +> +> mv ipxe-prod.lkrn +> +> tar -xvzf netboot.tar.gz +> +> rm version.info netboot.tar.gz +> +> rm pxelinux.cfg/default +> +> chmod 777 /srv/tftp/pxelinux.cfg (optional if next step fails) +> +> echo 'default ipxe-prod.lkrn' >> pxelinux.cfg/default + + +**TFTP Server on an OPNsense router** + +> Note: When using pfSense instead of OPNsense, steps are probably similar, but the directory or other small things may differ. + +The first step is to download the TFTP server plugin. Go to System>Firmware>Status and check for updates, then follow the prompts to install. Then click the Plugins tab, search for tftp, and install os-tftp. Once that is installed, go to Services>TFTP (you may need to refresh the page). Check the Enable box and input your router IP (normally 192.168.1.1). Click save. + +Turn on SSH for your router. In OPNsense it is System>Settings>Administration. Then check the Enable, root login, and password login boxes. Hop over to PuTTY and connect to your router, normally 192.168.1.1. Log in as root and input your password. Hit 8 to enter the shell. 
+ +In OPNsense, the TFTP directory is /usr/local/tftp + +> cd /usr/local +> +> mkdir tftp +> +> cd ./tftp +> +> fetch http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/netboot.tar.gz +> +> fetch http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/pxelinux.0 +> +> fetch https://bootstrap.grid.tf/krn/prod/ +> +> mv ipxe-prod.lkrn +> +> tar -xvzf netboot.tar.gz +> +> rm version.info netboot.tar.gz +> +> rm pxelinux.cfg/default +> +> echo 'default ipxe-prod.lkrn' >> pxelinux.cfg/default + +You can get out of the shell by entering exit or just closing the window. + +**3Node Setup** + +Set the server to BIOS boot and put PXE or network boot as the first choice. At least on Dell machines, make sure you have the network cable in plug 1 or it won’t boot. + + + +### PXE Booting with pfSense + +> This documentation comes from the [amazing Network Booting Guide](https://forum.threefold.io/t/network-booting-tutorial/2688/7) by @TheCaptain on the ThreeFold Forum. + +These are the steps required to enable PXE booting on pfSense. This guide assumes you’ll be using the router as your PXE server; pfSense allows boot file uploads directly from its web GUI. + +* Log into your pfSense instance + * Go to System>Package Manager + * Search and add the ‘tftpd’ package under the ‘Available Packages’ tab +* Go to Services>TFTP Server + * Under the ‘Settings’ tab, check enable and enter the router IP in the TFTP Server Bind IP field +* Switch to the ‘Files’ tab under Services>TFTP Server and upload your ‘ipxe-prod.efi’ file acquired from https://v3.bootstrap.grid.tf/ (second option labeled ‘EFI Kernel’) +* Go to Services>DHCP Server + * Under the ‘Other Options’ section, click Display Advanced next to ‘TFTP’ and enter the router IP + * Click Display Advanced next to ‘Network Booting’ + * Check enable, enter the router IP in the Next Server field + * Enter ipxe-prod.efi in the Default BIOS file name field + +That's it! 
You’ll want to ensure your clients are configured with IPv4 boot in the first spot of the boot priority. You might need to disable Secure Boot and enable legacy boot within the BIOS. + +## Booting Issues + +### Multiple nodes can run with the same node ID + +This is a [known issue](https://github.com/threefoldtech/info_grid/issues/122) and will be resolved once the TPM effort gets finalized. + diff --git a/collections/manual/documentation/farmers/3node_building/gpu_farming.md b/collections/manual/documentation/farmers/3node_building/gpu_farming.md new file mode 100644 index 0000000..b684163 --- /dev/null +++ b/collections/manual/documentation/farmers/3node_building/gpu_farming.md @@ -0,0 +1,72 @@ +

<h1>GPU Farming</h1>

+ +Welcome to the *GPU Farming* section of the ThreeFold Manual! + +In this guide, we delve into the realm of GPU farming, shedding light on the significance of Graphics Processing Units (GPUs) and how they can be seamlessly integrated into the ThreeFold ecosystem. + +

<h2>Table of Contents</h2>

+ +- [Understanding GPUs](#understanding-gpus) +- [Get Started](#get-started) +- [Install the GPU](#install-the-gpu) +- [GPU Node and the Farmerbot](#gpu-node-and-the-farmerbot) +- [Set a Price for the GPU Node](#set-a-price-for-the-gpu-node) +- [Check the GPU Node on the Node Finder](#check-the-gpu-node-on-the-node-finder) +- [Reserving the GPU Node](#reserving-the-gpu-node) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Understanding GPUs + +A Graphics Processing Unit, or GPU, is a specialized electronic circuit designed to accelerate the rendering of images and videos. Originally developed for graphics-intensive tasks in gaming and multimedia applications, GPUs have evolved into powerful parallel processors with the ability to handle complex computations, such as 3D rendering, AI and machine learning. + +In the context of ThreeFold, GPU farming involves harnessing the computational power of Graphics Processing Units to contribute to the decentralized grid. This empowers users to participate in the network's mission of creating a more equitable and efficient internet infrastructure. + +## Get Started + +In this guide, we focus on the integration of GPUs with a 3Node, the fundamental building block of the ThreeFold Grid. The process involves adding a GPU to enhance the capabilities of your node, providing increased processing power and versatility for a wide range of tasks. Note that any Nvidia or AMD graphics card should work as long as it's supported by the system. + +## Install the GPU + +We cover the basic steps to install the GPU on your 3Node. + +* Find a proper GPU model for your specific 3Node hardware +* Install the GPU on the server + * Note: You might need to move or remove some pieces of your server to make room for the GPU +* (Optional) Boot the 3Node with a Linux distro (e.g. 
Ubuntu) and use the terminal to check if the GPU is recognized by the system + * ``` + sudo lshw -C Display + ``` + * Output example with an AMD Radeon (on the line `product: ...`) +![gpu_farming](./img/cli_display_gpu.png) +* Boot the 3Node with the ZOS bootstrap image + +## GPU Node and the Farmerbot + +If you are using the Farmerbot, it might be a good idea to first boot the GPU node without the Farmerbot (i.e., remove the node from the config file and restart the Farmerbot). Once you've confirmed that the GPU is properly detected by TFChain, you can then put the GPU node back in the config file and restart the Farmerbot. While this is not necessary, it can be an effective way to test the GPU node separately. + +## Set a Price for the GPU Node + +You can [set additional fees](../farming_optimization/set_additional_fees.md) for your GPU dedicated node on the [TF Dashboard](https://dashboard.grid.tf/). + +When a user reserves your 3Node as a dedicated node, you will receive TFT payments once every 24 hours. These TFT payments will be sent to the TFChain account of your farm's twin. + +## Check the GPU Node on the Node Finder + +You can use the [Node Finder](../../dashboard/deploy/node_finder.md) on the [TF Dashboard](https://dashboard.grid.tf/) to verify that the node is displayed as having a GPU. + +* On the Dashboard, go to the Node Finder +* Under **Node ID**, write the node ID of the GPU node +* Once the results are displayed, you should see **1** under **GPU** + * If you are using the Status bot, you might need to change the node status under **Select Nodes Status** (e.g. **Down**, **Standby**) to see the node's information + +> Note: It can take some time for the GPU parameter to be displayed. + +## Reserving the GPU Node + +Now, users can reserve the node in the **Dedicated Nodes** section of the Dashboard and then deploy workloads using the GPU. For more information, read [this documentation](../../dashboard/deploy/dedicated_machines.md). 
+ +## Questions and Feedback + +If you have any questions or feedback, we invite you to discuss with the ThreeFold community on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Farmer chat](https://t.me/threefoldfarmers) on Telegram. \ No newline at end of file diff --git a/collections/manual/documentation/farmers/3node_building/img/.done b/collections/manual/documentation/farmers/3node_building/img/.done new file mode 100644 index 0000000..d672ef9 --- /dev/null +++ b/collections/manual/documentation/farmers/3node_building/img/.done @@ -0,0 +1 @@ +farming_30.png diff --git a/collections/manual/documentation/farmers/3node_building/img/bootstrap_disable_gpu.png b/collections/manual/documentation/farmers/3node_building/img/bootstrap_disable_gpu.png new file mode 100644 index 0000000..f72f450 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/bootstrap_disable_gpu.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/bootstrap_kernel_list.png b/collections/manual/documentation/farmers/3node_building/img/bootstrap_kernel_list.png new file mode 100644 index 0000000..3f3f559 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/bootstrap_kernel_list.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/cli_display_gpu.png b/collections/manual/documentation/farmers/3node_building/img/cli_display_gpu.png new file mode 100644 index 0000000..777b68b Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/cli_display_gpu.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_1.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_1.png new file mode 100644 index 0000000..bcfed02 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_1.png differ diff --git 
a/collections/manual/documentation/farmers/3node_building/img/dashboard_2.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_2.png new file mode 100644 index 0000000..1f6538a Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_2.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_4.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_4.png new file mode 100644 index 0000000..54c413a Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_4.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_5.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_5.png new file mode 100644 index 0000000..8bfe4e4 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_5.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_6.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_6.png new file mode 100644 index 0000000..980d6d3 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_6.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_create_farm.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_create_farm.png new file mode 100644 index 0000000..858da74 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_create_farm.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_farm_name.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_farm_name.png new file mode 100644 index 0000000..c250693 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_farm_name.png differ diff --git 
a/collections/manual/documentation/farmers/3node_building/img/dashboard_tf_mnemonics.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_tf_mnemonics.png new file mode 100644 index 0000000..ec92cde Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_tf_mnemonics.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_tfchain_create_account.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_tfchain_create_account.png new file mode 100644 index 0000000..1bdcd95 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_tfchain_create_account.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_tfconnect_wallet_1.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_tfconnect_wallet_1.png new file mode 100644 index 0000000..52a9679 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_tfconnect_wallet_1.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_tfconnect_wallet_2.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_tfconnect_wallet_2.png new file mode 100644 index 0000000..1ef9dc9 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_tfconnect_wallet_2.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_walletaddress_1.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_walletaddress_1.png new file mode 100644 index 0000000..da65f76 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_walletaddress_1.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_walletaddress_2.png 
b/collections/manual/documentation/farmers/3node_building/img/dashboard_walletaddress_2.png new file mode 100644 index 0000000..a36e095 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_walletaddress_2.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/dashboard_your_farms.png b/collections/manual/documentation/farmers/3node_building/img/dashboard_your_farms.png new file mode 100644 index 0000000..58de47a Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/dashboard_your_farms.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_001.png b/collections/manual/documentation/farmers/3node_building/img/farming_001.png new file mode 100644 index 0000000..b666fce Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_001.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_30.png b/collections/manual/documentation/farmers/3node_building/img/farming_30.png new file mode 100644 index 0000000..d810d13 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_30.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_1.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_1.png new file mode 100644 index 0000000..95545d8 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_1.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_2.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_2.png new file mode 100644 index 0000000..488f970 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_2.png differ diff --git 
a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_21.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_21.png new file mode 100644 index 0000000..ab36d45 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_21.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_22.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_22.png new file mode 100644 index 0000000..7cfc6ba Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_22.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_23.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_23.png new file mode 100644 index 0000000..3537771 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_23.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_24.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_24.png new file mode 100644 index 0000000..4b2da96 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_24.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_3.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_3.png new file mode 100644 index 0000000..349ec17 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_3.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_4.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_4.png new file mode 100644 index 0000000..b717bfe Binary files 
/dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_4.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_5.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_5.png new file mode 100644 index 0000000..a022a1f Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_5.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_6.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_6.png new file mode 100644 index 0000000..922b77b Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_6.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_7.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_7.png new file mode 100644 index 0000000..e4bdb6e Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_7.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_8.png b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_8.png new file mode 100644 index 0000000..e8d023c Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/farming_createfarm_8.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_1.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_1.png new file mode 100644 index 0000000..c93f67c Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_1.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_10.png 
b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_10.png new file mode 100644 index 0000000..880b4fe Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_10.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_11.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_11.png new file mode 100644 index 0000000..f5aea83 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_11.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_12.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_12.png new file mode 100644 index 0000000..afb06e0 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_12.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_13.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_13.png new file mode 100644 index 0000000..110a97a Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_13.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_14.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_14.png new file mode 100644 index 0000000..8c2d2c6 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_14.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_15.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_15.png new file mode 100644 index 0000000..30d41f2 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_15.png 
differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_16.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_16.png new file mode 100644 index 0000000..320536f Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_16.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_2.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_2.png new file mode 100644 index 0000000..b1276d9 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_2.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_4.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_4.png new file mode 100644 index 0000000..9ce84ae Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_4.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_5.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_5.png new file mode 100644 index 0000000..488bad0 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_5.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_6.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_6.png new file mode 100644 index 0000000..77b3f03 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_6.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_7.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_7.png new file mode 100644 index 0000000..21e55fc Binary files /dev/null and 
b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_7.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_8.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_8.png new file mode 100644 index 0000000..a00dfa1 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_8.png differ diff --git a/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_9.png b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_9.png new file mode 100644 index 0000000..64b9685 Binary files /dev/null and b/collections/manual/documentation/farmers/3node_building/img/tf_dashboard_2023_9.png differ diff --git a/collections/manual/documentation/farmers/3node_building/minting_receipts.md b/collections/manual/documentation/farmers/3node_building/minting_receipts.md new file mode 100644 index 0000000..2b84f1b --- /dev/null +++ b/collections/manual/documentation/farmers/3node_building/minting_receipts.md @@ -0,0 +1,98 @@ +

Minting Receipts

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Access the Reports](#access-the-reports) +- [Available Information](#available-information) + +*** + +## Introduction + +Once you have the receipt hash of your node minting, you can get the [minting report](../../dashboard/tfchain/tf_minting_reports.md) of your node. + +## Access the Reports + +- On the Dashboard, go to **TFChain** -> **TF Minting Reports** +- Enter your receipt hash +- Consult your minting report + +## Available Information + +The ThreeFold Alpha minting tool will present the following information for each minting receipt hash: + +- Node Info + - Node ID + - Farm Name and ID + - Measured Uptime +- Node Resources + - CU + - SU + - NU + - CRU + - MRU + - SRU + - HRU +- TFT Farmed +- Payout Address + + \ No newline at end of file diff --git a/collections/manual/documentation/farmers/advanced_networking/advanced_networking_toc.md b/collections/manual/documentation/farmers/advanced_networking/advanced_networking_toc.md new file mode 100644 index 0000000..f30a86f --- /dev/null +++ b/collections/manual/documentation/farmers/advanced_networking/advanced_networking_toc.md @@ -0,0 +1,13 @@ +

Advanced Networking

+ +Welcome to the *Advanced Networking* section of the ThreeFold Manual. + +In this section, we provide advanced networking tips for farms with public IPs and in data centers (DC). We also cover the differences between IPv4 and IPv6 networking. + +

Table of Contents

+ +- [Networking Overview](./networking_overview.md) +- [Network Considerations](./network_considerations.md) +- [Network Setup](./network_setup.md) + +> Note: This documentation does not constitute a complete set of knowledge on setting up farms with public IP addresses in a data center. Please make sure to do your own research and communicate with your data center and your Internet service provider for any additional information. \ No newline at end of file diff --git a/collections/manual/documentation/farmers/advanced_networking/network_considerations.md b/collections/manual/documentation/farmers/advanced_networking/network_considerations.md new file mode 100644 index 0000000..0576fd9 --- /dev/null +++ b/collections/manual/documentation/farmers/advanced_networking/network_considerations.md @@ -0,0 +1,120 @@ +

Network Considerations

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Running ZOS (v2) at Home](#running-zos-v2-at-home) +- [Running ZOS (v2) in a Multi-Node Farm in a DC](#running-zos-v2-in-a-multi-node-farm-in-a-dc) + - [Necessities](#necessities) + - [IPv6](#ipv6) + - [Routing/Firewalling](#routingfirewalling) + - [Multi-NIC Nodes](#multi-nic-nodes) + - [Farmers and the TFGrid](#farmers-and-the-tfgrid) + +*** + +## Introduction + +Running ZOS on a node is just a matter of booting it with a USB stick, or with a dhcp/bootp/tftp server with the right configuration so that the node can start the OS. +Once it starts booting, the OS detects the NICs and starts the network configuration. A Node can only continue its boot process to the end when it has effectively received an IP address and a route to the Internet. Without that, the Node will retry indefinitely to obtain Internet access and not finish its startup. + +So a Node needs to be connected to a __wired__ network, providing a dhcp server and a default gateway to the Internet, be it NATed or plainly on the public network; any route to the Internet, be it IPv4 or IPv6 or both, is sufficient. + +For a node to have the ability to host user networks, we **strongly** advise a working IPv6 setup, as that is the primary IP stack we're using for the User Network's Mesh to function. + +## Running ZOS (v2) at Home + +Running a ZOS Node at home is simple: connect it to your router, plug it in the network, insert the preconfigured USB stick containing the bootloader and the `farmer_id`, and power it on. + +## Running ZOS (v2) in a Multi-Node Farm in a DC + +Multi-Node Farms, where a farmer wants to host the nodes in a data centre, are basically just as simple, but the nodes can boot from a boot server that provides DHCP and also delivers the iPXE image to load, without the need for a USB stick in every Node. + +A boot server is not really necessary, but it helps! 
That server has a list of the MAC addresses of the nodes, and delivers the bootloader over PXE. The farmer is responsible for setting up the network and configuring the boot server. + +### Necessities + +The Farmer needs to: + +- Obtain an IPv6 prefix allocation from the provider. A publicly reachable `/64` will do, but a `/48` is advisable if the farmer wants to provide IPv6 transit for User Networks. +- If IPv6 is not an option, obtain an IPv4 subnet from the provider. At least one IPv4 address per node is needed, where all IP addresses are publicly reachable. +- Have the Nodes connected on that public network with a switch so that all Nodes are publicly reachable. +- In case of multiple NICs, also make sure the farm is properly registered in BCDB, so that the Node's public IP Addresses are registered. +- Properly list the MAC addresses of the Nodes, and configure the DHCP server to provide an IP address, and in case of multiple NICs also provide private IP addresses over DHCP per Node. +- Make sure that after first boot, the Nodes are reachable. + +### IPv6 + +IPv6, although a real protocol since '98, has seen reluctant adoption over the years, mostly because ISPs and Carriers were reluctant to deploy it, not seeing the need since the advent of NAT and private IP space, which gave a false impression of security. +But this month (10/2019), RIPE sent a mail to all its LIRs that the last consecutive /22 in IPv4 has been allocated. Needless to say, that makes the transition to IPv6 in 2019 of utmost importance and necessity. +Hence, ZOS starts with IPv6, and IPv4 is merely an afterthought ;-) +So in a nutshell: we greatly encourage Farmers to have IPv6 on the Node's network. + +### Routing/Firewalling + +Basically, the Nodes are self-protecting, in the sense that they provide no means at all to be accessed through listening processes. 
No service is active on the node itself, and User Networks function solely on an overlay. +That also means that there is no need for a Farm admin to protect the Nodes from exterior access, albeit some DDoS protection might be a good idea. +In the first phase we will still allow the Host OS (ZOS) to reply to ICMP ping requests, but that 'feature' might as well be blocked in the future, as once a Node is able to register itself, there is no real need to ever try to reach it. + +### Multi-NIC Nodes + +Nodes that Farmers deploy are typically multi-NIC Nodes, where one NIC (typically 1GBit) is used for running a proper DHCP server from which the Nodes can boot, and another NIC (1GBit or even 10GBit) is used for transfers of User Data, so that there is a clean separation and injection of bogus data is not possible. + +That means that there would be two networks, either on different physical switches, or on port-based VLANs in the switch (if there is only one). + +- Management NICs + The Management NIC will be used by ZOS to boot and register itself to the GRID. Also, all communications from the Node to the Grid happen from there. +- Public NICs + +### Farmers and the TFGrid + +A Node, being part of the Grid, has no concept of 'Farmer'. The only relationship between a Node and a Farmer is the fact that it is registered 'somewhere (TM)', and that as such, workloads on a Node will be remunerated with Tokens. For the rest, a Node is a wholly stand-alone thing that participates in the Grid. 
+ +```text + 172.16.1.0/24 + 2a02:1807:1100:10::/64 ++--------------------------------------+ +| +--------------+ | +-----------------------+ +| |Node ZOS | +-------+ | | +| | +-------------+1GBit +--------------------+ 1GBit switch | +| | | br-zos +-------+ | | +| | | | | | +| | | | | | +| | | | +------------------+----+ +| +--------------+ | | +-----------+ +| | OOB Network | | | +| | +----------+ ROUTER | +| | | | +| | | | +| | | | +| +------------+ | +----------+ | +| | Public | | | | | +| | container | | | +-----+-----+ +| | | | | | +| | | | | | +| +---+--------+ | +-------------------+--------+ | +| | | | 10GBit Switch | | +| br-pub| +-------+ | | | +| +-----+10GBit +-------------------+ | +----------> +| +-------+ | | Internet +| | | | +| | +----------------------------+ ++--------------------------------------+ + 185.69.167.128/26 Public network + 2a02:1807:1100:0::/64 + +``` + +Where the underlay part of the wireguard interfaces gets instantiated in the Public container (namespace), and once created these wireguard interfaces get sent into the User Network (Network Resource), where a user can then configure the interface as he sees fit. + +The router of the farmer fulfills 2 roles: + +- NAT everything in the OOB network to the outside, so that nodes can start and register themselves, as well as get tasks to execute from the BCDB. +- Route the assigned IPv4 subnet and IPv6 public prefix on the public segment, to which the public container is connected. + +As such, in case the farmer wants to provide IPv4 public access for grid proxies, the node will need at least one (1) IPv4 address. The farmer is free to assign IPv4 addresses to only a part of the Nodes. +On the other hand, it is quite important to have a proper IPv6 setup, because the User Network's Mesh relies primarily on it. + +It's the Farmer's task to set up the Router and the switches. 
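The boot-server and DHCP duties described above can be sketched in a small configuration; this is only an illustrative example (not an official ThreeFold config), assuming dnsmasq serves the OOB/management segment shown in the diagram (`172.16.1.0/24`), with a placeholder node MAC address and a hypothetical `ipxe.efi` bootloader filename:

```conf
# Illustrative dnsmasq sketch for the OOB/management segment.
interface=eth0                          # NIC attached to the OOB network
dhcp-range=172.16.1.100,172.16.1.200,12h

# Pin a known node MAC to a fixed lease (placeholder MAC and address)
dhcp-host=aa:bb:cc:dd:ee:ff,172.16.1.21

# Serve the bootloader over TFTP for PXE boot (placeholder filename)
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=ipxe.efi
```

The actual bootloader file and address plan depend on the farm; the point is simply that one small DHCP/TFTP service on the management network covers both the lease bookkeeping and the PXE delivery mentioned in the Necessities list.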
+ +In a simpler setup (a small number of nodes, for instance), the farmer could set up a single switch and make 2 port-based VLANs to separate OOB and Public, or even, with single-NIC nodes, just put them directly on the public segment, but then he will have to provide a DHCP server on the Public network. \ No newline at end of file diff --git a/collections/manual/documentation/farmers/advanced_networking/network_setup.md b/collections/manual/documentation/farmers/advanced_networking/network_setup.md new file mode 100644 index 0000000..1a0302d --- /dev/null +++ b/collections/manual/documentation/farmers/advanced_networking/network_setup.md @@ -0,0 +1,86 @@ +

Network Setup

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Network Setup for Farmers](#network-setup-for-farmers) + - [Step 1. Testing for IPv6 Availability in Your Location](#step-1-testing-for-ipv6-availability-in-your-location) + - [Step 2. Choosing the Setup to Connect Your Nodes](#step-2-choosing-the-setup-to-connect-your-nodes) + - [2.1 Home Setup](#21-home-setup) + - [2.2 Data Center/Advanced Setup](#22-data-centeradvanced-setup) +- [General Notes](#general-notes) + +*** + +# Introduction + +0-OS nodes participating in the Threefold grid need connectivity, of course. They need to be able to communicate over +the Internet with each other in order to do various things: + +- download their OS modules +- perform OS module upgrades +- register themselves to the grid, and send regular updates about their status +- query the grid for tasks to execute +- build and run the Overlay Network +- download flists and the effective files to cache + +The nodes themselves can have connectivity in a few different ways: + +- Only have RFC1918 private addresses, connected to the Internet through NAT, NO IPv6 + Mostly, these are single-NIC (Network card) machines that can host some workloads through the Overlay Network, but + can't expose services directly. These are HIDDEN nodes, and are mostly booted with a USB stick from + bootstrap.grid.tf . +- Dual-stacked: having RFC1918 private IPv4 and public IPv6, where the IPv6 addresses are received from a home router, +but firewalled for outgoing traffic only. These nodes are effectively also HIDDEN +- Nodes with 2 NICs, one that is effectively connected to a segment that has real public +addresses (IPv4 and/or IPv6) and one NIC that is used for booting and local +management. 
(OOB) (like in the drawing for the farmer setup) + +For Farmers, we need Nodes to be reachable over IPv6, so that the nodes can: + +- expose services to be proxied into containers/vms +- act as aggregating nodes for Overlay Networks for HIDDEN Nodes + +Some Nodes in Farms should also have a publicly reachable IPv4, to make sure that clients that only have IPv4 can +effectively reach exposed services. + +But we need to stress the importance of IPv6 availability when you're running a multi-node farm in a datacentre: as the +grid is boldly claiming to be a new Internet, we should make sure we adhere to the new protocols that are future-proof. +Hence: IPv6 is the base, and IPv4 is just there to accommodate the transition. + +Nowadays, RIPE can't even hand out consecutive /22 IPv4 blocks any more for new LIRs, so you'll be bound to the market to +get IPv4, mostly at rates of 10-15 Euro per IP. Things tend to get costly that way. + +So anyway, IPv6 is not an afterthought in 0-OS; we're starting with it. + +# Network Setup for Farmers + +This is a quick manual on what is needed for connecting a node with zero-OS V2.0. + +## Step 1. Testing for IPv6 Availability in Your Location +As described above, the network in which the node is installed has to be IPv6 enabled. This is not an afterthought: as we are building a new Internet, it has to be based on the new and forward-looking IP addressing scheme. This is something you have to investigate and negotiate with your connectivity provider. Many (but not all) home connectivity products and certainly most data centers can provide you with IPv6. There are many sources of information on how to test and check whether your connection is IPv6 enabled, [here is a starting point](http://www.ipv6enabled.org/ipv6_enabled/ipv6_enable.php). + +## Step 2. 
Choosing the Setup to Connect Your Nodes + +Once you have established that you have IPv6 enabled on the network you are about to deploy on, you have to make sure that there is an IPv6 DHCP facility available. Zero-OS does not work with static IPv6 addresses (at this point in time). So you have to choose and create one of the following setups: + +### 2.1 Home Setup + +Use your (home) ISP router's IPv6 DHCP capabilities to provide (private) IPv6 addresses. The principle works the same as for IPv4 home connections: everything is enabled by Network Address Translation (just like anything else that uses internet connectivity). This should be relatively straightforward if you have established that your connection has IPv6 enabled. + +### 2.2 Data Center/Advanced Setup + +In this situation there are many options on how to set up your node. This requires you as the expert to make a few decisions on how to connect it and what the best setup is that you can support for the operational time of your farm. The same basic principles apply: + - You have to have a block of (public) IPv6 routed to your router, or you have to have your router set up to provide Network Address Translation (NAT) + - You have to have a DHCP server in your network that manages and controls IPv6 address leases. Depending on your specific setup, you have this DHCP server manage a public IPv6 range, which makes all nodes directly connected to the public internet, or you have this DHCP server manage a private block of IPv6 addresses, which makes all your nodes connect to the internet through NAT. + +As a farmer you are in charge of selecting and creating the appropriate network setup for your farm. + +# General Notes + +The above setup will allow your node(s) to appear in the explorer on the TFGrid and will allow you to earn farming tokens. As stated in the introduction, ThreeFold is creating next generation internet capacity and therefore has IPv6 as its base building block. 
Connecting to the current (dominant) IPv4 network happens for IT workloads through so-called web gateways. As the word says, these are gateways that provide connectivity between the current leading IPv4 addressing scheme and IPv6. + +We have started a forum where people share their experiences and configurations. This is a work in progress and forever growing. + +**IMPORTANT**: You as a farmer do not need access to IPv4 to be able to rent capacity for IT workloads that need to be visible on IPv4; this is something that can happen elsewhere on the TFGrid. + \ No newline at end of file diff --git a/collections/manual/documentation/farmers/advanced_networking/networking_overview.md b/collections/manual/documentation/farmers/advanced_networking/networking_overview.md new file mode 100644 index 0000000..c4bc322 --- /dev/null +++ b/collections/manual/documentation/farmers/advanced_networking/networking_overview.md @@ -0,0 +1,94 @@ +

Networking Overview

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Possible Configurations](#possible-configurations) +- [Overall Requirements](#overall-requirements) +- [Notes and Warnings](#notes-and-warnings) + - [Management Interfaces](#management-interfaces) + - [Data Center Cable Management](#data-center-cable-management) + - [Static IP Uplink](#static-ip-uplink) +- [Testing the Setup](#testing-the-setup) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this section, we provide advanced networking tips for farms with public IPs and in data centers (DC). The information available in this section is a combination of documentation from ThreeFold and tips and advice from community members who experienced first-hand the creation of ThreeFold farms that make use of public IP blocks in data centers, personal data centers and home farms. A special thank you to those who contributed to improving the TFGrid and its knowledge base documentation. + +## Possible Configurations + +For farmers who have public IPs, extra considerations are needed in setting up the network of the farm. We will go through the main considerations in this section. + +First, we must acknowledge that by the open-source design of ThreeFold farming, a farm can range from a simple [single 3Node](../3node_building/3node_building.md) setup, to a multi-rack farm hosted in a typical data center, and everything in-between, from the farmer experimenting with public IP blocks, to the entrepreneur who builds their own data center at home. + +There are thus many types of farms and each will have varying configurations. The simplest way to set up a farm has been extensively discussed in the first steps of creating a farm. But what are the other more complex configurations possible? 
Let's go through some of those: + +- Network link + - DC provides a network link into the farmer's rack +- Router and switch + - The farmer provides their own router and switch + - DC provides a router and/or switch in the rack +- Gateway IP and public IP + - Gateway IP provided is in the same range as the public IPs + - Gateway IP is in a different range than the public IPs +- Segmenting + - Farmer segments the OOB ("Zos"/private) interfaces and the public interfaces into + - separate VLANs, OR; + - uses separate switches altogether + - No segmenting is actually necessary, the farmer connects all interfaces to one switch + +## Overall Requirements + +There are overall requirements for any 3Node farm using IP address blocks in a data center or at home: + +- There must be at least one interface that provides DHCP to each node +- Public IPs must be routable from at least one interface + +Note that redundancy can help in avoiding a single point of failure [(SPOF)](https://en.wikipedia.org/wiki/Single_point_of_failure). + +## Notes and Warnings + +### Management Interfaces + +You should make sure to never expose management interfaces to the public internet. + + +### Data Center Cable Management + +It's important to have good cable management, especially if you are in a data center. Proper cable management will improve the cooling streams of your farm. There shouldn't be any cable in front of the fans. This way, your servers will last longer. If you want to patch a rack, you have to have patch cables of all lengths from 30cm to 3m. Also, try to keep the cables as short as possible. Arrange the cables in bundles of eight and lead them to the sides of the rack as much as possible for optimal airflow. + + + + + +### Static IP Uplink + +If your DC uplink is established by simple static IP (which is the case in most DCs), there is a simple setup possible. Note that if you have to use PPPoE or PPTP/L2TP (like a consumer internet connection at most homes), this would not work. 
If your WAN is established by static IP, you can simply attach the WAN uplink provided by the DC to one of the switches (and not to the WAN-side of your own router). Then, the WAN-side of the router needs to be attached to the switch too. By doing so, your nodes will be able to connect directly to the DC gateway, in the same way that the router is connecting its WAN-side to the gateway, without the public IP traffic being routed/bridged through the router (bypassing it). + +With a network configured like this, it does not matter which ports you use for which NICs of your nodes. You can just plug them in anywhere. But there is one restriction: the DC uplink must use a static IP. A dynamic IP would not work because you would then have two DHCP servers in the same physical network (the one from the DC and your own router). + +## Testing the Setup + +Manual and automatic validation of the network of a farm are possible. More information on automatic validation will be added in the future. + +You can test the network of your farm manually by deploying a workload on your 3Nodes with either a gateway or a public IP reserved. + +## Questions and Feedback + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Farmer Chat](https://t.me/threefoldfarmers) on Telegram. \ No newline at end of file diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/3node_diy_desktop.md b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/3node_diy_desktop.md new file mode 100644 index 0000000..de74eb0 --- /dev/null +++ b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/3node_diy_desktop.md @@ -0,0 +1,406 @@ +

Building a DIY 3Node: Desktop Computer

+ +In the following 3Node DIY guide, you will learn how to turn a Dell Optiplex 7020 into a 3Node that farms on the ThreeFold Grid. + +Note that the process is similar for other desktop computers. + +
+ + + +

Table of Contents

+ + + +- [Prerequisite](#prerequisite) + - [DIY 3Node Computer Requirements](#diy-3node-computer-requirements) + - [DIY 3Node Material List](#diy-3node-material-list) +- [1. Create a Farm](#1-create-a-farm) + - [Using Dashboard](#using-dashboard) + - [Using TF Connect App](#using-tf-connect-app) +- [2. Create a Zero-OS Bootstrap Image](#2-create-a-zero-os-bootstrap-image) + - [Download the Zero-OS Boostrap Image](#download-the-zero-os-boostrap-image) + - [Burn the Zero-OS Bootstrap Image](#burn-the-zero-os-bootstrap-image) +- [3. Set the Hardware](#3-set-the-hardware) +- [4. Wipe All the Disks](#4-wipe-all-the-disks) + - [1. Create a Linux Boostrap Image](#1-create-a-linux-boostrap-image) + - [2. Boot Linux in Try Mode](#2-boot-linux-in-try-mode) + - [3. Use wipefs to Wipe All Disks](#3-use-wipefs-to-wipe-all-disks) +- [5. Set the BIOS/UEFI](#5-set-the-biosuefi) + - [The Essential Features of BIOS/UEFI for a 3Node](#the-essential-features-of-biosuefi-for-a-3node) + - [Set the BIOS/UEFI on a Dell Optiplex 7020](#set-the-biosuefi-on-a-dell-optiplex-7020) +- [6. Boot the 3Node](#6-boot-the-3node) + - [Check the Node Status](#check-the-node-status) + - [Farming Rewards Distribution](#farming-rewards-distribution) +- [Additional Information](#additional-information) + +*** + +
+ + + +# Prerequisite + +## DIY 3Node Computer Requirements + + + +Any computer with the following specifications can be used as a DIY 3Node. + +- Any 64-bit hardware with an Intel or AMD processor chip. +- Server, desktop and mini computer type hardware is compatible. +- A minimum of 500 GB of SSD and a bare minimum of 2 GB of RAM is required. +- A ratio of 100GB of SSD and 8GB of RAM per thread is recommended. +- A wired ethernet connection is highly recommended to maximize reliability and the ability to farm TFT. +- A [passmark](https://www.passmark.com/) of 1000 per core is recommended and will be a minimum requirement in the future. + +In this guide, we are using a Dell Optiplex 7020. It constitutes a perfect affordable entry DIY 3Node as it can be bought refurbished with the proper ratio of 100GB of SSD and 8GB of RAM per thread, without any need for upgrades or updates. + + + +## DIY 3Node Material List + + + +* Any computer meeting the DIY 3Node Computer Requirements stated above +* Ethernet cable +* Router + Modem +* Surge Protector +* 2x USB keys (4 GB) +* Android/iOS Phone +* Computer monitor and cable, keyboard and mouse +* MAC/Linux/Windows Computer + + + +
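The recommended ratio of 100 GB of SSD and 8 GB of RAM per thread translates into simple arithmetic; a quick sketch (the 4-thread count is just an example):

```bash
# Recommended minimums per the 100 GB SSD / 8 GB RAM per thread guideline.
THREADS=4                        # example: a 4-thread CPU

SSD_GB=$(( THREADS * 100 ))
RAM_GB=$(( THREADS * 8 ))
echo "SSD: ${SSD_GB} GB, RAM: ${RAM_GB} GB"   # → SSD: 400 GB, RAM: 32 GB
```

The Dell Optiplex 7020 used in this guide already meets this ratio out of the box, which is why no upgrades are needed.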
+ +# 1. Create a Farm + +You can create a farm with either the ThreeFold Dashboard or the ThreeFold Connect app. + +## Using Dashboard + +The Dashboard section contains all the information required to [create a farm](../../../dashboard/farms/your_farms.md). + +## Using TF Connect App + +You can [create a ThreeFold farm](../../../threefold_token/storing_tft/tf_connect_app.md) with the ThreeFold Connect App. + + +# 2. Create a Zero-OS Bootstrap Image + +## Download the Zero-OS Boostrap Image + +We will now learn how to create a Zero-OS Bootstrap Image in order to boot a DIY 3Node. + +Go to the [ThreeFold Zero-OS Bootstrap Link](https://v3.bootstrap.grid.tf), as shown below. + +![Farming_Create_Farm_21](./img/farming_createfarm_21.png) + +This is the Zero-OS v3 Bootstrapping page. + +![Farming_Create_Farm_22](./img/farming_createfarm_22.png) + +Write your farm ID and choose production mode. + +![Farming_Create_Farm_23](./img/farming_createfarm_23.png) + +Download the bootstrap image. Next, we will burn the bootstrap image. + + + +
+ +## Burn the Zero-OS Bootstrap Image + + + +For **MAC**, **Linux** and **Windows**, you can use [BalenaEtcher](https://www.balena.io/etcher/) to load/flash the image on a USB stick. This program also formats the USB in the process. This will work for the option **EFI IMG** for UEFI boot, and with the option **USB** for BIOS boot. Simply follow the steps presented to you and make sure you select the correct bootstrap image file you downloaded previously. + +General Steps: + +1. Download BalenaEtcher at [https://balena.io/etcher](https://balena.io/etcher) + +![3node_diy_desktop_42.png](img/3node_diy_desktop_42.png) + +![3node_diy_desktop_43.png](img/3node_diy_desktop_43.png) + +![3node_diy_desktop_44.png](img/3node_diy_desktop_44.png) + +![3node_diy_desktop_45.png](img/3node_diy_desktop_45.png) + +![3node_diy_desktop_48.png](img/3node_diy_desktop_48.png) + +![3node_diy_desktop_49.png](img/3node_diy_desktop_49.png) + +2. Open BalenaEtcher + +![3node_diy_desktop_50.png](img/3node_diy_desktop_50.png) + +3. Select **Flash from file** + +![3node_diy_desktop_52.png](img/3node_diy_desktop_52.png) + +4. Find and select the bootstrap image in your computer + +5. Select **Target** (your USB key) + +![3node_diy_desktop_53.png](img/3node_diy_desktop_53.png) + +![3node_diy_desktop_54.png](img/3node_diy_desktop_54.png) + +6. Select **Flash** + +![3node_diy_desktop_55.png](img/3node_diy_desktop_55.png) + +![3node_diy_desktop_56.png](img/3node_diy_desktop_56.png) + + +That's it. Now you have a Zero-OS bootstrap image on a bootable removable media device. + + + +
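On Linux, the same flashing can also be done from a terminal with `dd` instead of BalenaEtcher; a hedged sketch, where the image filename and the `/dev/sdX` device are placeholders you must replace with your own values:

```bash
# Flash the bootstrap image from the command line (Linux).
# IMG and DEV are placeholders -- substitute your downloaded file and USB device.
IMG="zero-os-bootstrap.img"
DEV="/dev/sdX"                 # identify the USB key first with: lsblk

echo "Would flash ${IMG} to ${DEV}"
# Uncomment only after double-checking DEV -- dd overwrites the target entirely:
# sudo dd if="${IMG}" of="${DEV}" bs=4M status=progress && sync
```

Unlike BalenaEtcher, `dd` performs no safety checks, so verifying the target device with `lsblk` before running it is essential.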
+ +# 3. Set the Hardware + +Setting the hardware of this DIY 3Node is very easy as there are no updates or upgrades needed. Simply unbox the computer and plug everything in. + +![3node_diy_desktop_40.png](img/3node_diy_desktop_40.jpeg) + +![3node_diy_desktop_38.png](img/3node_diy_desktop_38.jpeg) + +![3node_diy_desktop_30.png](img/3node_diy_desktop_30.jpeg) + +![3node_diy_desktop_29.png](img/3node_diy_desktop_29.jpeg) + +Plug the computer cable in the surge protector + +![3node_diy_desktop_6.png](img/3node_diy_desktop_6.png) + +Connect the computer cable, the ethernet cable, the mouse and keyboard cable and the monitor cable. + +![3node_diy_desktop_13.png](img/3node_diy_desktop_13.jpeg) + +Plug the ethernet cable in the router (or the switch) + +![3node_diy_desktop_6.png](img/3node_diy_desktop_3.png) + + + +
+ +# 4. Wipe All the Disks + +In this section, we will learn how to create a Linux bootstrap image, boot it in Try mode and then wipe all the disks in your 3Node. To create a Linux bootstrap image, follow the same process as when we burnt the Zero-OS bootstrap image. + + + +## 1. Create a Linux Boostrap Image + + + +1. Download the Ubuntu 20.04 ISO file [here](https://releases.ubuntu.com/20.04/) +2. Burn the ISO image on a USB key with Balena Etcher + + + +## 2. Boot Linux in Try Mode + + + +1. Insert your Linux bootstrap image USB key in your computer and boot it +2. During boot, press F12 to enter the boot menu +3. Select your booting device, here it is: *UEFI: USB DISK 2.0* + +![3node_diy_desktop_107.png](img/3node_diy_desktop_107.jpeg) + +4. Select Try or install Ubuntu + +![3node_diy_desktop_106.png](img/3node_diy_desktop_106.jpeg) + +5. Select Try Ubuntu + +![3node_diy_desktop_105.png](img/3node_diy_desktop_105.jpeg) + + + +## 3. Use wipefs to Wipe All Disks + + + +Once Ubuntu is booted, you will land on the main page. + +![3node_diy_desktop_67.png](img/3node_diy_desktop_67.png) + +At the bottom left of the screen, click on Applications. + +![3node_diy_desktop_68.png](img/3node_diy_desktop_68.png) + +In Applications, select Terminal. + +![3node_diy_desktop_69.png](img/3node_diy_desktop_69.png) + +If you don't see it, write terminal in the search box. + +![3node_diy_desktop_70.png](img/3node_diy_desktop_70.png) + +You will land in the Ubuntu Terminal. + +![3node_diy_desktop_71.png](img/3node_diy_desktop_71.png) + +Run the command *lsblk* as shown below. You will then see the disks in your computer. You want to wipe the main disk, but not the USB key we are using, named *sdb* here. We can see here that the SSD disk, *sda*, has 3 partitions: *sda1*, *sda2*, *sda3*. Note that when wiping the disks, we want no partitions left. + +In this case, the disk we want to wipe is *sda*. 
+ +![3node_diy_desktop_72.png](img/3node_diy_desktop_72.png) + +Run the command *sudo wipefs -a /dev/sda*. This will wipe the disk *sda*. + +![3node_diy_desktop_73.png](img/3node_diy_desktop_73.png) + +If you run the command *lsblk* once more, you should see that your SSD disk has no more partitions. The disk has been properly wiped. + +![3node_diy_desktop_74.png](img/3node_diy_desktop_74.png) + +Power off the computer by selecting *Power Off* after having clicked on the button at the top right of the screen as shown below. + +![3node_diy_desktop_75.png](img/3node_diy_desktop_75.png) + +That's it! The disks are all wiped. All that is left now is to set the BIOS/UEFI settings and then boot the 3Node! + + + +
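For reference, the terminal steps from this section boil down to a short sequence; a sketch assuming your SSD shows up as `/dev/sda` as in the screenshots (always confirm with `lsblk` first, since the wipe is irreversible):

```bash
# Summary of the disk-wiping steps shown above (destructive command commented out).
DISK="sda"        # assumption: the SSD device name reported by lsblk, NOT the USB key

lsblk || true     # list disks and partitions; confirm which device is the SSD
echo "Target disk: /dev/${DISK}"
# sudo wipefs -a "/dev/${DISK}"   # erases all partition-table/filesystem signatures
```

Running `wipefs` without `-a` only lists the signatures it would remove, which is a safe way to double-check before the real wipe.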
+ +# 5. Set the BIOS/UEFI + +Before booting the main operating system, in our case Zero-OS, a computer will boot in either BIOS or UEFI mode. Older systems use BIOS and newer systems use UEFI. Both BIOS and UEFI are low-level software needed for the hardware and the main OS of the computer to interact. Note that BIOS is also called Legacy BIOS. + +## The Essential Features of BIOS/UEFI for a 3Node + + + +There are certain settings that you should make sure are set properly on your 3Node. + +As general advice, you can load the default settings in your BIOS, then make sure the options below are set properly. + +* Choose the correct combination of BIOS/UEFI and bootstrap image on [https://bootstrap.grid.tf/](https://bootstrap.grid.tf/) + * Newer systems will use UEFI --> the Dell Optiplex 7020 uses UEFI + * Bootstrap image: *EFI IMG* and *EFI FILE* + * Older systems will use Legacy BIOS + * Bootstrap image: *ISO* and *USB* +* Set *Multi-Processor* and *Hyperthreading* to Enabled + * Sometimes it will be labeled *Virtual Cores* or *Logical Cores*. +* Set *Virtualization* to Enabled + * On Intel, it is denoted as *CPU Virtualization* and on ASUS, it is denoted as *SVM*. + * Make sure virtualization is enabled and look for the precise terms in your specific BIOS/UEFI. +* Enable *Network Stack* (sometimes called *Network Boot*) +* Set *AC Recovery* to *Last Power State* + * This will make sure your 3Node restarts automatically after momentarily losing power. +* Select the proper *Boot Sequence* for the 3Node to boot Zero-OS from your bootstrap image + * e.g., if you have a USB key as a bootstrap image, select it in *Boot Sequence* +* Set *Server Lookup Method* (or the equivalent) to *DNS*. + * Only use Static IP if you know what you are doing. + * Your router will automatically assign a dynamic IP address to your 3Node when it connects to the Internet. +* Set *Client Address Method* (or the equivalent) to *DHCP*. Only use Static IP if you know what you are doing. 
+ * Your router will automatically assign a dynamic IP address to your 3Node when it connects to the Internet. +* *Secure Boot* should be left disabled + * Enable it only if you know what you are doing. + + + +
+ +## Set the BIOS/UEFI on a Dell Optiplex 7020 + + + +1. Insert your Zero-OS bootstrap image USB key in your computer and boot it. +2. During boot, press F12 to enter *Settings*, then choose *BIOS Setup*. + +![3node_diy_desktop_109.png](img/3node_diy_desktop_109.png) + +3. In BIOS Setup, click on *Load Defaults* and confirm by clicking on *OK* + +![3node_diy_desktop_115.png](img/3node_diy_desktop_115.png) + +4. Leave the BIOS Setup (Exit) and re-enter. This will apply the default settings. + +5. Go through each page and make sure you are following the guidelines set in the Essential Features section, as shown in the following pictures. + +![3node_diy_desktop_116.png](img/3node_diy_desktop_116.png) + +![3node_diy_desktop_117.png](img/3node_diy_desktop_117.png) + +![3node_diy_desktop_118.png](img/3node_diy_desktop_118.png) + +![3node_diy_desktop_114.png](img/3node_diy_desktop_114.png) + +![3node_diy_desktop_127.png](img/3node_diy_desktop_127.png) + +![3node_diy_desktop_120.png](img/3node_diy_desktop_120.png) + +![3node_diy_desktop_128.png](img/3node_diy_desktop_128.png) + +![3node_diy_desktop_122.png](img/3node_diy_desktop_122.png) + +![3node_diy_desktop_123.png](img/3node_diy_desktop_123.png) + +![3node_diy_desktop_129.png](img/3node_diy_desktop_129.png) + +![3node_diy_desktop_125.png](img/3node_diy_desktop_125.png) + + +6. Once you are done, click on *Exit* and then click *Yes* to save your changes. The 3Node will now boot Zero-OS. + +![3node_diy_desktop_126.png](img/3node_diy_desktop_126.png) + + + +
+ +# 6. Boot the 3Node + +If your BIOS/UEFI settings are set properly and you have the Zero-OS bootstrap image USB key plugged in, your 3Node should automatically boot Zero-OS every time it is turned on. + +1. Power on the 3Node with the Zero-OS bootstrap image USB key +2. Let the 3Node load Zero-OS + * The first time it boots, the 3Node will register to the TF Grid +3. Verify the 3Node's status on the ThreeFold Explorer + +The first time you boot a 3Node, you will see the message “This node is not registered (farmer: NameOfFarm)”. This is normal. The Grid will create a node ID and you will be able to see it on screen. This can take a couple of minutes. + +This is the final screen you should see when your 3Node is connected to the ThreeFold Grid. Note that it is normal if it is written *no public config* next to *PUB* as we did not set any public IP address. + +Naturally, your node ID as well as your farm ID and name will be shown. + +![3node_diy_desktop_130.png](img/3node_diy_desktop_130.png) + +Once you have your node ID, you can also go to the ThreeFold Dashboard to see your 3Node and verify that it is online. + + + +
+ +## Check the Node Status + +You can use the [Node Finder](../../../dashboard/deploy/node_finder.md) on the [TF Dashboard](https://dashboard.grid.tf/) to verify that the node is online. + +Enter your node ID and click **Apply**. + +## Farming Rewards Distribution + + + +The farming reward will be sent once per month directly to your ThreeFold Connect App wallet. Farming rewards are usually distributed around the 5th of each month. + + + +# Additional Information + +Congratulations, you have now built your first ThreeFold 3Node server! + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Telegram Farmer Group](https://t.me/threefoldfarmers). \ No newline at end of file diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/.done b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/.done new file mode 100644 index 0000000..58c5f19 --- /dev/null +++ b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/.done @@ -0,0 +1,19 @@ +3node_diy_desktop_111.png +3node_diy_desktop_114.png +3node_diy_desktop_115.png +3node_diy_desktop_116.png +3node_diy_desktop_117.png +3node_diy_desktop_118.png +3node_diy_desktop_119.png +3node_diy_desktop_120.png +3node_diy_desktop_121.png +3node_diy_desktop_122.png +3node_diy_desktop_123.png +3node_diy_desktop_124.png +3node_diy_desktop_125.png +3node_diy_desktop_126.png +3node_diy_desktop_127.png +3node_diy_desktop_128.png +3node_diy_desktop_129.png +3node_diy_desktop_130.png +farming_30.png diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_104.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_104.jpg new file mode 100644 index 0000000..80e850c Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_104.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_105.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_105.jpg new file mode 100644 index 0000000..088454a Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_105.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_106.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_106.jpg new file mode 100644 index 0000000..a24ff11 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_106.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_107.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_107.jpg new file mode 100644 index 0000000..8ab6fdb Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_107.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_108.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_108.png new file mode 100644 index 0000000..b426984 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_108.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_109.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_109.png new file mode 100644 index 0000000..45841c5 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_109.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_110.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_110.png new file mode 100644 index 0000000..71377f1 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_110.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_111.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_111.png new file mode 100644 index 0000000..7b5eeb4 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_111.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_112.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_112.png new file mode 100644 index 0000000..ef5fcb3 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_112.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_114.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_114.png new file mode 100644 index 0000000..8ce20bd Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_114.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_115.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_115.png new file mode 100644 index 0000000..ca2260e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_115.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_116.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_116.png new file mode 100644 index 0000000..65cf981 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_116.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_117.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_117.png new file mode 100644 index 0000000..8c67944 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_117.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_118.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_118.png new file mode 100644 index 0000000..726b43c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_118.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_119.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_119.png new file mode 100644 index 0000000..2dff9b7 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_119.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_120.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_120.png new file mode 100644 index 0000000..507141d Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_120.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_121.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_121.png new file mode 100644 index 0000000..53775ac Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_121.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_122.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_122.png new file mode 100644 index 0000000..8a2fbab Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_122.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_123.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_123.png new file mode 100644 index 0000000..77c28e2 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_123.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_124.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_124.png new file mode 100644 index 0000000..a4b7660 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_124.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_125.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_125.png new file mode 100644 index 0000000..d23c25a Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_125.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_126.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_126.png new file mode 100644 index 0000000..c35857e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_126.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_127.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_127.png new file mode 100644 index 0000000..1a0249c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_127.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_128.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_128.png new file mode 100644 index 0000000..2e0c8c7 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_128.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_129.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_129.png new file mode 100644 index 0000000..734aa84 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_129.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_13.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_13.jpg new file mode 100644 index 0000000..94ffab0 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_13.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_130.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_130.png new file mode 100644 index 0000000..0630b55 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_130.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_23.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_23.jpg new file mode 100644 index 0000000..b4a953a Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_23.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_25.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_25.jpg new file mode 100644 index 0000000..66fa069 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_25.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_26.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_26.jpg new file mode 100644 index 0000000..f773e8f Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_26.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_27.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_27.jpg new file mode 100644 index 0000000..e8eaf53 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_27.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_28.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_28.jpg new file mode 100644 index 0000000..8333b2b Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_28.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_29.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_29.jpg new file mode 100644 index 0000000..b86b9af Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_29.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.jpg 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.jpg new file mode 100644 index 0000000..30456be Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.png new file mode 100644 index 0000000..7442f5d Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_30.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_30.jpg new file mode 100644 index 0000000..622d1f9 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_30.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_31.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_31.jpg new file mode 100644 index 0000000..2ffe2de Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_31.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_38.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_38.jpg new file mode 100644 index 0000000..a818dd6 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_38.jpg differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_40.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_40.jpg new file mode 100644 index 0000000..3093e7e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_40.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_42.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_42.png new file mode 100644 index 0000000..e7336f6 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_42.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_43.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_43.png new file mode 100644 index 0000000..1c139d8 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_43.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_44.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_44.png new file mode 100644 index 0000000..dae09b3 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_44.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_45.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_45.png new file mode 100644 index 0000000..668e6eb Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_45.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_48.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_48.png new file mode 100644 index 0000000..b789d93 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_48.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_49.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_49.png new file mode 100644 index 0000000..057fbc9 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_49.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_50.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_50.png new file mode 100644 index 0000000..17b0329 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_50.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_52.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_52.png new file mode 100644 index 0000000..b19d799 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_52.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_53.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_53.png new file mode 100644 index 0000000..600c1ec Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_53.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_54.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_54.png new file mode 100644 index 0000000..56a5237 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_54.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_55.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_55.png new file mode 100644 index 0000000..81da31e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_55.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_56.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_56.png new file mode 100644 index 0000000..a2590e7 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_56.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.jpg new file mode 100644 index 0000000..8bd003e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.jpg differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.png new file mode 100644 index 0000000..d893e54 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_67.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_67.png new file mode 100644 index 0000000..46ae916 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_67.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_68.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_68.png new file mode 100644 index 0000000..0842c19 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_68.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_69.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_69.png new file mode 100644 index 0000000..2d2b6b9 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_69.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_70.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_70.png new file mode 100644 index 0000000..192a5d7 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_70.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_71.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_71.png new file mode 100644 index 0000000..7609bb2 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_71.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_72.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_72.png new file mode 100644 index 0000000..14efc8b Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_72.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_73.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_73.png new file mode 100644 index 0000000..d645b73 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_73.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_74.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_74.png new file mode 100644 index 0000000..439fdb7 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_74.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_75.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_75.png new file mode 100644 index 0000000..7d7c8d2 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_75.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.jpg new file mode 100644 index 0000000..84cf1b0 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.png new file mode 100644 index 0000000..263ddf7 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_77.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_77.jpg new file mode 100644 index 0000000..25ed402 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_77.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_78.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_78.jpg new file mode 100644 index 0000000..0c61dc3 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_78.jpg differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_80.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_80.jpg new file mode 100644 index 0000000..9658fb5 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_80.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_81.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_81.jpg new file mode 100644 index 0000000..9e69a2c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_81.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_83.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_83.jpg new file mode 100644 index 0000000..b98ed0b Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_83.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_86.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_86.jpg new file mode 100644 index 0000000..c3a77b6 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_86.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_88.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_88.jpg new file mode 100644 index 0000000..0ed0169 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_88.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_89.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_89.jpg new file mode 100644 index 0000000..8711387 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_89.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_90.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_90.jpg new file mode 100644 index 0000000..41e9d80 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_90.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_91.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_91.jpg new file mode 100644 index 0000000..836ffae Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_91.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_92.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_92.jpg new file mode 100644 index 0000000..a671268 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_92.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_94.jpg 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_94.jpg new file mode 100644 index 0000000..1cf820e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_94.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_95.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_95.jpg new file mode 100644 index 0000000..0e8fcb2 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_95.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_96.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_96.jpg new file mode 100644 index 0000000..71f9646 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_96.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_97.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_97.jpg new file mode 100644 index 0000000..782eaea Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_97.jpg differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_98.jpg b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_98.jpg new file mode 100644 index 0000000..187342c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_98.jpg differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_002.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_002.png new file mode 100644 index 0000000..415a4b3 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_002.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_003.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_003.png new file mode 100644 index 0000000..47d3e2f Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_003.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_004.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_004.png new file mode 100644 index 0000000..0e1aa75 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_004.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_005.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_005.png new file mode 100644 index 0000000..6acfa59 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_005.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_006.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_006.png new file mode 100644 index 0000000..f874fcf Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_006.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_25.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_25.png new file mode 100644 index 0000000..d85e2bf Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_25.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_26.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_26.png new file mode 100644 index 0000000..3a22175 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_26.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_27.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_27.png new file mode 100644 index 0000000..dd04e45 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_27.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_28.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_28.png new file mode 100644 index 0000000..36f2eda Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_28.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_29.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_29.png new file mode 100644 index 0000000..e7d6d40 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_29.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_10.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_10.png new file mode 100644 index 0000000..deec259 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_10.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_11.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_11.png new file mode 100644 index 0000000..ceb60ae Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_11.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_12.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_12.png new file mode 100644 index 0000000..70e8053 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_12.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_13.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_13.png new file mode 100644 index 0000000..1016cb8 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_13.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_14.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_14.png new file mode 100644 index 0000000..7468258 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_14.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_15.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_15.png new file mode 100644 index 0000000..0a34bd0 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_15.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_16.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_16.png new file mode 100644 index 0000000..1e32275 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_16.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_17.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_17.png new file mode 100644 index 0000000..0ee82ea Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_17.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_18.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_18.png new file mode 100644 index 0000000..b3f4386 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_18.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_19.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_19.png new file mode 100644 index 0000000..74d7963 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_19.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_20.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_20.png new file mode 100644 index 0000000..05a80e8 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_20.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_9.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_9.png new file mode 100644 index 0000000..b6e852a Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_9.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_1.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_1.png new file mode 100644 index 0000000..f1b4b1e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_1.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_10.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_10.png new file mode 100644 index 0000000..e8c547b Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_10.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_11.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_11.png new file mode 100644 index 0000000..d0e58bc Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_11.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_12.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_12.png new file mode 100644 index 0000000..980e33f Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_12.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_13.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_13.png new file mode 100644 index 0000000..c4956c5 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_13.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_14.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_14.png new file mode 100644 index 0000000..f251968 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_14.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_15.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_15.png new file mode 100644 index 0000000..efd806e Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_15.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_16.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_16.png new file mode 100644 index 0000000..72d1994 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_16.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_17.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_17.png new file mode 100644 index 0000000..cc9ba7f Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_17.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_18.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_18.png new file mode 100644 index 0000000..8fe5d2f Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_18.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_19.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_19.png new file mode 100644 index 0000000..6c6796c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_19.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_2.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_2.png new file mode 100644 index 0000000..eccb743 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_2.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_20.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_20.png new file mode 100644 index 0000000..da5c0ca Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_20.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_21.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_21.png new file mode 100644 index 0000000..27abf0c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_21.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_22.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_22.png new file mode 100644 index 0000000..adde6b0 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_22.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_23.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_23.png new file mode 100644 index 0000000..aabf0a0 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_23.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_24.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_24.png new file mode 100644 index 0000000..9f09996 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_24.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_25.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_25.png new file mode 100644 index 0000000..c37a89c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_25.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_26.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_26.png new file mode 100644 index 0000000..5cb2f75 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_26.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_27.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_27.png new file mode 100644 index 0000000..c790ae0 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_27.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_28.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_28.png new file mode 100644 index 0000000..2648b65 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_28.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_29.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_29.png new file mode 100644 index 0000000..52a66cd Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_29.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_3.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_3.png new file mode 100644 index 0000000..f0b992c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_3.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_30.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_30.png new file mode 100644 index 0000000..2e30308 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_30.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_31.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_31.png new file mode 100644 index 0000000..ff1b9ac Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_31.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_32.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_32.png new file mode 100644 index 0000000..ab617bf Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_32.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_33.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_33.png new file mode 100644 index 0000000..3c6d591 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_33.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_34.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_34.png new file mode 100644 index 0000000..944688f Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_34.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_35.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_35.png new file mode 100644 index 0000000..845bd35 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_35.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_36.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_36.png new file mode 100644 index 0000000..b294bcb Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_36.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_37.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_37.png new file mode 100644 index 0000000..c61e06e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_37.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_38.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_38.png new file mode 100644 index 0000000..732f98a Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_38.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_39.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_39.png new file mode 100644 index 0000000..0c78bc0 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_39.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_4.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_4.png new file mode 100644 index 0000000..5047a5c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_4.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_40.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_40.png new file mode 100644 index 0000000..6651627 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_40.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_41.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_41.png new file mode 100644 index 0000000..839e929 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_41.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_42.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_42.png new file mode 100644 index 0000000..5f84480 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_42.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_43.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_43.png new file mode 100644 index 0000000..ba1e751 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_43.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_44.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_44.png new file mode 100644 index 0000000..4a10071 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_44.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_45.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_45.png new file mode 100644 index 0000000..0d6e86d Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_45.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_46.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_46.png new file mode 100644 index 0000000..2039941 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_46.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_47.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_47.png new file mode 100644 index 0000000..71ea72e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_47.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_48.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_48.png new file mode 100644 index 0000000..fd357c7 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_48.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_49.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_49.png new file mode 100644 index 0000000..0f82c97 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_49.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_5.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_5.png new file mode 100644 index 0000000..6c960d9 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_5.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_50.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_50.png new file mode 100644 index 0000000..0ecbc06 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_50.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_51.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_51.png new file mode 100644 index 0000000..86ecf04 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_51.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_52.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_52.png new file mode 100644 index 0000000..9d2b5a4 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_52.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_53.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_53.png new file mode 100644 index 0000000..141c24d Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_53.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_54.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_54.png new file mode 100644 index 0000000..2c97d1b Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_54.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_55.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_55.png new file mode 100644 index 0000000..2d7fe0e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_55.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_56.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_56.png new file mode 100644 index 0000000..e090e08 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_56.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_57.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_57.png new file mode 100644 index 0000000..b9289a4 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_57.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_58.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_58.png new file mode 100644 index 0000000..a78d6d8 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_58.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_59.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_59.png new file mode 100644 index 0000000..4f6e7f2 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_59.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_6.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_6.png new file mode 100644 index 0000000..1ccab0c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_6.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_7.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_7.png new file mode 100644 index 0000000..be8eea7 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_7.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_8.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_8.png new file mode 100644 index 0000000..1239f37 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_8.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_9.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_9.png new file mode 100644 index 0000000..4a6acad Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_9.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_10.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_10.png new file mode 100644 index 0000000..7ae95fc Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_10.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_11.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_11.png new file mode 100644 index 0000000..6ce3313 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_11.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_12.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_12.png new file mode 100644 index 0000000..b26559e Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_12.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_13.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_13.png new file mode 100644 index 0000000..9486799 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_13.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_5.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_5.png new file mode 100644 index 0000000..e5fc90b Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_5.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_7.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_7.png new file mode 100644 index 0000000..ce1f1ba Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_7.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_8.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_8.png new file mode 100644 index 0000000..b8039d9 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_8.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_9.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_9.png new file mode 100644 index 0000000..882bcf7 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_9.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/readme.md b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/readme.md new file mode 100644 index 0000000..6f45a2d --- 
/dev/null +++ b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/img/readme.md @@ -0,0 +1 @@ +# Images of Farming Guide documentation for Threefold Manual 3.0 diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/readme.md b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/readme.md new file mode 100644 index 0000000..362d32f --- /dev/null +++ b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_desktop/readme.md @@ -0,0 +1,19 @@ +# DIY 3node Desktop for the Threefold Manual 3.0 +* The **diy_3node_desktop.md** file + * contains the easiest DIY 3node guide possible + * no upgrades + * no updates + +* The **diy_3node_desktop.pdf** file + * can be used as an offline reference + * can be shared among the Threefold community + +Updates: This DIY Guide will be turned into a short 1 minute video to share. + + +# Threefold Ebooks + +1. FAQ +2. Complete Farming Guide +3. DIY 3node Desktop Computer - Farming Guide +4. DIY 3node Rack Server - Farming Guide \ No newline at end of file diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/3node_diy_rack_server.md b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/3node_diy_rack_server.md new file mode 100644 index 0000000..3686915 --- /dev/null +++ b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/3node_diy_rack_server.md @@ -0,0 +1,373 @@ +

Building a DIY 3Node: Rack Server

+ +In the following 3Node DIY guide, you will learn how to turn a Dell server (R620, R720) into a 3Node that farms on the ThreeFold Grid. + +Note that the process is similar for other rack servers. + +

Table of Contents

+ + + +- [Setting Up the Hardware](#setting-up-the-hardware) + - [Avoiding Static Discharge](#avoiding-static-discharge) + - [Setting the M.2 NVME SSD Disk with the PCIe Adaptor](#setting-the-m2-nvme-ssd-disk-with-the-pcie-adaptor) + - [Checking the RAM sticks](#checking-the-ram-sticks) + - [General Rules when Installing RAM Sticks](#general-rules-when-installing-ram-sticks) + - [Procedure to Install RAM Sticks](#procedure-to-install-ram-sticks) + - [Installing the SSD Disks](#installing-the-ssd-disks) + - [Plugging the 3node Server](#plugging-the-3node-server) + - [Removing the DVD Optical Drive - Installing a SSD disk in the DVD Optical Drive Slot](#removing-the-dvd-optical-drive---installing-a-ssd-disk-in-the-dvd-optical-drive-slot) + - [Using Onboard Storage - RAID Controller Details](#using-onboard-storage---raid-controller-details) +- [Zero-OS Bootstrap Image](#zero-os-bootstrap-image) + - [Creating a Farm](#creating-a-farm) + - [Using Dashboard](#using-dashboard) + - [Using TF Connect App](#using-tf-connect-app) + - [Wiping All the Disks](#wiping-all-the-disks) + - [Downloading the Zero-OS Bootstrap Image](#downloading-the-zero-os-bootstrap-image) + - [DVD ISO BIOS Image](#dvd-iso-bios-image) + - [USB BIOS Image](#usb-bios-image) +- [BIOS Settings](#bios-settings) + - [Processor Settings](#processor-settings) + - [Boot Settings](#boot-settings) +- [Booting the 3Node](#booting-the-3node) +- [Additional Information](#additional-information) + - [Differences between the R620 and the R720](#differences-between-the-r620-and-the-r720) + - [Different CPUs and RAMs Configurations for 3Node Dell Servers](#different-cpus-and-rams-configurations-for-3node-dell-servers) +- [Closing Words](#closing-words) + +*** + +# Setting Up the Hardware + +![3node_diy_rack_server_1](./img/3node_diy_rack_server_1.png) + +Dell R620 1U server + + +## Avoiding Static Discharge + +![3node_diy_rack_server_2](./img/3node_diy_rack_server_2.png) + + +Some will recommend wearing anti-static
gloves as shown here. If you don’t have anti-static gloves, remember this: + +> Always touch the metal side of the server before manipulating the hardware. + +Your hands will discharge the static on the outside of the case, which is safe. + +## Setting the M.2 NVME SSD Disk with the PCIe Adaptor + +![3node_diy_rack_server_3](./img/3node_diy_rack_server_3.png) + +Here is one of the two 2TB M.2 NVMe SSDs that we will install on the server. Above the SSD is the PCIe Gen 3 x4 adaptor that we will use to connect the SSD to the server. + +![3node_diy_rack_server_4](./img/3node_diy_rack_server_4.png) + +You can see at the left of the adaptor that there is a metal piece that can be used to hold the PCIe adaptor and the SSD more firmly. We will remove it for this DIY build. Why? Because it is not necessary, as the adaptor can hold the weight of the SSD. Also, this metal piece is solid, while the brackets in the server have holes in them. Removing it will ensure better airflow and thus less heat. + +![3node_diy_rack_server_5](./img/3node_diy_rack_server_5.png) + +We remove the screws with a star screwdriver. + +![3node_diy_rack_server_6](./img/3node_diy_rack_server_6.png) + +![3node_diy_rack_server_7](./img/3node_diy_rack_server_7.png) + +This SSD already has a heatsink, so there is no need to use the heatsink included in the PCIe adaptor kit. If you remove the heatsink or the sticker under the SSD, you will void your 5-year warranty. + +![3node_diy_rack_server_8](./img/3node_diy_rack_server_8.png) + +When you put the SSD in the adaptor, make sure the notch in the SSD lines up with the key in the adaptor’s slot. + +![3node_diy_rack_server_9](./img/3node_diy_rack_server_9.png) + +![3node_diy_rack_server_10](./img/3node_diy_rack_server_10.png) + +Fitting in the SSD takes some force. Do not overdo it and take your time! + +![3node_diy_rack_server_11](./img/3node_diy_rack_server_11.png) + +It’s normal for the unscrewed end of the SSD to lift in the air before you screw it onto the adaptor.
+ +![3node_diy_rack_server_12](./img/3node_diy_rack_server_12.png) + +To screw the SSD in place, use the screwdriver included in the PCIe adaptor kit. + +![3node_diy_rack_server_13](./img/3node_diy_rack_server_13.png) + +![3node_diy_rack_server_14](./img/3node_diy_rack_server_14.png) + +Now that’s a steady SSD! + +## Checking the RAM sticks + +![3node_diy_rack_server_15](./img/3node_diy_rack_server_15.png) + +It’s now time to get under the hood! Make sure the case latch is in the unlocked position. If you need to turn it to the unlocked position, use a flathead screwdriver or a similar tool. + +![3node_diy_rack_server_16](./img/3node_diy_rack_server_16.png) + +![3node_diy_rack_server_17](./img/3node_diy_rack_server_17.png) + +Lift up the lock and the top server plate should glide to the back. You can then remove the top of the server. + +![3node_diy_rack_server_18](./img/3node_diy_rack_server_18.png) + +Here’s the full story! R620 and all! + +![3node_diy_rack_server_19](./img/3node_diy_rack_server_19.png) + +![3node_diy_rack_server_20](./img/3node_diy_rack_server_20.png) + +![3node_diy_rack_server_21](./img/3node_diy_rack_server_21.png) + +To remove this plastic piece, simply lift with your fingers at the designated spot (follow the blue line!). + +![3node_diy_rack_server_22](./img/3node_diy_rack_server_22.png) + +Here’s the RAM! This R620 came already equipped with 256GB of RAM spread across 16x16GB sticks. If you need to add the RAM sticks yourself, make sure you are doing it correctly. The FAQ covers some basic information for RAM installation. + +![3node_diy_rack_server_23](./img/3node_diy_rack_server_23.png) + +To remove a stick, push on the clips on both sides. You can do one side at a time if you want. Make sure the stick doesn’t pop out and fall on another component! Once the clips are open, pull out the RAM stick by holding it on the sides. This will ensure it does not get damaged.
+ +![3node_diy_rack_server_24](./img/3node_diy_rack_server_24.png) + +Here’s the RAM in its purest form! + +![3node_diy_rack_server_25](./img/3node_diy_rack_server_25.png) + +Here you can see that the notch is not in the middle of the RAM stick. You must be careful when inserting the RAM: make sure the notch is aligned with the key in the RAM slot. + +![3node_diy_rack_server_26](./img/3node_diy_rack_server_26.png) + +When you want to put a RAM stick in its slot, make sure the plastic holders on the sides are open and insert the RAM stick. Make sure you align the RAM stick properly. You can then push on one side at a time until the RAM stick clicks in. You can do both sides at once if you are at ease. + +### General Rules when Installing RAM Sticks + +First, always use RAM sticks of the same size and type. Your motherboard documentation should note which slots to populate first. + +As a general guide, there are usually two banks, A and B, each with two memory stick slots. You must then install the RAM sticks in A1 and B1 in order to achieve dual channel, then A2 and B2 if you have more (visual order: A1 A2 B1 B2). + +#### Procedure to Install RAM Sticks + +You want to start with your largest sticks, evenly distributed between both processors, and work your way down to your smallest. + +As an example, let's say you have 2 processors, 4x 16GB sticks and 4x 8GB sticks. The arrangement would be A1-16GB, B1-16GB, A2-16GB, B2-16GB, A3-8GB, B3-8GB, A4-8GB, B4-8GB. + +Avoid odd numbers as well. You optimally want pairs. So if you only have 5x 8GB sticks, only install 4 of them until you have an even 6. + + +## Installing the SSD Disks + + +![3node_diy_rack_server_27](./img/3node_diy_rack_server_27.png) + +To put back the plastic protector, simply align the plastic piece with the two notches in the metal case.
+ +![3node_diy_rack_server_28](./img/3node_diy_rack_server_28.png) + +![3node_diy_rack_server_29](./img/3node_diy_rack_server_29.png) + +We will now remove this PCIe riser in order to connect the SSDs. + +![3node_diy_rack_server_30](./img/3node_diy_rack_server_30.png) + +Optional step: put the SSDs and the PCIe riser next to each other so they can talk and break the ice. They will get to know one another before going into the server to farm TFT. + +![3node_diy_rack_server_31](./img/3node_diy_rack_server_31.png) + +Just like with RAM sticks, you want to make sure the adaptor is aligned with the slot. + +![3node_diy_rack_server_32](./img/3node_diy_rack_server_32.png) + +Next, push the adaptor into the riser’s opening. This takes some force too. If it is well aligned, it should go in with ease. + +![3node_diy_rack_server_33](./img/3node_diy_rack_server_33.png) + +This is what the riser looks like with the two SSDs installed. Now you simply need to put the riser back inside the server. + +![3node_diy_rack_server_34](./img/3node_diy_rack_server_34.png) + +Push down on the riser to insert it properly. + +![3node_diy_rack_server_35](./img/3node_diy_rack_server_35.png) + +Note that the inside of the top plate of the server has great pictures showing how to manipulate the hardware. + + + +## Plugging the 3node Server + + + +![3node_diy_rack_server_36](./img/3node_diy_rack_server_36.png) + +Now you will want to plug the power cable into the PSU. Here we show two 495W PSUs. With 256GB of RAM and two NVMe SSDs, it is better to use two 750W PSUs. Note that this server will only use around 100W at idle. There are two power cables for redundancy. The unit does not need more than one to function. + +On a 15A/120V breaker, you can have more than one server. But note that, at full load, this server can use up to 400W. In this case, no more than 3 servers should be plugged into the same breaker.
Make sure you adapt to your current situation (server's power consumption, electric breaker, etc.). + +![3node_diy_rack_server_37](./img/3node_diy_rack_server_37.png) + +Plugging in the power cable is pretty straightforward. Just make sure you have the 3 pins oriented properly! + +![3node_diy_rack_server_38](./img/3node_diy_rack_server_38.png) + +It is highly recommended to plug the power cable into a surge protector. If you have unsteady electricity at your location, it might be good to use a UPS (uninterruptible power supply). A surge protector is essential to prevent power surges from damaging the server. + +![3node_diy_rack_server_39](./img/3node_diy_rack_server_39.png) + +![3node_diy_rack_server_40](./img/3node_diy_rack_server_40.png) + +![3node_diy_rack_server_41](./img/3node_diy_rack_server_41.png) + +Before starting the server, you can plug in the monitor and the keyboard as well as the ethernet cable. Make sure you plug the ethernet cable into one of the four NIC ports. + +![3node_diy_rack_server_42](./img/3node_diy_rack_server_42.png) + +Now, power it on! + +![3node_diy_rack_server_43](./img/3node_diy_rack_server_43.png) + +The server is booting. + + +## Removing the DVD Optical Drive - Installing a SSD disk in the DVD Optical Drive Slot + + +![3node_diy_rack_server_44](./img/3node_diy_rack_server_44.png) + +![3node_diy_rack_server_45](./img/3node_diy_rack_server_45.png) + +If you want to replace the DVD optical drive, push where indicated and remove the power and SATA cables. + +It is possible to install an SSD disk in there. To do so, use a **9.5mm** SATA CD/DVD hard drive caddy and put in a SATA III 2.5" disk. The caddy is not strictly necessary: you could simply remove the standard CD/DVD caddy and plug in the SATA disk. + +The hardware part is done. Next, you will want to set the BIOS properly and get the Zero-OS bootstrap image. Before we get into this, here is some information on using the onboard storage of your 3node server.
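The breaker math above can be sketched quickly. This is only an illustration with assumed numbers (a 15A/120V circuit, a 400W worst-case draw per server, and the common 80% continuous-load rule); measure your own servers and check your local electrical code before relying on it.

```shell
# Assumed values: adjust to your breaker rating and measured server draw.
breaker_amps=15
line_volts=120
server_peak_watts=400   # worst-case draw of one server at full load

total_watts=$((breaker_amps * line_volts))        # 15A * 120V = 1800W
safe_watts=$((total_watts * 80 / 100))            # 80% continuous-load rule
max_servers=$((safe_watts / server_peak_watts))

echo "Safe continuous capacity: ${safe_watts}W"
echo "Max servers at ${server_peak_watts}W each: ${max_servers}"
```

This matches the rule of thumb above: at full load, no more than 3 such servers per 15A/120V breaker.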
+ + +## Using Onboard Storage - RAID Controller Details + + +If you want to use the onboard storage on your server, you will probably need to flash the RAID card or make some adjustments so that Zero-OS recognizes your disks. + +You can use the onboard storage on a server without RAID. You can [re-flash](https://fohdeesha.com/docs/perc.html) the RAID card, turn on HBA/non-RAID mode, or install a different card. It's usually easy to set servers such as an HP ProLiant to HBA mode. + +For Dell servers, you can either cross-flash the RAID controller with an “IT mode” firmware (see this [video](https://www.youtube.com/watch?v=h5nb09VksYw)) or get a Dell H310 controller (which has the non-RAID option). Otherwise, as shown in this guide, you can install an NVMe SSD with a PCIe adaptor and turn off the RAID controller. + +Note that for the Dell R610 and R710, you can re-flash the RAID card. For the R910, you can’t re-flash the card. In this case, you will need to get an LSI Dell card. + +# Zero-OS Bootstrap Image + +With R620 and R720 Dell servers, UEFI does not work well. You will want to use either a DVD or a USB key in BIOS mode. + +Go to https://bootstrap.grid.tf/ and download the appropriate image: option **ISO** for the DVD and option **USB** for BIOS USB boot (not UEFI). + +Enter your farm ID and make sure you select production mode. + +## Creating a Farm + +You can create a farm with either the ThreeFold Dashboard or the ThreeFold Connect app. + +### Using Dashboard + +The Dashboard section contains all the information required to [create a farm](../../../dashboard/farms/your_farms.md). + +### Using TF Connect App + +You can [create a ThreeFold farm](../../../threefold_token/storing_tft/tf_connect_app.md) with the ThreeFold Connect App. + +## Wiping All the Disks + +You might need to wipe your disks if they are not brand new. To wipe your disks, read the section [Wipe All the Disks](../../3node_building/4_wipe_all_disks.md) of the ThreeFold Farming Documentation.
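As a rough illustration of what a disk wipe does (the linked documentation above is the authoritative procedure), the sketch below zeroes a throwaway image file instead of a real disk. On a real 3Node the target would be the disk device itself (e.g. `/dev/sdX`), which irreversibly destroys all data on it, so triple-check the device name with `lsblk` first.

```shell
# Demo target: a 10 MiB throwaway file, NOT a real disk.
# On a real 3node you would point of= at the disk device (e.g. /dev/sdX),
# which erases it irreversibly. Verify the device name with lsblk first.
truncate -s 10M /tmp/demo-disk.img

# Overwrite the target with zeros, clearing any partition tables and
# filesystem signatures it contained.
dd if=/dev/zero of=/tmp/demo-disk.img bs=1M count=10 status=none
```

The same idea applies to a real disk; only the `of=` target changes.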
+ +## Downloading the Zero-OS Bootstrap Image + +You can then download the [Zero-OS bootstrap image](https://v3.bootstrap.grid.tf) for your farm. + +![3node_diy_rack_server_46](./img/3node_diy_rack_server_46.png) + +![3node_diy_rack_server_47](./img/3node_diy_rack_server_47.png) + +Use the ISO image for DVD boot and the USB image for USB BIOS boot (not UEFI). We use the farm ID 1 here as an example. Use your own farm ID. + +### DVD ISO BIOS Image +For the ISO image, download the file and burn it to a DVD. + +### USB BIOS Image +Note: the USB key must be formatted before burning the Zero-OS bootstrap image. + +For Windows, macOS and Linux, you can use [balenaEtcher](https://www.balena.io/etcher/), a free and open-source tool that lets you write the bootstrap image to a USB key while formatting the key at the same time. + +This is the **easiest way** to burn your Zero-OS bootstrap image. All the steps are clearly explained within the software. + +For Windows, you can also use Rufus. + +For the USB image, on Linux, you can also use the terminal and write: + +> dd status=progress if=FILELOCATION of=/dev/sdX + +Here, FILELOCATION is the path to the downloaded .ISO or .IMG file, and the X in sdX must be adjusted to match your disk. To see your disks, write **lsblk** in the terminal. Make sure you select the proper disk. + + +# BIOS Settings + +Before starting the server, plug in the USB bootstrap image. You can also insert the DVD once the server is on. + +When you start the server, press F2 to get into System Setup. + +Then, select System BIOS. In System BIOS Settings, select Processor Settings. + +Note: more details on BIOS settings are available in this [documentation](../../3node_building/5_set_bios_uefi.md). + +## Processor Settings + +Make sure you have enabled the Logical Processor option (called Hyper-Threading on HP servers). This turns 8 physical cores into 16 virtual cores. You can set QPI Speed to Maximum data rate. Make sure you set Number of Cores per Processor to All.
You can adjust the Processor Core Speed and Processor Bus Speed for specific uses. + +It is also good to take a look at the Processors section and make sure the hardware is listed correctly. + +## Boot Settings + +Go to System BIOS Settings and select Boot Settings. In Boot Settings, choose BIOS, not UEFI, as the Boot Mode. You need to save your preferences and come back to select BIOS Boot Settings. + +Once back in BIOS Boot Settings, go to Boot Sequence. Depending on your Zero-OS bootstrap image, select either the USB key or the Optical Drive CD-DVD option. The name of the USB key can be Drive C or something else, depending on where you plugged it in and on your server model. + +You can also disable the boot options that are not needed. It can be good to have both a DVD and a USB key with the bootstrap images for redundancy: if one boot fails, the computer will try the other options in the boot sequence. This can be done with 2 USB keys too. + +With Boot Sequence Retry enabled, the server will simply retry the boot sequence if the last attempt did not work. + + +That's it. You've set the BIOS settings properly and now it is time to boot the 3Node and connect to the ThreeFold Grid. + +You can then save your preferences and exit. Your server should restart and load the bootstrap image. + +# Booting the 3Node + +Once you've set the BIOS settings and restarted your computer, it will download the Zero-OS bootstrap image. This takes a couple of minutes. + +The first time you boot a 3Node, the screen will read: “This node is not registered (farmer: NameOfFarm)”. This is normal. The Grid will create a node ID and you will be able to see it on screen. This can take a couple of minutes. + +Once you have your node ID, you can also go to the ThreeFold Explorer to see your 3Node and verify that the connection is recognized by the Explorer. + +# Additional Information +## Differences between the R620 and the R720 + +Note that the main difference between the R620 and the R720 is that the former is a 1U and the latter a 2U server.
2U servers are usually less noisy and generate less heat than 1U servers since they have a greater volume. In the R720, the fans are bigger and thus less noisy. This can be an important factor to consider. Both offer great performance and work well with Zero-OS. + +## Different CPUs and RAMs Configurations for 3Node Dell Servers + +Different CPU and RAM configurations are possible for the Dell R620/R720 servers. + +For example, you could replace the E5-2640 v2 CPUs with E5-2695 v2 CPUs. This would give you 48 threads. You could then go with 12x32GB DDR3 LRDIMM. You would also need 5TB of SSD in total to keep the proper ratio, which is 100GB of SSD and 8GB of RAM per virtual core (also called thread or logical core). + +Note that you cannot have more than 16 sticks of ECC DIMM on the R620/R720. For more sticks, you need LRDIMM, as stated above. + +# Closing Words +That's it. You have now built a DIY 3Node and you are farming on the ThreeFold Grid. + +If you encounter errors, you can read the section [Troubleshooting and Error Messages](../../../faq/faq.md#troubleshooting-and-error-messages) of the Farmer FAQ. + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Telegram Farmer Group](https://t.me/threefoldfarmers). + +>Welcome to the New Internet! 
diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/.done b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/.done new file mode 100644 index 0000000..d776170 --- /dev/null +++ b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/.done @@ -0,0 +1,12 @@ +3node_diy_rack_server_15.png +3node_diy_rack_server_16.png +3node_diy_rack_server_18.png +3node_diy_rack_server_22.png +3node_diy_rack_server_23.png +3node_diy_rack_server_3.png +3node_diy_rack_server_30.png +3node_diy_rack_server_33.png +3node_diy_rack_server_37.png +3node_diy_rack_server_38.png +3node_diy_rack_server_40.png +farming_30.png diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_1.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_1.png new file mode 100644 index 0000000..be135fb Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_1.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_10.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_10.png new file mode 100644 index 0000000..59fd49b Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_10.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_11.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_11.png new file mode 100644 index 0000000..e83f1a7 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_11.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_12.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_12.png new file mode 100644 index 0000000..e74e59f Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_12.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_13.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_13.png new file mode 100644 index 0000000..0aeb0e5 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_13.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_14.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_14.png new file mode 100644 index 0000000..79b3798 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_14.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_15.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_15.png new file mode 100644 index 0000000..50f9191 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_15.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_16.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_16.png new file mode 100644 index 0000000..5f2c328 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_16.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_17.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_17.png new file mode 100644 index 0000000..92cef03 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_17.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_18.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_18.png new file mode 100644 index 0000000..a78b3f0 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_18.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_19.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_19.png new file mode 100644 index 0000000..78212d9 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_19.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_2.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_2.png new file mode 100644 index 0000000..d459bf3 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_2.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_20.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_20.png new file mode 100644 index 0000000..5cdb630 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_20.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_21.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_21.png new file mode 100644 index 0000000..84bfe28 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_21.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_22.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_22.png new file mode 100644 index 0000000..62e7ee4 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_22.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_23.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_23.png new file mode 100644 index 0000000..bdd4063 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_23.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_24.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_24.png new file mode 100644 index 0000000..c9363fd Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_24.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_25.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_25.png new file mode 100644 index 0000000..54cf093 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_25.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_26.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_26.png new file mode 100644 index 0000000..3c4ee6d Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_26.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_27.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_27.png new file mode 100644 index 0000000..3af18ad Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_27.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_28.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_28.png new file mode 100644 index 0000000..07f7558 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_28.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_29.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_29.png new file mode 100644 index 0000000..1f610e8 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_29.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_3.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_3.png new file mode 100644 index 0000000..aa7d30d Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_3.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_30.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_30.png new file mode 100644 index 0000000..965f8bf Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_30.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_31.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_31.png new file mode 100644 index 0000000..b72cfd2 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_31.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_32.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_32.png new file mode 100644 index 0000000..143efca Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_32.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_33.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_33.png new file mode 100644 index 0000000..705e98a Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_33.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_34.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_34.png new file mode 100644 index 0000000..47a2799 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_34.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_35.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_35.png new file mode 100644 index 0000000..68448ab Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_35.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_36.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_36.png new file mode 100644 index 0000000..20324e9 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_36.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_37.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_37.png new file mode 100644 index 0000000..88f7691 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_37.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_38.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_38.png new file mode 100644 index 0000000..b5210c5 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_38.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_39.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_39.png new file mode 100644 index 0000000..6804b91 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_39.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_4.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_4.png new file mode 100644 index 0000000..d5ae149 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_4.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_40.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_40.png new file mode 100644 index 0000000..671b240 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_40.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_41.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_41.png new file mode 100644 index 0000000..c090dc7 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_41.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_42.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_42.png new file mode 100644 index 0000000..04909fb Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_42.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_43.png 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_43.png new file mode 100644 index 0000000..5248051 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_43.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_44.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_44.png new file mode 100644 index 0000000..a6f1d64 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_44.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_45.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_45.png new file mode 100644 index 0000000..1dedf8c Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_45.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_46.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_46.png new file mode 100644 index 0000000..b0c1140 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_46.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_47.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_47.png new file mode 100644 index 0000000..9da0919 Binary files /dev/null and 
b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_47.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_5.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_5.png new file mode 100644 index 0000000..3b9f8d5 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_5.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_6.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_6.png new file mode 100644 index 0000000..d7da48a Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_6.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_7.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_7.png new file mode 100644 index 0000000..c464eae Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_7.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_8.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_8.png new file mode 100644 index 0000000..e9afd38 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_8.png differ diff --git 
a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_9.png b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_9.png new file mode 100644 index 0000000..0527f58 Binary files /dev/null and b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_9.png differ diff --git a/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/readme.md b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/readme.md new file mode 100644 index 0000000..b525533 --- /dev/null +++ b/collections/manual/documentation/farmers/complete_diy_guides/3node_diy_rack_server/img/readme.md @@ -0,0 +1 @@ +Image folder for /wethreepedia/3node_diy_rack_server diff --git a/collections/manual/documentation/farmers/complete_diy_guides/complete_diy_guides_readme.md b/collections/manual/documentation/farmers/complete_diy_guides/complete_diy_guides_readme.md new file mode 100644 index 0000000..c3b2360 --- /dev/null +++ b/collections/manual/documentation/farmers/complete_diy_guides/complete_diy_guides_readme.md @@ -0,0 +1,10 @@ +

Complete DIY 3Node Guides

+ +This section of the ThreeFold Farmers book presents two short guides detailing how to build a DIY 3Node. + +A perfect start for newcomers is the Desktop guide. If you want to build a bigger 3Node, the Rack Server guide may be the best fit for you! + +

Table of Contents

+ +- [3Node Desktop DIY Guide](./3node_diy_desktop/3node_diy_desktop.html) +- [3Node Rack Server DIY Guide](./3node_diy_rack_server/3node_diy_rack_server.html) \ No newline at end of file diff --git a/collections/manual/documentation/farmers/farmerbot/farmerbot_information.md b/collections/manual/documentation/farmers/farmerbot/farmerbot_information.md new file mode 100644 index 0000000..ef86e19 --- /dev/null +++ b/collections/manual/documentation/farmers/farmerbot/farmerbot_information.md @@ -0,0 +1,436 @@ + +

Farmerbot Additional Information

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Additional Information](#additional-information) + - [General Considerations](#general-considerations) + - [YAML Configuration File Template](#yaml-configuration-file-template) + - [Supported Commands and Flags](#supported-commands-and-flags) + - [Minimum specs to run the Farmerbot](#minimum-specs-to-run-the-farmerbot) + - [How to Prepare Your Farm for the Farmerbot with WOL](#how-to-prepare-your-farm-for-the-farmerbot-with-wol) + - [WOL Requirements](#wol-requirements) + - [Enabling WOL in the BIOS](#enabling-wol-in-the-bios) + - [ZOS Nodes and NIC](#zos-nodes-and-nic) + - [NIC Firmware and WOL](#nic-firmware-and-wol) + - [How to Move Your Farm to a Different Network](#how-to-move-your-farm-to-a-different-network) + - [The differences between power "state" and power "target"](#the-differences-between-power-state-and-power-target) + - [The differences between uptime, status and power state](#the-differences-between-uptime-status-and-power-state) + - [The sequence of events for a node managed by the Farmerbot](#the-sequence-of-events-for-a-node-managed-by-the-farmerbot) + - [The problematic states of a 3node set with the Farmerbot](#the-problematic-states-of-a-3node-set-with-the-farmerbot) + - [Using the ThreeFold Node Status Bot](#using-the-threefold-node-status-bot) + - [CPU overprovisioning](#cpu-overprovisioning) + - [Seed phrase and HEX secret](#seed-phrase-and-hex-secret) + - [Farmerbot directory tree](#farmerbot-directory-tree) + - [Dedicated Nodes and the Farmerbot](#dedicated-nodes-and-the-farmerbot) + - [Periodic wakeup](#periodic-wakeup) + - [Time period between random wakeups and power target update](#time-period-between-random-wakeups-and-power-target-update) + - [Upgrade to the new Farmerbot](#upgrade-to-the-new-farmerbot) + - [Set the Farmerbot without the mnemonics of a ThreeFold Dashboard account](#set-the-farmerbot-without-the-mnemonics-of-a-threefold-dashboard-account) +- [Maintenance](#maintenance) + - [See 
the power state and power target of 3Nodes](#see-the-power-state-and-power-target-of-3nodes) + - [With GraphQL](#with-graphql) + - [With Grid Proxy](#with-grid-proxy) + - [Change manually the power target of a 3Node](#change-manually-the-power-target-of-a-3node) + - [Properly reboot the node if power target "Down" doesn't work](#properly-reboot-the-node-if-power-target-down-doesnt-work) + - [Add a 3Node to a running Farmerbot](#add-a-3node-to-a-running-farmerbot) + - [Update the Farmerbot with a new release](#update-the-farmerbot-with-a-new-release) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +We present some general information concerning the Farmerbot, as well as some advice for proper maintenance and troubleshooting. + +# Additional Information + +We present additional information to complement the [Quick Guide](farmerbot_quick.md). + +## General Considerations + +The Farmerbot doesn't have to run physically in the farm, since it instructs nodes over RMB to power on and off. The Farmerbot should be running at all times. + +The Farmerbot uses the nodes in the farm to send WOL packets to the node that needs to wake up. For this reason, you need at least one node per farm to be powered on at all times. If you do not specify one node to be always on, the Farmerbot will randomly choose a node to stay on for each cycle. If all nodes in a subnet are powered off, there is no way nodes in other subnets can power them on again. + +Note that if you run the Farmerbot on your farm, it is logical to set the node running the Farmerbot as always on. In this case, it will always be this node that wakes up the other nodes. + +Currently, you can run only one Farmerbot per farm, and it runs on a single node at a time. + +Since you need at least one node to power up a second node, you can't use the Farmerbot with just one node. 
You need at least two 3Nodes in your farm to correctly use the Farmerbot. + +The Farmerbot gets its data completely from TFChain. This means that, unlike the previous version, the Farmerbot will not start all the nodes when it restarts. + +## YAML Configuration File Template + +The quick guide showed a simple form of the YAML configuration file. Here are all the parameters that can be set in the configuration file. + +``` +farm_id: "" +included_nodes: [optional, if no nodes are added then the farmerbot will include all nodes in the farm, farm should contain at least 2 nodes] + - "" +excluded_nodes: + - "" +never_shutdown_nodes: + - "" +power: + periodic_wake_up_start: "" + wake_up_threshold: "" + periodic_wake_up_limit: "" + overprovision_cpu: "" +``` + +## Supported Commands and Flags + +We present the different commands for the Farmerbot. + +- `start`: to start (power on) a node + +```bash +farmerbot start --node -m -n dev -d +``` + +Where: + +```bash +Flags: + --node uint32 the node ID you want to use + +Global Flags: +-d, --debug by setting this flag the farmerbot will print debug logs too +-m, --mnemonic string the mnemonic of the account of the farmer +-n, --network string the grid network to use (default "main") +-s, --seed string the hex seed of the account of the farmer +-k, --key-type string key type for mnemonic (default "sr25519") +``` + +- `start all`: to start (power on) all nodes in a farm + +```bash +farmerbot start all --farm -m -n dev -d +``` + +Where: + +```bash +Flags: + --farm uint32 the farm ID whose nodes you want to start + +Global Flags: +-d, --debug by setting this flag the farmerbot will print debug logs too +-m, --mnemonic string the mnemonic of the account of the farmer +-n, --network string the grid network to use (default "main") +-s, --seed string the hex seed of the account of the farmer +-k, --key-type string key type for mnemonic (default "sr25519") +``` + +- `version`: to get the current version of the Farmerbot + +```bash 
+farmerbot version +``` + +## Minimum specs to run the Farmerbot + +The Farmerbot can run on any computer or server; it could even run on a laptop. As long as it has an internet connection, the Farmerbot will work fine. + +The Farmerbot runs fine on a VM with a single vcore and 2GB of RAM. For storage, you need room for Docker and its dependencies. Thus 1 or 2GB of free storage, with the OS already installed, should be sufficient. + +## How to Prepare Your Farm for the Farmerbot with WOL + +ZOS can utilize 2 NICs (Network Interface Cards) of a node (server, workstation, desktop, etc.). The first NIC on the motherboard will always be what we call the ZOS/dmz NIC; the second one is used for public configs (Gateway, public IPs for workloads, etc.). So if you don't have public IPs in your farm, only the first NIC of your ZOS node will be used. This subnet is where the Farmerbot operates. If you do have public IPs, the same applies. + +Wake-on-LAN (WOL) is used to remotely boot (start) a ZOS node that was shut down by the Farmerbot. It works by sending what is called a 'magic packet' to the NIC MAC address of a ZOS node. If that NIC is set up correctly, i.e. 'listening' for the packet, the node will start up, POST and boot ZOS. The Farmerbot keeps a list of MAC addresses for the nodes under its management, so it knows where to send the packet when required. + +## WOL Requirements + +WOL comes with a few requirements. We list them in the sections that follow. + +### Enabling WOL in the BIOS + +Enable WOL in the BIOS of your ZOS node. + +A ZOS node must be capable of doing WOL. Have a look at your node's hardware/BIOS manual, and if WOL is supported, make sure to enable it in the BIOS! A bit of research will quickly tell you how to enable it for your hardware. Some older motherboards do not support this; sometimes you are lucky and they do after a BIOS upgrade, but that is brand/model specific. 
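The 'magic packet' mentioned earlier has a simple, well-known layout: 6 bytes of 0xFF followed by the target MAC address repeated 16 times (102 bytes in total), typically broadcast over UDP. As an illustration only (this is not the Farmerbot's actual code, and the MAC address shown is hypothetical), a minimal Go sketch could look like this:

```go
package main

import (
	"fmt"
	"net"
)

// magicPacket builds a WOL "magic packet": 6 bytes of 0xFF
// followed by the target MAC address repeated 16 times.
func magicPacket(mac string) ([]byte, error) {
	hw, err := net.ParseMAC(mac)
	if err != nil {
		return nil, err
	}
	pkt := make([]byte, 0, 102)
	for i := 0; i < 6; i++ {
		pkt = append(pkt, 0xFF)
	}
	for i := 0; i < 16; i++ {
		pkt = append(pkt, hw...)
	}
	return pkt, nil
}

// wake broadcasts the magic packet over UDP on the local subnet.
// Port 9 (the "discard" port) is a common convention for WOL.
func wake(mac string) error {
	pkt, err := magicPacket(mac)
	if err != nil {
		return err
	}
	conn, err := net.Dial("udp", "255.255.255.255:9")
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = conn.Write(pkt)
	return err
}

func main() {
	// Hypothetical MAC address, for illustration only.
	pkt, err := magicPacket("aa:bb:cc:dd:ee:ff")
	if err != nil {
		fmt.Println("invalid MAC:", err)
		return
	}
	fmt.Println("magic packet length:", len(pkt)) // 102 bytes
	// To actually send it on your LAN: wake("aa:bb:cc:dd:ee:ff")
}
```

Because the packet is addressed at layer 2, it only reaches machines on the same broadcast domain, which is exactly why the subnet requirement below matters.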
+ +Some examples: + +![farmerbot_bios_1|517x291](img/farmerbot_bios_1.jpeg) + +![farmerbot_bios_2|499x375](img/farmerbot_bios_2.jpeg) + +### ZOS Nodes and NIC + +All your ZOS nodes and their first NIC (ZOS/dmz) should be in the same network subnet (also called network segment or broadcast domain). + +This requires some basic network knowledge. WOL packets cannot be sent across different subnets by default. It can be done, but it requires specific configuration on the firewall that connects the two subnets; cross-subnet WOL is currently not supported by the Farmerbot, however. + +A 'magic' WOL packet is sent only on networking layer 2 (L2, the 'data link layer'), based on the MAC address, not on L3 based on the IP address. This is why all nodes that should be brought up via WOL need to be in the same subnet. + +You can check whether this is the case as follows: if, for example, one node has the IP 192.168.0.20/24, then all other nodes should have an IP between 192.168.0.1 and 192.168.0.254. You can easily calculate subnet ranges here: https://www.tunnelsup.com/subnet-calculator/ + +So for the 192.168.0.0/24 example, you can see the range under 'Usable Host Range': + +![farmerbot_bios_3|499x500](img/farmerbot_bios_3.png) + +### NIC Firmware and WOL + +Some NICs require WOL to be set in the NIC firmware. + +This is fully handled by ZOS. Every time ZOS boots, it will enable WOL on links that require it. So if a ZOS node is added to a Farmerbot, it will have WOL enabled on its NIC when it is turned off (by the Farmerbot). + +Your Farmerbot can run on any system, including on a node. It doesn't have to be on the same network subnet as the nodes of the farm. The nodes of the farm, on the other hand, have to be in the same LAN. Don't hesitate to ask your technical questions here; we and the community will help you set things up! + +## How to Move Your Farm to a Different Network + +Note that the Farmerbot is currently available for Dev Net, QA Net, Test Net and Main Net. 
Thus, it might not be necessary to move your farm to a different network.
+
+To move your farm to a different network, you need to create a new bootstrap image for the new network instead of your current network. You should also wipe your 3Nodes' disks before moving to a different network.
+
+To download the Zero-OS bootstrap image, go to the usual bootstrap link [https://v3.bootstrap.grid.tf/](https://v3.bootstrap.grid.tf/) and select the network you want.
+
+![test_net|690x422](img/farmerbot_5.png)
+
+Once you have your new bootstrap image for the new network, [wipe your disks](../3node_building/4_wipe_all_disks.md), insert the new bootstrap image and reboot the 3Node.
+
+## The differences between power "state" and power "target"
+
+The target is what is set by the Farmerbot, or can be set manually by the farmer on TF Chain. The power state can only be set by the node itself, in response to power targets it observes on chain.
+
+## The differences between uptime, status and power state
+
+There are three distinctly named endpoints or fields that exist in the back end systems:
+
+* Uptime
+  * The number of seconds the node was up, as of its last uptime report. This is the same on GraphQL and Grid Proxy.
+* Status
+  * A field that only exists on the Grid Proxy, which corresponds to whether the node sent an uptime report within the last 40 minutes.
+* Power state
+  * A field that only exists on GraphQL; it's the self-reported power state of the node. This only goes to "down" if the node shut itself down at the request of the Farmerbot.
+
+## The sequence of events for a node managed by the Farmerbot
+
+The sequence of events for a node managed by the Farmerbot should look like this:
+
+1. Node is online. Target, state, and status are all "Up".
+2. Farmerbot sets the node's target to "Down".
+3. Node sets its state to "Down" and then shuts off.
+4. Three hours later, the status switches to "Down" because the node hasn't sent an uptime report.
+5. 
At periodic wake up time, the Farmerbot sets the node's target to "Up".
+6. Node receives the WOL packet and starts booting.
+7. After boot is complete, the node sets its state to "Up" and also submits an uptime report.
+8. updatedAt is updated with the time the uptime report was received, and the status changes to "Up".
+
+At that point the cycle is completed and will repeat.
+
+## The problematic states of a 3Node set with the Farmerbot
+
+These are problematic states:
+
+1. Target is set to "Up" but state and status are "Down" for longer than the normal boot time (node isn't responding).
+2. Target has been set to "Down" for longer than ~23.5 hours (Farmerbot isn't working properly).
+3. Target is "Down" but state and status are up (ZOS is potentially not responding to the power target correctly).
+4. State is "Up" but status is "Down" (node shut down unexpectedly).
+
+## Using the ThreeFold Node Status Bot
+
+You can use the [ThreeFold Node Status Bot](https://t.me/tfnodestatusbot) to see the nodes' status in relation to the Farmerbot.
+
+## CPU overprovisioning
+
+In the context of the ThreeFold Grid, overprovisioning a CPU means that you can allocate more than one deployment to one CPU.
+
+In relation to the Farmerbot, you can set a value between 1 and 4 that determines how much the CPU can be overprovisioned. For example, a value of 2 means that the Farmerbot can allocate up to 2 deployments to one CPU.
+
+## Seed phrase and HEX secret
+
+When setting up the Farmerbot, you will need to enter either the seed phrase or the HEX secret of your farm. For farms created in the TF Connect app, the HEX secret from the app is correct. For farms created in the TF Dashboard, you'll need the seed phrase provided when you created the account.
+
+## Farmerbot directory tree
+
+As a general template, the directory tree of the Farmerbot will look like this:
+
+```
+└── farmerbot_directory
+    ├── .env
+    └── config.yml
+```
+
+## Dedicated Nodes and the Farmerbot
+
+Dedicated nodes are managed like any other node. 
Nodes marked as dedicated can only be rented completely. Whenever a user wants to rent a dedicated node, the user sends a find_node job to the Farmerbot. The Farmerbot will find such a node, power it on if it is down, and reserve the full node (for 30 minutes). The user can then proceed with creating a rent contract for that node. The Farmerbot will get that information and keep that node powered on. It will no longer return that node as a possible node in future find_node jobs. Whenever the rent contract is canceled, the Farmerbot will notice this and shut down the node if the resource usage allows it.
+
+## Periodic wakeup
+
+The minimum period between two nodes being woken up is currently 5 minutes. This means that every 5 minutes a new node wakes up during the periodic wakeup.
+
+Once all nodes are awake, they all shut down at the same time, except the node that stays awake to wake up the others during the next periodic wakeup.
+
+## Time period between random wakeups and power target update
+
+The time period between a random wakeup and the moment the power target is set to down is between 30 minutes and one hour.
+
+Whenever a random wakeup is initiated, the Farmerbot will wait 30 minutes for the node to be up. Once the node is up, the Farmerbot will keep that node up for 30 minutes for the two following reasons:
+
+* The node can send its uptime report.
+* If the node was put online for a given user deployment, this time period gives ample time for the user to deploy their workload.
+
+This ensures an optimal user experience and reliability in 3Nodes' reports.
+
+Note that each node managed by the Farmerbot will randomly wake up on average 10 times a month.
+
+## Upgrade to the new Farmerbot
+
+If you are still running the old version of the Farmerbot (written in V), you can easily upgrade to the new Farmerbot (written in Go). You simply need to properly stop the old Farmerbot and then follow the new [Farmerbot guide](./farmerbot_quick.md). 
+
+Here are the steps to properly stop the old Farmerbot.
+
+* Go to the directory with the old Farmerbot Docker files and fully stop the old Farmerbot:
+  ```
+  docker compose rm -f -s -v
+  ```
+* You should also make sure that there are no containers left from the previous runs. First, list all containers:
+  ```
+  docker container ls --all
+  ```
+* Then delete the remaining containers:
+  ```
+  docker container rm -f -v NAME_OF_CONTAINER
+  ```
+
+Once the old Farmerbot is properly stopped and deleted, follow the new [Farmerbot guide](./farmerbot_quick.md).
+
+## Set the Farmerbot without the mnemonics of a ThreeFold Dashboard account
+
+If you've lost the mnemonics associated with an account created on the ThreeFold Dashboard, it is still possible to set up the Farmerbot with this account, but it's easier to simply create a new account and a new farm. Fortunately, the process is simple.
+
+- Create a new account on the Dashboard. This will generate a new twin.
+- Create a new farm and create new bootstrap images for your new farm.
+- Reboot your nodes with the new bootstrap images. This will automatically migrate your nodes, with their current node IDs, to the new farm.
+
+If you are using the Farmerbot, at this point you will be able to set it up with the mnemonics associated with the new farm.
+
+# Maintenance
+
+## See the power state and power target of 3Nodes
+
+### With GraphQL
+
+You can use [GraphQL](https://graphql.grid.tf/graphql) to see the power state and power target of 3Nodes. 
+
+To find all nodes within one farm, use the following query with the proper farm ID (here we set farm **1** as an example):
+
+```
+query MyQuery {
+  nodes(where: {farmID_eq: 1}) {
+    power {
+      target
+      state
+    }
+    nodeID
+  }
+}
+```
+
+To find a specific node, write the following with the proper nodeID (here we set node **655** as an example):
+
+```
+query MyQuery {
+  nodes(where: {nodeID_eq: 655}) {
+    power {
+      state
+    }
+    nodeID
+  }
+}
+```
+
+### With Grid Proxy
+
+You can also see the power state and power target of a 3Node with the Grid Proxy.
+
+Use the following URL while adjusting the proper node ID (here we set node **1** as an example):
+
+```
+https://gridproxy.grid.tf/nodes/1
+```
+
+Then, in the response, you will see the following:
+
+```
+"power": {
+  "state": "string",
+  "target": "string"
+},
+```
+
+If the state and target are not defined, the string will be empty.
+
+## Change manually the power target of a 3Node
+
+You can use the Polkadot Extrinsics page for this.
+
+* Go to the Polkadot.js.org website's endpoint based on the network of your 3Node:
+  * [Main net](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics)
+  * [Test net](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/extrinsics)
+  * [Dev net](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/extrinsics)
+  * [QA net](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.qa.grid.tf#/extrinsics)
+* Make sure that **Developer -> Extrinsics** is selected
+* Select your account
+* Select **tfgridModule**
+* Select **changepowertarget(nodeId,powerTarget)**
+* Select the node whose power target you want to change
+* Select the power target (**Up** or **Down**)
+* Click **Submit Transaction** at the bottom of the page
+
+## Properly reboot the node if power target "Down" doesn't work
+
+* Set the power target to "Down" manually
+* Reboot the node and wait for it to set its power state to "Down"
+* Once power target and state are both set to 
"Down", you can manually power off the node and reboot it + +## Add a 3Node to a running Farmerbot + +If the Farmerbot is running and you want to add a new 3Node to your farm, you can proceed as follows. + +- Boot the new 3Node + - Once the node is registered to the grid, a new node ID will be generated +- If you set the section `included_nodes` in the YAML configuration file + - Add the new node ID to the configuration file +- Restart the Farmerbot with the systemd command `restart` (in this example, the service is called `farmerbot`) + ``` + systemctl restart farmerbot + ``` + +## Update the Farmerbot with a new release + +There are only a few steps needed to update the Farmerbot to a new release. + +- Download the latest [ThreeFold tfgrid-sdk-go release](https://github.com/threefoldtech/tfgrid-sdk-go/releases) and extract the farmerbot for your specific setup (here we use `x86_64`). On the line `wget ...`, make sure to replace `` with the latest Farmerbot release. + ``` + wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download//tfgrid-sdk-go_Linux_x86_64.tar.gz + tar xf tfgrid-sdk-go_Linux_x86_64.tar.gz farmerbot + ``` +- Make a copy of the old version in case you need it in the future: + ``` + mv /usr/local/bin/farmerbot /usr/local/bin/farmerbot_archive + ``` +- Move the new Farmerbot to the local bin + ``` + mv farmerbot /usr/local/bin + ``` +- Restart the bot + ``` + systemctl restart farmerbot + ``` +- Remove the tar file + ``` + rm tfgrid-sdk-go_Linux_x86_64.tar.gz + ``` + +# Questions and Feedback + +If you have questions concerning the Farmerbot, feel free to ask for help on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Farmer chat](https://t.me/threefoldfarmers). 
\ No newline at end of file diff --git a/collections/manual/documentation/farmers/farmerbot/farmerbot_intro.md b/collections/manual/documentation/farmers/farmerbot/farmerbot_intro.md new file mode 100644 index 0000000..96d19b3 --- /dev/null +++ b/collections/manual/documentation/farmers/farmerbot/farmerbot_intro.md @@ -0,0 +1,15 @@ +

<h1>Farmerbot</h1>

+
+The Farmerbot is a service that farmers can run in order to automatically manage the nodes in their farms. The behavior of the Farmerbot is customizable through a YAML configuration file.
+
+We present here a quick guide to accompany farmers in setting up the Farmerbot. This guide contains the essential information to deploy the Farmerbot on the TFGrid. The Additional Information section contains further details on the workings of the Farmerbot.
+
+For more information on the Farmerbot, you can visit the [Farmerbot repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot) on GitHub. You can also consult the Farmerbot FAQ if needed.
+
+

<h2>Table of Contents</h2>

+ +- [Quick Guide](./farmerbot_quick.md) +- [Additional Information](./farmerbot_information.md) +- [Minting and the Farmerbot](./farmerbot_minting.md) + +> Note: The Farmerbot is an optional feature developed by ThreeFold. Please use at your own risk. While ThreeFold will do its best to fix any issues with the Farmerbot and minting, if minting is affected by the use of the Farmerbot, ThreeFold cannot be held responsible. \ No newline at end of file diff --git a/collections/manual/documentation/farmers/farmerbot/farmerbot_minting.md b/collections/manual/documentation/farmers/farmerbot/farmerbot_minting.md new file mode 100644 index 0000000..f45d13a --- /dev/null +++ b/collections/manual/documentation/farmers/farmerbot/farmerbot_minting.md @@ -0,0 +1,26 @@ +

<h1>Minting and the Farmerbot</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Minting Rules](#minting-rules)
+- [Disclaimer](#disclaimer)
+
+***
+
+## Introduction
+
+We cover essential features of ThreeFold minting in relation to the Farmerbot.
+
+## Minting Rules
+
+There are certain minting rules that are very important when it comes to farming on the ThreeFold Grid while using the Farmerbot.
+
+- The 3Node should wake up within 30 minutes of setting the power target to **Up**.
+  - If the 3Node does not respect this rule, the 3Node won't mint for the whole minting period.
+- The 3Node must wake up at least once every 24 hours.
+  - If the 3Node does not respect this rule, the 3Node won't mint for a 24-hour period.
+
+## Disclaimer
+
+Please note that the Farmerbot is an optional feature developed by ThreeFold; use it at your own risk. While ThreeFold will do its best to fix any issues with the Farmerbot and minting, if minting is affected by the use of the Farmerbot, ThreeFold cannot be held responsible.
diff --git a/collections/manual/documentation/farmers/farmerbot/farmerbot_quick.md b/collections/manual/documentation/farmers/farmerbot/farmerbot_quick.md
new file mode 100644
index 0000000..4177ef8
--- /dev/null
+++ b/collections/manual/documentation/farmers/farmerbot/farmerbot_quick.md
@@ -0,0 +1,292 @@
+

<h1>Farmerbot Quick Guide</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Farmerbot Costs on the TFGrid](#farmerbot-costs-on-the-tfgrid) +- [Enable Wake-On-Lan](#enable-wake-on-lan) +- [Deploy a Full VM](#deploy-a-full-vm) +- [Farmerbot Setup](#farmerbot-setup) + - [Download the Farmerbot Binaries](#download-the-farmerbot-binaries) + - [Create the Farmerbot Files](#create-the-farmerbot-files) + - [Run the Farmerbot](#run-the-farmerbot) + - [Set a systemd Service](#set-a-systemd-service) + - [Check the Farmerbot Logs](#check-the-farmerbot-logs) + - [Stop the Farmerbot](#stop-the-farmerbot) +- [Farmerbot Files](#farmerbot-files) + - [Configuration File Template (config.yml)](#configuration-file-template-configyml) + - [Environment Variables File Template (.env)](#environment-variables-file-template-env) +- [Running Multiple Farmerbots on the Same VM](#running-multiple-farmerbots-on-the-same-vm) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this guide, we show how to deploy the [Farmerbot](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot) on a full VM running on the TFGrid. + +This guide can be done on bare metal or on a full VM running on the TFGrid. You need at least two 3Nodes on the same farm to make use of the Farmerbot. + +This version of the Farmerbot also works with ARM64. This means that if you have a Pi 3, 4, or Zero 2 with a 64 bit OS, you can download the appropriate release archive and it will work properly. + +Read the [Additional Information](farmerbot_information.md) section for further details concerning the Farmerbot. + +## Prerequisites + +- The TFChain account associated with the farm should have at least 5 TFT (recommended is 50 TFT) + +## Farmerbot Costs on the TFGrid + +If you run the Farmerbot on a 3Node on the TFGrid, you will have to pay TFT to deploy on that 3Node. You can run a full VM at minimum specs for the Farmerbot, that is 1vcore, 15GB of SSD storage and 512MB of RAM. 
Note that you can use the Planetary Network: you do not need to deploy a 3Node with IPv4. The cost on main net for this kind of workload is around 0.175 TFT/hour (as of 11-07-23).
+
+Next to that, you will have to pay the transaction fees every time the Farmerbot has to wake up or shut down a node. This means that you need some TFT on the account tied to the twin of your farm.
+
+For the periodic wakeups, each node in the farm is shut down and powered on once a day, i.e. 30 times per month. Also, there are 10 random wakeups per month for each node. This means that each node is turned off and on 40 times per month on average. In that case, the average cost per month to power on nodes and shut them back down equals:
+
+> average transaction fees cost per month = 0.001 TFT (extrinsic fee) * amount of nodes * 40 * 2 (1 for powering down, 1 for powering up)
+
+## Enable Wake-On-Lan
+
+For a 3Node to work properly with the Farmerbot, the parameter wake-on-LAN must be enabled. Enabling wake-on-LAN on your 3Node may differ depending on your computer model. Please refer to the documentation of your computer if needed.
+
+Usually the feature will be called Wake-on-LAN and you need to set it as "enabled" in the BIOS/UEFI settings.
+
+Here are some examples to guide you:
+
+* Rack Server, Dell R720
+  * Go into `System Setup -> Device Settings -> NIC Port -> NIC Configuration`
+  * Set Wake-on-LAN to `Enable`
+* Desktop Computer, HP EliteDesk G1
+  * Go to `Power -> Hardware Power Management`
+  * Disable `S5 Maximum Power Saving`
+  * Go to `Advanced -> Power-On Options`
+  * Set `Remote Wake up Boot source` to `Remote Server`
+
+> Hint: Check the Z-OS monitor screen and make sure that all the 3Nodes are within the same LAN (e.g. all 3Node addresses are between 192.168.15.0 and 192.168.15.255).
+
+For more information on WOL, [read this section](farmerbot_information.md#how-to-prepare-your-farm-for-the-farmerbot-with-wol). 
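As a quick sanity check for the same-LAN hint above, a short script can verify that all your node addresses fall inside one subnet. This is only an illustrative sketch; the IP addresses and subnet below are placeholders you would replace with your own:

```python
import ipaddress

def all_in_same_subnet(ips, cidr):
    """Return True if every IP address belongs to the given subnet."""
    subnet = ipaddress.ip_network(cidr, strict=False)
    return all(ipaddress.ip_address(ip) in subnet for ip in ips)

# Placeholder node addresses, matching the 192.168.15.x example above.
nodes = ["192.168.15.20", "192.168.15.21", "192.168.15.40"]
print(all_in_same_subnet(nodes, "192.168.15.0/24"))                    # True
print(all_in_same_subnet(nodes + ["192.168.16.5"], "192.168.15.0/24"))  # False
```

If the check returns False for any node, WOL packets sent by the Farmerbot will not reach that node, since magic packets do not cross subnets.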
+ +## Deploy a Full VM + +For this guide, we run the Farmerbot on a Full VM running on the TFGrid. Note that while you do not need to run the Farmerbot on the TFGrid, the whole process is very simple as presented here. + +- Deploy a full VM on the TFGrid +- Update and upgrade the VM + ``` + apt update && apt upgrade + ``` +- Reboot and reconnect to the VM + ``` + reboot + ``` + +## Farmerbot Setup + +We present the different steps to run the Farmerbot using the binaries. + +> For a script that can help automate the steps in this guide, [check this forum post](https://forum.threefold.io/t/new-farmerbot-install-script/4207). + +### Download the Farmerbot Binaries + +- Download the latest [ThreeFold tfgrid-sdk-go release](https://github.com/threefoldtech/tfgrid-sdk-go/releases) and extract the farmerbot for your specific setup (here we use `x86_64`). On the line `wget ...`, make sure to replace `` with the latest Farmerbot release. + ``` + wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download//tfgrid-sdk-go_Linux_x86_64.tar.gz + tar xf tfgrid-sdk-go_Linux_x86_64.tar.gz farmerbot + ``` +- Move the Farmerbot + ``` + mv farmerbot /usr/local/bin + ``` +- Remove the tar file + ``` + rm tfgrid-sdk-go_Linux_x86_64.tar.gz + ``` + +### Create the Farmerbot Files + +- Create Farmerbot files directory + ``` + cd ~ + mkdir farmerbotfiles + ``` +- Create the Farmerbot `config.yml` file ([see template below](#configuration-file-template-configyml)) + ``` + nano ~/farmerbotfiles/config.yml + ``` +- Create the environment variables file and set the variables ([see template below](#environment-variables-file-template-env)) + ``` + nano ~/farmerbotfiles/.env + ``` + +### Run the Farmerbot + +We run the Farmerbot with the following command: + +``` +farmerbot run -e ~/farmerbotfiles/.env -c ~/farmerbotfiles/config.yml -d +``` + +For farmers with **ed25519** keys, the flag `-k` should be used. Note that by default, the Farmerbot uses the **sr25519** keys. 
+
+```
+farmerbot run -k ed25519 -e ~/farmerbotfiles/.env -c ~/farmerbotfiles/config.yml -d
+```
+
+For more information on the supported commands, see the [Additional Information section](farmerbot_information.md#supported-commands-and-flags). You can also consult the [Farmerbot repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot).
+
+Once you've verified that the Farmerbot runs properly, you can stop the Farmerbot and go to the next section to set up a Farmerbot service. This step will ensure the Farmerbot keeps running after exiting the VM.
+
+### Set a systemd Service
+
+It is highly recommended to set up an Ubuntu systemd service to keep the Farmerbot running after exiting the VM.
+
+* Create the service file
+  ```
+  nano /etc/systemd/system/farmerbot.service
+  ```
+* Set the Farmerbot systemd service
+  ```
+  [Unit]
+  Description=ThreeFold Farmerbot
+  StartLimitIntervalSec=0
+
+  [Service]
+  Restart=always
+  RestartSec=5
+  StandardOutput=append:/root/farmerbotfiles/farmerbot.log
+  StandardError=append:/root/farmerbotfiles/farmerbot.log
+  ExecStart=/usr/local/bin/farmerbot run -e /root/farmerbotfiles/.env -c /root/farmerbotfiles/config.yml -d
+
+  [Install]
+  WantedBy=multi-user.target
+  ```
+* Enable the Farmerbot service
+  ```
+  systemctl daemon-reload
+  systemctl enable farmerbot
+  systemctl start farmerbot
+  ```
+* Verify that the Farmerbot service is properly running
+  ```
+  systemctl status farmerbot
+  ```
+
+### Check the Farmerbot Logs
+
+Once you've set up a Farmerbot systemd service [as shown above](#set-a-systemd-service), the Farmerbot will start writing logs to the file `farmerbot.log` in the directory `farmerbotfiles`.
+
+Thus, you can get more details on the operation of the Farmerbot by inspecting the log file. This can also be used to see the **Farmerbot Report Table**, as this table is printed in the Farmerbot log. 
+
+* See all logs so far
+  ```
+  cat ~/farmerbotfiles/farmerbot.log
+  ```
+* See the last ten lines and new logs as they are generated
+  ```
+  tail -f ~/farmerbotfiles/farmerbot.log
+  ```
+* See all logs and new lines as they are generated
+  ```
+  tail -f -n +1 ~/farmerbotfiles/farmerbot.log
+  ```
+* See the last report table
+  ```
+  tac ~/farmerbotfiles/farmerbot.log | grep -B5000 -m1 "Nodes report" | tac
+  ```
+
+### Stop the Farmerbot
+
+You can stop the Farmerbot with the following command:
+
+```
+systemctl stop farmerbot
+```
+
+After stopping the Farmerbot, any nodes in standby mode will remain in standby. To bring them online, use this command:
+
+```
+farmerbot start all -e /root/farmerbotfiles/.env --farm 
+```
+
+## Farmerbot Files
+
+### Configuration File Template (config.yml)
+
+In this example, the farm ID is 1, the Farmerbot manages 4 nodes, node 1 never shuts down, and a periodic wakeup is set at 1:00 PM.
+
+Note that the timezone of the Farmerbot will be the same as the time zone of the machine the Farmerbot is running on. By default, a full VM on the TFGrid will be set to UTC.
+
+```
+farm_id: 1
+included_nodes:
+  - 1
+  - 2
+  - 3
+  - 4
+never_shutdown_nodes:
+  - 1
+power:
+  periodic_wake_up_start: 01:00PM
+```
+
+Note that if the user wants to include all the nodes within a farm, they can simply omit the `included_nodes` section. In this case, all nodes of the farm will be included in the Farmerbot, as shown in the example below:
+
+```
+farm_id: 1
+never_shutdown_nodes:
+  - 1
+power:
+  periodic_wake_up_start: 01:00PM
+```
+
+For more information on the configuration file, refer to the [Additional Information section](farmerbot_information.md#yaml-configuration-file-template).
+
+You can also consult the [Farmerbot repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot).
+
+### Environment Variables File Template (.env)
+
+The network can be either `main`, `test`, `dev` or `qa`.
The following example is with the main network.
+
+```
+MNEMONIC_OR_SEED="word1 word2 word3 ... word12"
+NETWORK="main"
+```
+
+## Running Multiple Farmerbots on the Same VM
+
+You can run multiple instances of the Farmerbot on the same VM.
+
+To do so, you need to create a directory for each instance of the Farmerbot. Each directory should contain the configuration and variables files as shown above. Once you've set the files, you can simply execute the Farmerbot `run` command to start each bot in each directory.
+
+It's recommended to use distinct names for the directories and the services to easily differentiate the multiple Farmerbots running on the VM.
+
+For example, the directory tree of two Farmerbots could be:
+
+```
+└── farmerbotfiles
+    ├── farmerbot1
+    │   ├── .env
+    │   └── config.yml
+    └── farmerbot2
+        ├── .env
+        └── config.yml
+```
+
+Similarly, the services of two Farmerbots could be named as follows:
+
+```
+farmerbot1.service
+farmerbot2.service
+```
+
+## Questions and Feedback
+
+This guide is meant to get you started quickly with the Farmerbot. That being said, there is a lot more that can be done with the Farmerbot.
+
+For more information on the Farmerbot, please refer to the [Additional Information section](./farmerbot_information.md). You can also consult the [official Farmerbot Go repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot).
+
+If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Farmers Chat](https://t.me/threefoldfarmers) on Telegram.
+
+> This is the new version of the Farmerbot written in Go. If you have any feedback and issues, please let us know! 
\ No newline at end of file diff --git a/collections/manual/documentation/farmers/farmerbot/img/farmerbot_4.png b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_4.png new file mode 100644 index 0000000..58d4b2c Binary files /dev/null and b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_4.png differ diff --git a/collections/manual/documentation/farmers/farmerbot/img/farmerbot_5.png b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_5.png new file mode 100644 index 0000000..439022d Binary files /dev/null and b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_5.png differ diff --git a/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_1.jpg b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_1.jpg new file mode 100644 index 0000000..7978e9c Binary files /dev/null and b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_1.jpg differ diff --git a/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_2.jpg b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_2.jpg new file mode 100644 index 0000000..207aaef Binary files /dev/null and b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_2.jpg differ diff --git a/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_3.png b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_3.png new file mode 100644 index 0000000..10ae6c9 Binary files /dev/null and b/collections/manual/documentation/farmers/farmerbot/img/farmerbot_bios_3.png differ diff --git a/collections/manual/documentation/farmers/farmers.md b/collections/manual/documentation/farmers/farmers.md new file mode 100644 index 0000000..9d5f2f8 --- /dev/null +++ b/collections/manual/documentation/farmers/farmers.md @@ -0,0 +1,36 @@ +# ThreeFold Farmers + +This section covers all practical information on how to become a cloud service provider (farmer) on the 
ThreeFold Grid. + +For complementary information on ThreeFold farming, refer to the [Farming](../../knowledge_base/farming/farming_toc.md) section. + +To buy a certified node from an official ThreeFold vendor, check the [ThreeFold Marketplace](https://marketplace.3node.global/). + +

<h2>Table of Contents</h2>

+ +- [Build a 3Node](./3node_building/3node_building.md) + - [1. Create a Farm](./3node_building/1_create_farm.md) + - [2. Create a Zero-OS Bootstrap Image](./3node_building/2_bootstrap_image.md) + - [3. Set the Hardware](./3node_building/3_set_hardware.md) + - [4. Wipe All the Disks](./3node_building/4_wipe_all_disks.md) + - [5. Set the BIOS/UEFI](./3node_building/5_set_bios_uefi.md) + - [6. Boot the 3Node](./3node_building/6_boot_3node.md) +- [Farming Optimization](./farming_optimization/farming_optimization.md) + - [GPU Farming](./3node_building/gpu_farming.md) + - [Set Additional Fees](./farming_optimization/set_additional_fees.md) + - [Minting Receipts](./3node_building/minting_receipts.md) + - [Minting Periods](./farming_optimization/minting_periods.md) + - [Room Parameters](./farming_optimization/farm_room_parameters.md) + - [Farming Costs](./farming_optimization/farming_costs.md) + - [Calculate Your ROI](./farming_optimization/calculate_roi.md) +- [Advanced Networking](./advanced_networking/advanced_networking_toc.md) + - [Networking Overview](./advanced_networking/networking_overview.md) + - [Network Considerations](./advanced_networking/network_considerations.md) + - [Network Setup](./advanced_networking/network_setup.md) +- [Farmerbot](./farmerbot/farmerbot_intro.md) + - [Quick Guide](./farmerbot/farmerbot_quick.md) + - [Additional Information](./farmerbot/farmerbot_information.md) + - [Minting and the Farmerbot](./farmerbot/farmerbot_minting.md) +- [Farmers FAQ](../faq/faq.md#farmers-faq) + +> Note: Bugs in the code (e.g. ZOS or other components) can happen. If this is the case, there might be a loss of tokens during minting which won't be refunded by ThreeFold. If there are minting code errors, ThreeFold will try its best to fix the minting code and remint nodes that were affected by such errors. 
diff --git a/collections/manual/documentation/farmers/farming_optimization/calculate_roi.md b/collections/manual/documentation/farmers/farming_optimization/calculate_roi.md new file mode 100644 index 0000000..df13bd4 --- /dev/null +++ b/collections/manual/documentation/farmers/farming_optimization/calculate_roi.md @@ -0,0 +1,17 @@ +

<h1>Calculate the ROI of a DIY 3Node</h1>

+
+To calculate the ROI of a DIY 3Node, we first calculate the Revenue per Month:
+
+> Revenue per month = TFT price when sold * TFT farmed per month
+
+The ROI of a DIY 3Node is:
+
+> Cost of 3Node / Revenue per month = ROI in months
+
+For example, a Rack Server farming 3000 TFT per month with an initial cost of 1,500 USD has the following ROI:
+
+> 1500 / (3000 * 0.08) = 6.25-month ROI
+
+This calculation is based on a TFT value of 8 cents. You should adjust this according to the current market price.
+
+Note that this ROI equation is used to compare efficiency between different DIY 3Nodes. It does not constitute real final gains, as additional costs must be taken into consideration, such as electricity for the 3Nodes and the AC system, as well as Internet bandwidth. All those notions are covered in this part of the book.
\ No newline at end of file
diff --git a/collections/manual/documentation/farmers/farming_optimization/farm_room_parameters.md b/collections/manual/documentation/farmers/farming_optimization/farm_room_parameters.md
new file mode 100644
index 0000000..823d84e
--- /dev/null
+++ b/collections/manual/documentation/farmers/farming_optimization/farm_room_parameters.md
@@ -0,0 +1,139 @@
+

<h1>Air Conditioner, Relative Humidity and Air Changes per Hour</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Calculate the Minimum BTU/h Needed for the AC](#calculate-the-minimum-btuh-needed-for-the-ac)
+  - [How Much BTU/h is Needed?](#how-much-btuh-is-needed)
+  - [Taking Utilization Into Account](#taking-utilization-into-account)
+  - [The General BTU/h Equation](#the-general-btuh-equation)
+- [Ensure Proper Relative Humidity](#ensure-proper-relative-humidity)
+- [Ensure Proper Air Changes per Hour](#ensure-proper-air-changes-per-hour)
+
+***
+
+## Introduction
+
+In this section of the ThreeFold Farmers book, we cover some important notions concerning the room parameters where your 3Nodes are working. We discuss topics such as air conditioning, relative humidity and air changes per hour.
+
+Planning the building of your ThreeFold farm ahead with these notions in mind will ensure a smooth farming experience.
+
+## Calculate the Minimum BTU/h Needed for the AC
+
+Let's see how to calculate how powerful your AC unit needs to be when it comes to cooling down your server room.
+
+As we know, servers generate heat when they are working. While a desktop 3Node will generate under 20W at idle and a server 3Node might use 100W at **idle**, when you pile up some 3Node desktops/servers in the same location, things can get pretty warm when cultivation on the Grid is happening. Indeed, when your servers are using a lot of power, especially in the summer time, you might need some additional cooling.
+
+A good thing about servers generating heat is that this can be used as a **heat source in the winter**. Other more advanced techniques can be used to maximize the heat production. But that's for another day!
+
+Note that for small farms, your current heating and cooling system may suffice.
+
+So let's do the calculation:
+
+### How Much BTU/h is Needed?
+
+How much BTU/h does your ThreeFold Farm need to cool your servers?
+
+Calculating this is pretty simple actually. 
You need to keep in mind that **1 kW (1000 W) of power is equivalent to 3413 BTU/h** (British Thermal Units). + +> 1000 W = 1 kW = 3413 BTU/h +> +> 1000 Wh = 1 kWh = 3413 BTU + +So with our idle server example running at 100W, we have 0.1 kW. + +> 100 W = 0.1 kW + +We then multiply our kW by the BTU/h factor **3413** to obtain the result in BTU/h. Here we have 341.3 BTU/h: + +> 0.1 kW * 3413 = 341.3 BTU/h + +Say you have 5 servers with this same configuration. It means you have + +> (# of servers) * (BTU/h per server) = Total BTU/h + +> 5 * 341.3 = 1706.5 BTU/h + +Thus, a 2000 BTU/h air conditioner would be able to compensate for the heat when your servers are at idle. + +> Note that in general for air conditioners, it will often be written BTU instead of BTU/h as a shorthand. + + +Please take note that this does not take into account the energy needed to cool down your environment. You'd need to take into consideration **the heat of the servers and the general heat of your environment** to figure out how much BTU your AC needs on the hottest days of summer. + +### Taking Utilization Into Account + +But then, what happens at cultivation? Well, say your server needs 400W of power when it's being fully cultivated by some lively ThreeFold Users of the New Internet. In this case, we would say that 400 W is the power consumption at **full load**. + +As we started with 100 W, and we now have 400 W, it means that you'd need four times the amount of BTU/h. + +Here we show how to calculate this with any other configuration of full load/idle. + +> Full-Load / Idle Ratio = Full Load W / Idle W + +> 4 = 400 W / 100 W + +The BTU/h needed in cultivation would be + +> (Full-Load / Idle Ratio) * Idle BTU/h needed = Full Load BTU/h + +> 4 * (1706.5 BTU/h at Idle) = 6826 BTU/h at Full Load + +Thus, you would need 6826 BTU/h from the AC unit for 5 servers each running at 400W. In that case, an 8000 BTU/h AC unit would be sufficient. 
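The idle and full-load figures above can be reproduced with a short sketch (the wattages and server count are simply this section's example values):

```python
# BTU/h needed to offset server heat, using this section's example values.
BTU_PER_KW = 3413  # 1 kW of electrical power ~ 3413 BTU/h of heat

def btu_per_hour(watts_per_server, num_servers):
    """Total BTU/h generated by num_servers each drawing watts_per_server."""
    return watts_per_server * BTU_PER_KW * num_servers / 1000

print(btu_per_hour(100, 5))  # 1706.5 -> 5 servers at 100 W idle
print(btu_per_hour(400, 5))  # 6826.0 -> the same servers at 400 W full load
```

An AC unit rated above the full-load figure (e.g. 8000 BTU/h here) covers the servers' heat, before accounting for the room's own cooling needs.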
Let's say your environment would typically need 4000 BTU/h to cool the room; in that case, you'd need about a 12000 BTU/h AC unit for the whole setup. + +> If: BTU/h needed < BTU/h AC Unit, Then: AC Unit is OK for TF farming at full load. + + + +Now you can have a better idea of how much BTU/h is necessary for your AC unit. Of course, this can be a useful piece of data to incorporate in your simulation of Revenue/Cost farming. + +### The General BTU/h Equation + +The **general equation** would then be: + +> Server Power in kW at Full Load * 3413 * Number of Servers = Total Maximum BTU/h needed per ThreeFold Farm + + + +As another example, 7 servers using 120 W of power at idle would need: + +> 0.12 * 3413 * 7 = 2866.92 BTU/h + +During cultivation, these 7 servers might use 480 W. This would be: + +> 0.48 * 3413 * 7 = 11467.68 BTU/h + +To be sure everything's OK, this setup would need a 12 000 BTU/h AC unit to compensate for the heat generated by the ThreeFold Farm during full cultivation. This example considers the environment heat to be negligible. + +> 11467.68 < 12000 --> 12K BTU/h AC Unit is OK for farm + + +That's it! It ain't any more complicated. Straight up mathematics and some judgment. + +Now, let's compute the costs of running all this! + + + +## Ensure Proper Relative Humidity + +To ensure that the relative humidity in your server room stays within a proper range, look in your server's user manual to know the proper range of relative humidity your server can handle. If necessary, use a hygrometer to measure relative humidity and make sure it stays within an acceptable range for your 3Nodes. + +Depending on your geographical location and your current situation, it could be interesting to consider having an AC unit equipped with a dehumidifier. Read your servers' manual to check the proper relative humidity range and set the unit accordingly. 
The maximum/minimum temperature and relative humidity a 3Node server can handle will depend on the specific server/computer you are using. You should check the server's technical guide/manual to get the proper information. The following is an example. + +We will use here the Dell R720 as an example since it is a popular 3Node choice. In this case, we use the R720's [Technical Guide](https://downloads.dell.com/manuals/all-products/esuprt_ser_stor_net/esuprt_poweredge/poweredge-r720_reference-guide_en-us.pdf) as reference. + +For the R720, between 35˚C and 40˚C (or 95˚F and 104˚F), with 5% to 85% relative humidity, operation is allowed for <10% of annual operating hours (around 36 days per year), and between 40˚C and 45˚C (or 104˚F and 113˚F), with 5% to 90% relative humidity, it's <1% of annual operating hours (around 3.6 days per year). All this considers that there is no direct sunlight. + +From 10˚C to 35˚C (thus from 50˚F to 95˚F), it's considered standard operating temperature, with relative humidity from 10% to 80%. + +This can give you a good idea of the conditions a 3Node can handle, but make sure you verify with your specific server's manual. + +## Ensure Proper Air Changes per Hour + +To ensure that the air changes per hour are optimal in your 3Node servers' room, and depending on your current situation, it can be recommended to ventilate the server room in order to disperse or evacuate excess heat and humidity. In those cases, ventilation flow will be set depending on the air changes per hour (ACPH) needed. Note that the [ASHRAE](https://www.ashrae.org/File%20Library/Technical%20Resources/Standards%20and%20Guidelines/Standards%20Addenda/62-2001/62-2001_Addendum-n.pdf) recommends from 10 to 15 ACPH for a computer room. + +> Note: A good AC unit will be able to regulate the heat and the relative humidity as well as ensure proper air changes per hour. 
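As a rough illustration of the ACPH guideline, the required ventilation flow (in cubic feet per minute) follows from the room volume and the ACPH target. The room dimensions below are hypothetical example values, not a recommendation:

```python
# Ventilation flow needed to reach a target of air changes per hour (ACPH).

def required_cfm(room_volume_ft3, acph):
    """Airflow in cubic feet per minute (CFM) for a given ACPH target."""
    return room_volume_ft3 * acph / 60  # 60 minutes per hour

volume = 10 * 12 * 8  # hypothetical 10 ft x 12 ft room with 8 ft ceiling = 960 ft^3
print(required_cfm(volume, 10))  # 160.0 CFM at the low end of the ASHRAE range
print(required_cfm(volume, 15))  # 240.0 CFM at the high end
```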
\ No newline at end of file diff --git a/collections/manual/documentation/farmers/farming_optimization/farming_costs.md b/collections/manual/documentation/farmers/farming_optimization/farming_costs.md new file mode 100644 index 0000000..f10e6a8 --- /dev/null +++ b/collections/manual/documentation/farmers/farming_optimization/farming_costs.md @@ -0,0 +1,206 @@ +

Calculate the Farming Costs: Power, Internet and Total Costs

+ +

Table of Contents

+ +- [Calculate the Total Electricity Cost of Your Farm](#calculate-the-total-electricity-cost-of-your-farm) +- [Calculate the Proper Bandwidth Needed for Your Farm](#calculate-the-proper-bandwidth-needed-for-your-farm) + - [The Minimum Bandwidth per 3Node Equation](#the-minimum-bandwidth-per-3node-equation) + - [Cost per Month for a Given Bandwidth](#cost-per-month-for-a-given-bandwidth) +- [Calculate Total Cost and Revenue](#calculate-total-cost-and-revenue) + - [Check Revenue with the ThreeFold Simulator](#check-revenue-with-the-threefold-simulator) + - [Economics of Farming](#economics-of-farming) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Calculate the Total Electricity Cost of Your Farm + +The total electricity cost of your farm is the sum of all power used by your system times the price you pay for each kWh of power. + +> Total electricity cost = Total Electricity in kWh * Cost per kWh + +> Total Electricity in kWh = 3Nodes' electricity consumption * Number of 3Nodes + Cooling system electricity consumption + +With our example, we have 5 servers running at 400 W at Full Load and we have a 12K BTU unit that is consuming on average 1000 W. + +We would then have: + +> 5 * 400 W + 1000 W = 3000 W = 3 kW + +To get the kWh per day we simply multiply by 24. + +> kW * (# of hours per day) = daily kWh consumption + +> 3 kW * 24 = 72 kWh / day + +We thus have 72 kWh per day. For 30 days, this would be + +> kWh / day * (# of days in a month) = kWh per month + +> 72 * 30 = 2160 kWh / month. + +At a kWh price of 0.10$ USD, we have a cost of 216 $USD per month for the electricity bill of our ThreeFold farm. 
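The monthly electricity bill above can be sketched in a few lines of code (all values are this section's running example: five 400 W servers, a ~1000 W AC unit and a 0.10 $USD/kWh rate):

```python
# Monthly electricity cost of the example ThreeFold farm.
servers_w = 5 * 400                        # five 3Nodes at 400 W full load
cooling_w = 1000                           # 12K BTU AC unit drawing ~1000 W
total_kw = (servers_w + cooling_w) / 1000  # 3.0 kW

kwh_per_month = total_kw * 24 * 30         # 2160.0 kWh over a 30-day month
cost_usd = kwh_per_month * 0.10            # at 0.10 $USD per kWh
print(cost_usd)  # 216.0 $USD per month
```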
+ +> kWh / month of the farm * kWh Cost = Electricity Bill per month for the farm + +> 2160 * 0.1 = 216 $USD / month for electricity bills + + +## Calculate the Proper Bandwidth Needed for Your Farm + +The bandwidth needed for a given 3Node is not yet set in stone and you are welcome to participate in the ongoing [discussion on this subject](https://forum.threefold.io/t/storage-bandwidth-ratio/1389) on the ThreeFold Forum. + +In this section, we will give general guidelines. The goal is to have a good idea of what constitutes a proper bandwidth available for a given amount of resources utilized on the ThreeFold Grid. + +Starting with a minimum of 1 mbps per Titan, which is 1 TB SSD and 32 GB RAM, we note that this is the lowest limit that gives the opportunity for the most people possible to join the ThreeFold Grid. That being said, we could set that 10 mbps is an acceptable upper limit for 1 TB SSD and 64 GB of RAM. + +Those numbers are empirical and more information will be shared in the future. The ratio 1TB SSD/64GB RAM is in tune with the optimal TFT rewards ratio. It is thus logical to think that farmers will build 3Nodes based on this ratio. Giving general bandwidth guidelines based on this ratio unit could thus be efficient for the current try-and-learn situation. + +### The Minimum Bandwidth per 3Node Equation + + +Here we explore some equations that can give farmers a general idea of the bandwidth needed for their farms. As stated, this is not yet set in stone and the TFDAO will need to discuss and clarify those notions. 
+ +Here is a general equation that gives you a good idea of a correct bandwidth for a 3Node: + +> min Bandwidth per 3Node (mbps) = k * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + k * (Total HDD TB / 2) + +Setting k = 10 mbps, we have: + +> min Bandwidth per 3Node (mbps) = 10 * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + 10 * (Total HDD TB / 2) + +As an example, a Titan, with 1TB SSD, 8 Threads and 64 GB of RAM, would need 10 mbps: + +> 10 * max(1, 1, 1) = 10 * 1 = 10 + +With the last portion of the equation, we can see that for each additional 1TB of HDD storage, you would need to add 5 mbps of bandwidth. + + +Let's take a big server as another example. Say we have a server with 5TB SSD, 48 threads and 384 GB of RAM. We would then need 60 mbps of bandwidth for each of these 3Nodes: + +> 10 * max((5/1), (48/8), (384/64)) = 10 * max(5,6,6) = 10 * 6 = 60 + +This server would need 60 mbps minimum to account for a full TF Grid utilization. + +You can easily scale this equation if you have many 3Nodes. + + + +Let's say you have a 1 gbps bandwidth from your Internet Service Provider (ISP). How many of those 3Nodes could your farm have? + +> Floor(Total available bandwidth / Bandwidth needed per 3Node) = Max servers possible + +With our example we have: + +> 1000 / 60 = 16.66... = 16 + +We note that the function Floor takes the integer without the decimals. + +Thus, a 1 gbps bandwidth farm could have 16 3Nodes, each with 5TB SSD, 48 threads and 384 GB of RAM. + + + +In this section, we used **k = 10 mbps**. If you follow those guidelines, you will most probably have a decent bandwidth for your ThreeFold farm. For the time being, the goal is to have farmers building ThreeFold farms and scaling them reasonably with their available bandwidth. + +Stay tuned for official bandwidth parameters in the future. 
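The guideline above can be written as a small function (k = 10 mbps as in this section; remember these are empirical numbers, not official parameters):

```python
import math

K = 10  # mbps, the empirical factor used in this section

def min_bandwidth_mbps(ssd_tb, threads, ram_gb, hdd_tb=0):
    """Minimum bandwidth guideline for one 3Node, in mbps."""
    return K * max(ssd_tb / 1, threads / 8, ram_gb / 64) + K * hdd_tb / 2

print(min_bandwidth_mbps(1, 8, 64))    # 10.0 -> a Titan
print(min_bandwidth_mbps(5, 48, 384))  # 60.0 -> the big server example

# Number of such big servers a 1 gbps uplink can support:
print(math.floor(1000 / min_bandwidth_mbps(5, 48, 384)))  # 16
```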
+ + + +### Cost per Month for a Given Bandwidth + +Once you know the general bandwidth needed for your farm, you can check with your ISP the price per month and take this into account when calculating your monthly costs. + +Let's take the example we used with 5 servers at 400 W at Full Load. Let's say these 5 servers have the same parameters we used above. We then need 60 mbps per 3Node. This means we need 300 mbps. For the sake of our example, let's say this is around 100$ USD per month. + + +## Calculate Total Cost and Revenue + + +As the TFT price is fixed for 60 months when you connect your 3Node for the first time on the TF Grid, we will use the period of 60 months, or 5 years, to calculate the total cost and revenue. + +The total cost is equal to: + +> Total Cost = Initial investment + 60 * (electricity + Internet costs per month) + +In our example, we can state that we paid 1500$ USD for each server and that they each generate 3000 TFT per month, with an entry price of 0.08$ USD per TFT. + +The cost per month is + +> 216$ for the electricity bill, as calculated above (including the AC unit) +> +> 100$ for the Internet bill +> +> Total : 316 $ monthly cost for electricity and Internet + +The revenues are + +> Revenues per month = Number of 3Nodes * TFT farmed per 3Node * Price TFT Sold + +In this example, we have 5 servers generating 3000 TFT per month at 0.08$ USD per TFT: + +> 5 * 3000 * 0.08$ = 1200$ + +The net revenue per month is thus equal to + +> Net Revenue = Gross revenue - Monthly cost. + +We thus have + +> 1200$ - 316$ = 884$ + +This means that we generate a net profit of 884$ per month, without considering the initial investment of building the 3Nodes for the farm. + +In the previous AC example, we calculated that a minimum of 12K BTU was needed for the AC system. Let's say that this would mean buying a 350$ USD 12K BTU AC unit. + +The initial cost is the cost of all the 3Nodes plus the AC system. 
+ +> Number of 3Nodes * Cost per 3Node + Cost of AC system = Total Cost + +In this case, it would be: + +> Total initial investment = Number of 3Nodes * Cost of 3Node + Cost of AC system + +Then we'd have: + +> 5 * 1500 + 350 = 7850 $ + +Thus, a more realistic ROI would be: + +> Total initial investment / Net Revenue per Month = ROI in months + +In our case, we would have: + +> 7850$ / 884$ = Ceiling(8.88...) = 9 + +With the function Ceiling taking the upper integer, without any decimals. + +Then within 9 months, this farm would have paid for itself and from then on, it would generate a positive net revenue of 884$ per month. + +We note that this takes into consideration that we are using the AC system 24/7. This would surely not be the case in real life. This means that the real ROI would be even better. It is a common practice to do estimates with stricter parameters. If you predict being profitable with strict parameters, you will surely be profitable in real life, even when "things" happen and not everything goes as planned. As always, this is not financial advice. + +We recall that in the section [Calculate the ROI of a DIY 3Node](./calculate_roi.md), we found a simpler ROI of 6.25 months, say 7 months, that wasn't taking into consideration the additional costs of Internet and electricity. We now have a more realistic ROI of 9 months based on a fixed TFT price of 0.08$ USD. You will need to use these equations and check with your current TF farm and 3Nodes, as well as the current TFT market price. + + +### Check Revenue with the ThreeFold Simulator + +To know how much TFT you will farm per month for a given 3Node, the easiest route is to use the [ThreeFold Simulator](https://simulator.grid.tf/). You can do predictions over 60 months since the TFT price is locked for 60 months when you first connect your 3Node. 
+ +To know the details of the calculations behind this simulator, you can read [this documentation](https://library.threefold.me/info/threefold#/tfgrid/farming/threefold__farming_reward). + + +### Economics of Farming + +As a brief synthesis, the following equations are used to calculate the total revenues and costs of your farm. + +``` +- Total Monthly Cost = Electricity cost + Internet Cost +- Total Electricity Used = Electricity per 3Node * Number of 3Nodes + Electricity for Cooling +- Total Monthly Revenue = TFT farmed per 3Node * Number of 3Nodes * TFT price when sold +- Initial Investment = Price of farm (3Nodes) + Price of AC system +- Total Return on Investment = (60 * Monthly Revenue) - (60 * Monthly Cost) - Initial Investment +``` + + +## Questions and Feedback + +This section constitutes a quick synthesis of the costs and revenues when running a ThreeFold Farm. As always, do your own research and don't hesitate to visit the [ThreeFold Forum](https://forum.threefold.io/) or the [ThreeFold Telegram Farmer Group](https://t.me/threefoldfarmers) if you have any questions. diff --git a/collections/manual/documentation/farmers/farming_optimization/farming_optimization.md b/collections/manual/documentation/farmers/farming_optimization/farming_optimization.md new file mode 100644 index 0000000..10f14b0 --- /dev/null +++ b/collections/manual/documentation/farmers/farming_optimization/farming_optimization.md @@ -0,0 +1,13 @@ +

Farming Optimization

+ +The section [Build a 3Node](../3node_building/3node_building.md) covered the notions necessary to build a DIY 3Node server. The following section will give you additional information with the goal of optimizing your farm while also being able to plan ahead for the costs in terms of energy and capital. We also cover how to set up a GPU node and more. + +

Table of Contents

+ +- [GPU Farming](../3node_building/gpu_farming.md) +- [Set Additional Fees](./set_additional_fees.md) +- [Minting Receipts](../3node_building/minting_receipts.md) +- [Minting Periods](./minting_periods.md) +- [Room Parameters](./farm_room_parameters.md) +- [Farming Costs](./farming_costs.md) +- [Calculate Your ROI](./calculate_roi.md) \ No newline at end of file diff --git a/collections/manual/documentation/farmers/farming_optimization/minting_periods.md b/collections/manual/documentation/farmers/farming_optimization/minting_periods.md new file mode 100644 index 0000000..5979f84 --- /dev/null +++ b/collections/manual/documentation/farmers/farming_optimization/minting_periods.md @@ -0,0 +1,56 @@ +

Minting Periods

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Minting Period Length](#minting-period-length) +- [2023 Minting Periods](#2023-minting-periods) +- [2024 Minting Periods](#2024-minting-periods) + +*** + +## Introduction + +We discuss the length and the frequencies of the ThreeFold farming minting periods. + +## Minting Period Length + +Each minting period has: 2630880 seconds = 43848 minutes = 730.8 hours. + +## 2023 Minting Periods + +The minting periods for the 12 months of 2023 are the following: + +| Month | Start of the Minting Period | End of the Minting Period | +|----------|---------------------------------|---------------------------------| +| Jan 2023 | December 31, 2022 at 4\:32\:40 am | January 30, 2023 at 3\:20\:40 pm | +| Feb 2023 | January 30, 2023 at 3\:20\:40 pm | March 2, 2023 at 2\:08\:40 am | +| Mar 2023 | March 2, 2023 at 2\:08\:40 am | April 1, 2023 at 12\:56\:40 pm | +| Apr 2023 | April 1, 2023 at 12\:56\:40 pm | May 1, 2023 at 11\:44\:40 pm | +| May 2023 | May 1, 2023 at 11\:44\:40 pm | June 1, 2023 at 10\:32\:40 am | +| Jun 2023 | June 1, 2023 at 10\:32\:40 am | July 1, 2023 at 9\:20\:40 pm | +| Jul 2023 | July 1, 2023 at 9\:20\:40 pm | August 1, 2023 at 8\:08\:40 am | +| Aug 2023 | August 1, 2023 at 8\:08\:40 am | August 31, 2023 at 6\:56\:40 pm | +| Sep 2023 | August 31, 2023 at 6\:56\:40 pm | October 1, 2023 at 5\:44\:40 am | +| Oct 2023 | October 1, 2023 at 5\:44\:40 am | October 31, 2023 at 4\:32\:40 pm | +| Nov 2023 | October 31, 2023 at 4\:32\:40 pm | December 1, 2023 at 3\:20\:40 am | +| Dec 2023 | December 1, 2023 at 3\:20\:40 am | December 31, 2023 at 2\:08\:40 pm | + +## 2024 Minting Periods + +The minting periods for the 12 months of 2024 are the following: + +| Month | Start of the Minting Period | End of the Minting Period | +|----------|---------------------------------|---------------------------------| +| Jan 2024 | December 31, 2023 at 14\:08\:40 | January 31, 2024 at 00\:56\:40 | +| Feb 2024 | January 31, 2024 at 00\:56\:40 | March 
1, 2024 at 11\:44\:40 | +| Mar 2024 | March 1, 2024 at 11\:44\:40 | March 31, 2024 at 22\:32\:40 | +| Apr 2024 | March 31, 2024 at 22\:32\:40 | May 1, 2024 at 09\:20\:40 | +| May 2024 | May 1, 2024 at 09\:20\:40 | May 31, 2024 at 20\:08\:40 | +| Jun 2024 | May 31, 2024 at 20\:08\:40 | July 1, 2024 at 06\:56\:40 | +| Jul 2024 | July 1, 2024 at 06\:56\:40 | July 31, 2024 at 17\:44\:40 | +| Aug 2024 | July 31, 2024 at 17\:44\:40 | August 31, 2024 at 04\:32\:40 | +| Sep 2024 | August 31, 2024 at 04\:32\:40 | September 30, 2024 at 15\:20\:40 | +| Oct 2024 | September 30, 2024 at 15\:20\:40 | October 31, 2024 at 02\:08\:40 | +| Nov 2024 | October 31, 2024 at 02\:08\:40 | November 30, 2024 at 12\:56\:40 | +| Dec 2024 | November 30, 2024 at 12\:56\:40 | December 30, 2024 at 23\:44\:40 | diff --git a/collections/manual/documentation/farmers/farming_optimization/set_additional_fees.md b/collections/manual/documentation/farmers/farming_optimization/set_additional_fees.md new file mode 100644 index 0000000..e1b5437 --- /dev/null +++ b/collections/manual/documentation/farmers/farming_optimization/set_additional_fees.md @@ -0,0 +1,31 @@ +

Set Additional Fees

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Steps](#steps) +- [TFT Payments](#tft-payments) +- [Dedicated Nodes Notice](#dedicated-nodes-notice) + +*** + +## Introduction + +Farmers can set additional fees for their 3Nodes on the [TF Dashboard](https://dashboard.grid.tf/). By doing so, users will then be able to [reserve the 3Node and use it as a dedicated node](../../dashboard/deploy/dedicated_machines.md). +This can be useful for farmers who provide additional value with their 3Nodes, e.g. a GPU card and/or high-quality hardware. + +## Steps + +Here are the steps to [set additional fees](../../dashboard/farms/your_farms.md#extra-fees) for a 3Node. + +* On the Dashboard, go to **Farms** -> **Your Farms** +* Under the section **Your Nodes**, locate the 3Node and click **Set Additional Fees** under **Actions** +* Set a monthly fee (in USD) and click **Set** + +## TFT Payments + +When a user reserves your 3Node, you will receive TFT payments once every 24 hours. These TFT payments will be sent to the TFChain account of your farm's twin. + +## Dedicated Nodes Notice + +Note that while any 3Node that has no workload can be reserved by a TF user as a dedicated node, when a farmer sets additional fees on a 3Node, this 3Node automatically becomes a dedicated node. For a user to run workloads on this 3Node, the 3Node must then be reserved, i.e. rented as a dedicated node. 
\ No newline at end of file diff --git a/collections/manual/documentation/farmers/img/.done b/collections/manual/documentation/farmers/img/.done new file mode 100644 index 0000000..d672ef9 --- /dev/null +++ b/collections/manual/documentation/farmers/img/.done @@ -0,0 +1 @@ +farming_30.png diff --git a/collections/manual/documentation/farmers/img/farming_30.png b/collections/manual/documentation/farmers/img/farming_30.png new file mode 100644 index 0000000..d810d13 Binary files /dev/null and b/collections/manual/documentation/farmers/img/farming_30.png differ diff --git a/collections/manual/documentation/system_administrators/advanced/advanced.md b/collections/manual/documentation/system_administrators/advanced/advanced.md new file mode 100644 index 0000000..92a95c3 --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/advanced.md @@ -0,0 +1,14 @@ +

TFGrid Advanced

+ +In this section, we delve into sophisticated topics and powerful functionalities that empower you to harness the full potential of TFGrid 3.0. Whether you're an experienced user seeking to deepen your understanding or a trailblazer venturing into uncharted territories, this manual is your gateway to mastering advanced concepts on the ThreeFold Grid. + +

Table of Contents

+ +- [Token Transfer Keygenerator](./token_transfer_keygenerator.md) +- [Cancel Contracts](./cancel_contracts.md) +- [Contract Bills Reports](./contract_bill_report.md) +- [Listing Free Public IPs](./list_public_ips.md) +- [Redis](./grid3_redis.md) +- [IPFS](./ipfs/ipfs_toc.md) + - [IPFS on a Full VM](./ipfs/ipfs_fullvm.md) + - [IPFS on a Micro VM](./ipfs/ipfs_microvm.md) diff --git a/collections/manual/documentation/system_administrators/advanced/cancel_contracts.md b/collections/manual/documentation/system_administrators/advanced/cancel_contracts.md new file mode 100644 index 0000000..7b466a0 --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/cancel_contracts.md @@ -0,0 +1,48 @@ +

Cancel Contracts

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Using the Dashboard](#using-the-dashboard) +- [Using GraphQL and Polkadot UI](#using-graphql-and-polkadot-ui) +- [Using grid3\_client\_ts](#using-grid3_client_ts) + +*** + +## Introduction + +We present different methods to delete contracts on the TFGrid. + +## Using the Dashboard + +To cancel contracts with the Dashboard, consult the [Contracts List](../../dashboard/deploy/your_contracts.md) documentation. + +## Using GraphQL and Polkadot UI + +From the GraphQL service, execute the following query. + +``` +query MyQuery { + + nodeContracts(where: {twinId_eq: TWIN_ID, state_eq: Created}) { + contractId + } +} + +``` + +Replace `TWIN_ID` with your twin ID. The information should be available on the [Dashboard](../../dashboard/dashboard.md). + +Then from the [Polkadot UI](https://polkadot.js.org/apps/), add the TFChain endpoint to development. + +![](img/polka_web_add_development_url.png) + +Go to `Extrinsics`, choose the `smartContract` module and the `cancelContract` extrinsic, and use the IDs from GraphQL to execute the cancellation. + +![](img/polka_web_cancel_contracts.jpg) + +## Using grid3_client_ts + +In order to use the `grid3_client_ts` module, it is essential to first clone our official mono-repo containing the module and then navigate to it. If you are looking for a quick and efficient way to cancel contracts, we offer a code-based solution that can be found [here](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/scripts/delete_all_contracts.ts). + +To make the most of `grid_client`, we highly recommend following our [Grid-Client guide](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/README.md) for a comprehensive overview of the many advanced capabilities offered by this powerful tool. With features like contract creation, modification, and retrieval, `grid_client` provides an intuitive and easy-to-use solution for managing your contracts effectively. 
diff --git a/collections/manual/documentation/system_administrators/advanced/contract_bill_report.md b/collections/manual/documentation/system_administrators/advanced/contract_bill_report.md new file mode 100644 index 0000000..1269df2 --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/contract_bill_report.md @@ -0,0 +1,63 @@ +

Contract Bills Reports

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Contract Billing Report (GraphQL)](#contract-billing-report-graphql) +- [Consumption](#consumption) + +*** + +## Introduction + +Now you can check the billing rate of your contracts directly from the `Contracts` tab in the Dashboard. + +> It takes an hour for the contract to display the billing rate (until it reaches the first billing cycle). + +The `Billing Rate` is displayed in `TFT/Hour`. + +![image](img/billing_rate.png) + +## Contract Billing Report (GraphQL) + +- You need to find the contract ID +- Ask GraphQL for the consumption + +> Example query for all contracts + +```graphql +query MyQuery { + contractBillReports { + contractId + amountBilled + discountReceived + } +} +``` + +And for a specific contract + +```graphql +query MyQuery { + contractBillReports(where: { contractId_eq: 10 }) { + amountBilled + discountReceived + contractId + } +} +``` + +## Consumption + +```graphql +query MyQuery { + consumptions(where: { contractId_eq: 10 }) { + contractId + cru + sru + mru + hru + nru + } +} +``` diff --git a/collections/manual/documentation/system_administrators/advanced/grid3_redis.md b/collections/manual/documentation/system_administrators/advanced/grid3_redis.md new file mode 100644 index 0000000..0d3377e --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/grid3_redis.md @@ -0,0 +1,46 @@ +

Redis

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Install Redis](#install-redis) + - [Linux](#linux) + - [MacOS](#macos) +- [Run Redis](#run-redis) + +*** + +## Introduction + +Redis is an open-source, in-memory data structure store that is widely used as a caching layer, message broker, and database. It is known for its speed, versatility, and support for a wide range of data structures. Redis is designed to deliver high-performance data access by storing data in memory, which allows for fast read and write operations. It supports various data types, including strings, lists, sets, hashes, and more, and provides a rich set of commands for manipulating and querying the data. + +Redis is widely used in various use cases, including caching, session management, real-time analytics, leaderboards, task queues, and more. Its simplicity, speed, and flexibility make it a popular choice for developers who need a fast and reliable data store for their applications. In ThreeFold's ecosystem context, Redis can be used as a backend mechanism to communicate with the nodes on the ThreeFold Grid using the Reliable Message Bus. + + + +## Install Redis + +### Linux + +If you don't find Redis in your Linux distro's package manager, check the [Redis downloads](https://redis.io/download) page for the source code and installation instructions. + +### MacOS + +On MacOS, [Homebrew](https://brew.sh/) can be used to install Redis. The steps are as follows: + +``` +brew update +brew install redis +``` + +Alternatively, it can be built from source, using the same [download page](https://redis.io/download/) as shown above. 
+ + + +## Run Redis + +You can launch the Redis server with the following command: + +``` +redis-server +``` \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/advanced/grid3_stellar_tfchain_bridge.md b/collections/manual/documentation/system_administrators/advanced/grid3_stellar_tfchain_bridge.md new file mode 100644 index 0000000..f74cbdb --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/grid3_stellar_tfchain_bridge.md @@ -0,0 +1,39 @@ +

Transferring TFT Between Stellar and TFChain

+ +

Table of Contents

+ +- [Usage](#usage) +- [Prerequisites](#prerequisites) +- [Stellar to TFChain](#stellar-to-tfchain) +- [TFChain to Stellar](#tfchain-to-stellar) + +*** + +## Usage + +This document will explain how you can transfer TFT from TFChain to Stellar and back. + +For more information on TFT bridges, read [this documentation](../threefold_token/tft_bridges/tft_bridges.md). + +## Prerequisites + +- [Stellar wallet](../threefold_token/storing_tft/storing_tft.md) + +- [Account on TFChain (use TF Dashboard to create one)](../dashboard/wallet_connector.md) + +![](./img/bridge.png) + +## Stellar to TFChain + +You can deposit to TFChain using the bridge page on the TF Dashboard, click deposit: + +![bridge](./img/bridge_deposit.png) + +## TFChain to Stellar + +You can bridge back to Stellar using the bridge page on the dashboard, click withdraw: + +![withdraw](./img/bridge_withdraw.png) + +A withdrawal fee of 1 TFT will be taken, so make sure you send an amount larger than 1 TFT. +The amount withdrawn from TFChain will be sent to your Stellar wallet. 
diff --git a/collections/manual/documentation/system_administrators/advanced/img/advanced_.png b/collections/manual/documentation/system_administrators/advanced/img/advanced_.png new file mode 100644 index 0000000..0065213 Binary files /dev/null and b/collections/manual/documentation/system_administrators/advanced/img/advanced_.png differ diff --git a/collections/manual/documentation/system_administrators/advanced/img/billing_rate.png b/collections/manual/documentation/system_administrators/advanced/img/billing_rate.png new file mode 100644 index 0000000..b22a35a Binary files /dev/null and b/collections/manual/documentation/system_administrators/advanced/img/billing_rate.png differ diff --git a/collections/manual/documentation/system_administrators/advanced/img/ipfs_logo.png b/collections/manual/documentation/system_administrators/advanced/img/ipfs_logo.png new file mode 100644 index 0000000..03a5db4 Binary files /dev/null and b/collections/manual/documentation/system_administrators/advanced/img/ipfs_logo.png differ diff --git a/collections/manual/documentation/system_administrators/advanced/img/polka_web_add_development_url.png b/collections/manual/documentation/system_administrators/advanced/img/polka_web_add_development_url.png new file mode 100644 index 0000000..159e7a1 Binary files /dev/null and b/collections/manual/documentation/system_administrators/advanced/img/polka_web_add_development_url.png differ diff --git a/collections/manual/documentation/system_administrators/advanced/img/polka_web_cancel_contracts.jpg b/collections/manual/documentation/system_administrators/advanced/img/polka_web_cancel_contracts.jpg new file mode 100644 index 0000000..b704900 Binary files /dev/null and b/collections/manual/documentation/system_administrators/advanced/img/polka_web_cancel_contracts.jpg differ diff --git a/collections/manual/documentation/system_administrators/advanced/img/swap_to_stellar.png 
b/collections/manual/documentation/system_administrators/advanced/img/swap_to_stellar.png new file mode 100644 index 0000000..0473664 Binary files /dev/null and b/collections/manual/documentation/system_administrators/advanced/img/swap_to_stellar.png differ diff --git a/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_fullvm.md b/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_fullvm.md new file mode 100644 index 0000000..e61a173 --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_fullvm.md @@ -0,0 +1,190 @@ +

IPFS on a Full VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Full VM](#deploy-a-full-vm) +- [Create a Root-Access User](#create-a-root-access-user) +- [Set a Firewall](#set-a-firewall) + - [Additional Ports](#additional-ports) +- [Install IPFS](#install-ipfs) +- [Set IPFS](#set-ipfs) +- [Final Verification](#final-verification) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this ThreeFold guide, we explore how to set up an IPFS node on a Full VM using the ThreeFold Playground. + +## Deploy a Full VM + +We start by deploying a full VM on the ThreeFold Playground. + +* Go to the [Threefold Playground](https://playground.grid.tf/#/) +* Deploy a full VM (Ubuntu 20.04) with an IPv4 address and at least the minimum specs + * IPv4 Address + * Minimum vcores: 1vcore + * Minimum MB of RAM: 1024MB + * Minimum storage: 50GB +* After deployment, note the VM IPv4 address +* Connect to the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` + +## Create a Root-Access User + +We create a root-access user. Note that this step is optional. + +* Once connected, create a new user with root access (for this guide we use "newuser") + * ``` + adduser newuser + ``` + * You should now see the new user directory + * ``` + ls /home + ``` + * Give sudo capacity to the new user + * ``` + usermod -aG sudo newuser + ``` + * Switch to the new user + * ``` + su - newuser + ``` + * Create a directory to store the public key + * ``` + mkdir ~/.ssh + ``` + * Give read, write and execute permissions for the directory to the new user + * ``` + chmod 700 ~/.ssh + ``` + * Add the SSH public key in the file **authorized_keys** and save it + * ``` + nano ~/.ssh/authorized_keys + ``` +* Exit the VM + * ``` + exit + ``` +* Reconnect with the new user + * ``` + ssh newuser@VM_IPv4_address + ``` + +## Set a Firewall + +We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules.
As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). +For our security rules, we want to allow SSH, HTTP and HTTPS (443 and 8443). +We thus add the following rules: +* Allow SSH (port 22) + * ``` + sudo ufw allow ssh + ``` +* Allow port 4001 + * ``` + sudo ufw allow 4001 + ``` +* To enable the firewall, write the following: + * ``` + sudo ufw enable + ``` +* To see the current security rules, write the following: + * ``` + sudo ufw status verbose + ``` +You now have enabled the firewall with proper security rules for your IPFS deployment. + +### Additional Ports + +We provided the basic firewall ports for your IPFS instance. There are other more advanced configurations possible. +If you want to access your IPFS node remotely, you can allow **port 5001**. This will allow anyone to access your IPFS node. Make sure that you know what you are doing if you go this route. You should, for example, restrict which external IP address can access port 5001. +If you want to run your deployment as a gateway node, you should allow **port 8080**. Read the IPFS documentation for more information on this. +If you want to run pubsub capabilities, you need to allow **port 8081**. For more information, read the [IPFS documentation](https://blog.ipfs.tech/25-pubsub/). + +## Install IPFS + +We install the [IPFS Kubo binary](https://docs.ipfs.tech/install/command-line/#install-official-binary-distributions). +* Download the binary + * ``` + wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz + ``` +* Unzip the file + * ``` + tar -xvzf kubo_v0.24.0_linux-amd64.tar.gz + ``` +* Change directory + * ``` + cd kubo + ``` +* Run the install script + * ``` + sudo bash install.sh + ``` +* Verify that IPFS Kubo is properly installed + * ``` + ipfs --version + ``` + +## Set IPFS + +We initialize IPFS and run the IPFS daemon. 
+ +* Initialize IPFS + * ``` + ipfs init --profile server + ``` +* Increase the storage capacity (optional) + * ``` + ipfs config Datastore.StorageMax 30GB + ``` +* Run the IPFS daemon + * ``` + ipfs daemon + ``` +* Set an Ubuntu systemd service to keep the IPFS daemon running after exiting the VM + * ``` + sudo nano /etc/systemd/system/ipfs.service + ``` +* Enter the systemd info + * ``` + [Unit] + Description=IPFS Daemon + [Service] + Type=simple + ExecStart=/usr/local/bin/ipfs daemon --enable-gc + Group=newuser + Restart=always + Environment="IPFS_PATH=/home/newuser/.ipfs" + [Install] + WantedBy=multi-user.target + ``` +* Enable the service + * ``` + sudo systemctl daemon-reload + sudo systemctl enable ipfs + sudo systemctl start ipfs + ``` +* Verify that the IPFS daemon is properly running + * ``` + sudo systemctl status ipfs + ``` +## Final Verification +We reboot and reconnect to the VM and verify that IPFS is properly running as a final verification. +* Reboot the VM + * ``` + sudo reboot + ``` +* Reconnect to the VM + * ``` + ssh newuser@VM_IPv4_address + ``` +* Check that the IPFS daemon is running + * ``` + ipfs swarm peers + ``` +## Questions and Feedback +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_microvm.md b/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_microvm.md new file mode 100644 index 0000000..2f58f16 --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_microvm.md @@ -0,0 +1,167 @@ +

IPFS on a Micro VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Micro VM](#deploy-a-micro-vm) +- [Install the Prerequisites](#install-the-prerequisites) +- [Set a Firewall](#set-a-firewall) + - [Additional Ports](#additional-ports) +- [Install IPFS](#install-ipfs) +- [Set IPFS](#set-ipfs) +- [Set IPFS with zinit](#set-ipfs-with-zinit) +- [Final Verification](#final-verification) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this ThreeFold guide, we explore how to set an IPFS node on a micro VM using the ThreeFold Playground. + +## Deploy a Micro VM + +We start by deploying a micro VM on the ThreeFold Playground. + +* Go to the [Threefold Playground](https://playground.grid.tf/#/) +* Deploy a micro VM (Ubuntu 22.04) with an IPv4 address + * IPv4 Address + * Minimum vcores: 1vcore + * Minimum MB of RAM: 1024MB + * Minimum storage: 50GB +* After deployment, note the VM IPv4 address +* Connect to the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` + +## Install the Prerequisites + +We install the prerequisites before installing and setting IPFS. + +* Update Ubuntu + * ``` + apt update + ``` +* Install nano and ufw + * ``` + apt install nano && apt install ufw -y + ``` + +## Set a Firewall + +We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). + +For our security rules, we want to allow SSH, HTTP and HTTPS (443 and 8443). + +We thus add the following rules: + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow port 4001 + * ``` + ufw allow 4001 + ``` +* To enable the firewall, write the following: + * ``` + ufw enable + ``` + +* To see the current security rules, write the following: + * ``` + ufw status verbose + ``` + +You have enabled the firewall with proper security rules for your IPFS deployment. 
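The firewall rules above can also be applied in a single pass. A minimal sketch, assuming a root shell on the micro VM; the `--force` flag skips ufw's interactive confirmation prompt:

```shell
# Apply the rules from this section non-interactively
ufw allow ssh        # SSH (port 22)
ufw allow 4001       # IPFS swarm port
ufw --force enable   # enable the firewall without the confirmation prompt
ufw status verbose   # confirm the active rules
```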
+ +### Additional Ports + +We provided the basic firewall ports for your IPFS instance. There are other more advanced configurations possible. + +If you want to access your IPFS node remotely, you can allow **port 5001**. This will allow anyone to access your IPFS node. Make sure that you know what you are doing if you go this route. You should, for example, restrict which external IP address can access port 5001. + +If you want to run your deployment as a gateway node, you should allow **port 8080**. Read the IPFS documentation for more information on this. + +If you want to run pubsub capabilities, you need to allow **port 8081**. For more information, read the [IPFS documentation](https://blog.ipfs.tech/25-pubsub/). + +## Install IPFS + +We install the [IPFS Kubo binary](https://docs.ipfs.tech/install/command-line/#install-official-binary-distributions). + +* Download the binary + * ``` + wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz + ``` +* Unzip the file + * ``` + tar -xvzf kubo_v0.24.0_linux-amd64.tar.gz + ``` +* Change directory + * ``` + cd kubo + ``` +* Run the install script + * ``` + bash install.sh + ``` +* Verify that IPFS Kubo is properly installed + * ``` + ipfs --version + ``` + +## Set IPFS + +We initialize IPFS and run the IPFS daemon. + +* Initialize IPFS + * ``` + ipfs init --profile server + ``` +* Increase the storage capacity (optional) + * ``` + ipfs config Datastore.StorageMax 30GB + ``` +* Run the IPFS daemon + * ``` + ipfs daemon + ``` + +## Set IPFS with zinit + +We set the IPFS daemon with zinit. This will make sure that the IPFS daemon starts at each VM reboot or if it stops functioning momentarily. 
+ +* Create the yaml file + * ``` + nano /etc/zinit/ipfs.yaml + ``` +* Set the execution command + * ``` + exec: /usr/local/bin/ipfs daemon + ``` +* Run the IPFS daemon with the zinit monitor command + * ``` + zinit monitor ipfs + ``` +* Verify that the IPFS daemon is running + * ``` + ipfs swarm peers + ``` + +## Final Verification + +We reboot and reconnect to the VM and verify that IPFS is properly running as a final verification. + +* Reboot the VM + * ``` + reboot -f + ``` +* Reconnect to the VM and verify that the IPFS daemon is running + * ``` + ipfs swarm peers + ``` + +## Questions and Feedback + +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_toc.md b/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_toc.md new file mode 100644 index 0000000..15d9c4a --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/ipfs/ipfs_toc.md @@ -0,0 +1,6 @@ +

IPFS and ThreeFold

+ +

Table of Contents

+ +- [IPFS on a Full VM](./ipfs_fullvm.md) +- [IPFS on a Micro VM](./ipfs_microvm.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/advanced/list_public_ips.md b/collections/manual/documentation/system_administrators/advanced/list_public_ips.md new file mode 100644 index 0000000..80fb72a --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/list_public_ips.md @@ -0,0 +1,22 @@ +

Listing Public IPs

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +Listing public IPs can be done by asking GraphQL for all IPs that have `contractId = 0`. + +## Example + +```graphql +query MyQuery { + publicIps(where: {contractId_eq: 0}) { + ip + } +} +``` diff --git a/collections/manual/documentation/system_administrators/advanced/token_transfer_keygenerator.md b/collections/manual/documentation/system_administrators/advanced/token_transfer_keygenerator.md new file mode 100644 index 0000000..38c9ce1 --- /dev/null +++ b/collections/manual/documentation/system_administrators/advanced/token_transfer_keygenerator.md @@ -0,0 +1,88 @@ +

Transfer TFT Between Networks by Using the Keygenerator

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) + - [Keypair](#keypair) + - [Stellar to TFChain](#stellar-to-tfchain) + - [Alternative Transfer to TF Chain](#alternative-transfer-to-tf-chain) +- [TFChain to Stellar](#tfchain-to-stellar) + +*** + +## Introduction + +With this method, transfers are only possible between accounts that you own and that were generated in the same manner. Please find the keygen tooling for it below. + +## Prerequisites + +### Keypair + +- ed25519 keypair +- Go installed on your local computer + +Create a keypair with the following tool: + +```sh +go build . +./keygen +``` + +### Stellar to TFChain + +Create a Stellar wallet from the key that you generated. +Transfer the TFT from your wallet to the bridge address. A deposit fee of 1 TFT will be taken, so make sure you send an amount larger than 1 TFT. + +Bridge addresses: + +- On Mainnet: `GBNOTAYUMXVO5QDYWYO2SOCOYIJ3XFIP65GKOQN7H65ZZSO6BK4SLWSC` on [Stellar Mainnet](https://stellar.expert/explorer/public). +- On Testnet: `GA2CWNBUHX7NZ3B5GR4I23FMU7VY5RPA77IUJTIXTTTGKYSKDSV6LUA4` on [Stellar Testnet](https://stellar.expert/explorer/testnet). + +The amount deposited on TF Chain minus 1 TFT will be transferred over the bridge to the TFChain account. + +The effect will be the following: + +- Transferred TFTs from Stellar will be sent to a Stellar vault account representing all tokens on TFChain +- TFTs will be minted on the TFChain for the transferred amount + +### Alternative Transfer to TF Chain + +We also enabled deposits to TF Grid objects. The following objects can be deposited to: + +- Twin +- Farm +- Node +- Entity + +To deposit to any of these objects, a memo text in the format `object_objectID` must be passed on the deposit to the bridge wallet. Example: `twin_1`. + +To deposit to a TF Grid object, this object **must** exist. If the object is not found on chain, a refund is issued. + +## TFChain to Stellar + +Create a TFChain account from the key that you generated.
(TF Chain raw seed). +Browse to: + +- For mainnet: +- For testnet: +- For Devnet: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/accounts + +-> Add Account -> Click on mnemonic and select `Raw Seed` -> Paste raw TF Chain seed. + +Select `Advanced creation options` -> Change `keypair crypto type` to `Edwards (ed25519)`. Click `I have saved my mnemonic seed safely` and proceed. + +Choose a name and password and proceed. + +Browse to the [extrinsics](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/extrinsics), select tftBridgeModule and extrinsic: `swap_to_stellar`. Provide your Bridge substrate address and the amount to transfer. Sign using your password. +Again, a withdrawal fee of 1 TFT will be taken, so make sure you send an amount larger than 1 TFT. + +The amount withdrawn from TFChain will be sent to your Stellar wallet. + +Behind the scenes, the following will happen: + +- The transferred TFTs will be sent from the Stellar vault account to the user's Stellar account +- TFTs will be burned on the TFChain for the transferred amount + +Example: ![swap_to_stellar](img/swap_to_stellar.png ':size=400') diff --git a/collections/manual/documentation/system_administrators/computer_it_basics/cli_scripts_basics.md b/collections/manual/documentation/system_administrators/computer_it_basics/cli_scripts_basics.md new file mode 100644 index 0000000..dec08b2 --- /dev/null +++ b/collections/manual/documentation/system_administrators/computer_it_basics/cli_scripts_basics.md @@ -0,0 +1,1138 @@ +

CLI and Scripts Basics

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Basic Commands](#basic-commands) + - [Update and upgrade packages](#update-and-upgrade-packages) + - [Test the network connectivity of a domain or an IP address with ping](#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) + - [Install Go](#install-go) + - [Install Brew](#install-brew) + - [Brew basic commands](#brew-basic-commands) + - [Install Terraform with Brew](#install-terraform-with-brew) + - [Yarn basic commands](#yarn-basic-commands) + - [Set default terminal](#set-default-terminal) + - [See the current path](#see-the-current-path) + - [List hidden files](#list-hidden-files) + - [Display the content of a directory](#display-the-content-of-a-directory) + - [Vim modes and basic commands](#vim-modes-and-basic-commands) + - [Check the listening ports using netstat](#check-the-listening-ports-using-netstat) + - [See the disk usage of different folders](#see-the-disk-usage-of-different-folders) + - [Verify the application version](#verify-the-application-version) + - [Find the path of a file with only the file name](#find-the-path-of-a-file-with-only-the-file-name) + - [Become the superuser (su) on Linux](#become-the-superuser-su-on-linux) + - [Exit a session](#exit-a-session) + - [Know the current user](#know-the-current-user) + - [Set the path of a package](#set-the-path-of-a-package) + - [See the current path](#see-the-current-path-1) + - [Find the current shell](#find-the-current-shell) + - [SSH into Remote Server](#ssh-into-remote-server) + - [Replace a string by another string in a text file](#replace-a-string-by-another-string-in-a-text-file) + - [Replace extensions of files in a folder](#replace-extensions-of-files-in-a-folder) + - [Remove extension of files in a folder](#remove-extension-of-files-in-a-folder) + - [See the current date and time on Linux](#see-the-current-date-and-time-on-linux) + - [Special variables in Bash Shell](#special-variables-in-bash-shell) + - [Gather DNS information of 
a website](#gather-dns-information-of-a-website) + - [Partition and mount a disk](#partition-and-mount-a-disk) +- [Encryption](#encryption) + - [Encrypt files with Gocryptfs](#encrypt-files-with-gocryptfs) + - [Encrypt files with Veracrypt](#encrypt-files-with-veracrypt) +- [Network-related Commands](#network-related-commands) + - [See the network connections and ports](#see-the-network-connections-and-ports) + - [See identity and info of IP address](#see-identity-and-info-of-ip-address) + - [ip basic commands](#ip-basic-commands) + - [Display socket statistics](#display-socket-statistics) + - [Query or control network driver and hardware settings](#query-or-control-network-driver-and-hardware-settings) + - [See if ethernet port is active](#see-if-ethernet-port-is-active) + - [Add IP address to hardware port (ethernet)](#add-ip-address-to-hardware-port-ethernet) + - [Private IP address range](#private-ip-address-range) + - [Set IP Address manually](#set-ip-address-manually) +- [Basic Scripts](#basic-scripts) + - [Run a script with arguments](#run-a-script-with-arguments) + - [Print all arguments](#print-all-arguments) + - [Iterate over arguments](#iterate-over-arguments) + - [Count lines in files given as arguments](#count-lines-in-files-given-as-arguments) + - [Find path of a file](#find-path-of-a-file) + - [Print how many arguments are passed in a script](#print-how-many-arguments-are-passed-in-a-script) +- [Linux](#linux) + - [Install Terraform](#install-terraform) +- [MAC](#mac) + - [Enable remote login on MAC](#enable-remote-login-on-mac) + - [Find Other storage on MAC](#find-other-storage-on-mac) + - [Sort files by size and extension on MAC](#sort-files-by-size-and-extension-on-mac) +- [Windows](#windows) + - [Install Chocolatey](#install-chocolatey) + - [Install Terraform with Chocolatey](#install-terraform-with-chocolatey) + - [Find the product key](#find-the-product-key) + - [Find Windows license type](#find-windows-license-type) +- 
[References](#references) + +*** + +## Introduction + +We present here a quick guide on different command-line interface (CLI) commands as well as some basic scripts. + +The main goal of this guide is to demonstrate that having some core understanding of CLI and scripts can drastically increase efficiency and speed when it comes to deploying and managing workloads on the TFGrid. + +## Basic Commands + +### Update and upgrade packages + +The command **update** ensures that you have access to the latest versions of packages available. + +``` +sudo apt update +``` + +The command **upgrade** downloads and installs the updates for each outdated package and dependency on your system. + +``` +sudo apt upgrade +``` + + + +### Test the network connectivity of a domain or an IP address with ping + +To test the network connectivity of a domain or an IP address, you can use `ping` on Linux, MAC and Windows: + +* Template + ``` + ping + ``` +* Example + ``` + ping threefold.io + ``` + +On Windows, by default, the command will send 4 packets. On MAC and Linux, it will keep on sending packets, so you will need to press `Ctrl-C` to stop the command from running. + +You can also set a number of counts with `-c` on Linux and MAC and `-n` on Windows. + +* Send a given number of packets on Linux and MAC (e.g 5 packets) + ``` + ping -c 5 threefold.io + ``` +* Send a given number of packets on Windows (e.g 5 packets) + ``` + ping -n 5 threefold.io + ``` + +*** + +### Install Go + +Here are the steps to install [Go](https://go.dev/). + +* Install go + * ``` + sudo apt install golang-go + ``` +* Verify that go is properly installed + * ``` + go version + ``` + + + +### Install Brew + +Follow those steps to install [Brew](https://brew.sh/) + +* Installation command from Brew: + * ``` + /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" + ``` +* Add the path to the **.profile** directory. Replace by your username. 
+ * ``` + echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> /home//.profile + ``` +* Evaluate the following: + * ``` + eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)" + ``` +* Verify the installation + * ``` + brew doctor + ``` + + + +### Brew basic commands + +* To update brew in general: + * ``` + brew update + ``` +* To update a specific package: + * ``` + brew upgrade + ``` +* To install a package: + * ``` + brew install + ``` +* To uninstall a package: + * ``` + brew uninstall + ``` +* To search a package: + * ``` + brew search + ``` +* [Uninstall Brew](https://github.com/homebrew/install#uninstall-homebrew) + * ``` + /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)" + ``` + + + +### Install Terraform with Brew + +Installing Terraform with Brew is very simple: just follow the [Terraform documentation](https://developer.hashicorp.com/terraform/downloads). + +* Compile HashiCorp software on Homebrew's infrastructure + * ``` + brew tap hashicorp/tap + ``` +* Install Terraform + * ``` + brew install hashicorp/tap/terraform + ``` + + + +### Yarn basic commands + +* Add a package + * ``` + yarn add + ``` +* Initialize the development of a package + * ``` + yarn init + ``` +* Install all the dependencies in the **package.json** file + * ``` + yarn install + ``` +* Publish a package to a package manager + * ``` + yarn publish + ``` +* Remove an unused package from the current package + * ``` + yarn remove + ``` +* Clean the cache + * ``` + yarn cache clean + ``` + + + +### Set default terminal + +``` +update-alternatives --config x-terminal-emulator +``` + +### See the current path + +``` +pwd +``` + + + +### List hidden files + +``` +ls -ld .?* +``` + + + +### Display the content of a directory + +You can use **tree** to display the files and organization of a directory: + +* General command + * ``` + tree + ``` +* View hidden files + * ``` + tree -a + ``` + + + +### Vim modes and basic commands
+[Vim](https://www.vim.org/) is a free and open-source, screen-based text editor program. + +With Vim, you can use two modes. + +* Insert mode - normal text editor + * Press **i** +* Command mode - commands to the editor + * Press **ESC** + +Here are some basic commands: + +* Delete characters + * **x** +* Undo last command + * **u** +* Undo the whole line + * **U** +* Go to the end of line + * **A** +* Save and exit + * **:wq** +* Discard all changes + * **:q!** +* Move cursor to the start of the line + * **0** +* Delete the current word + * **dw** +* Delete the current line + * **dd** + + + +### Check the listening ports using netstat + +Use the command: + +``` +netstat +``` + + + + +### See the disk usage of different folders + +``` +du -sh * +``` + + + + +### Verify the application version + +``` +which +``` + + + +### Find the path of a file with only the file name + +On MAC and Linux, you can use **coreutils** and **realpath** from Brew: + +* ``` + brew install coreutils + ``` +* ``` + realpath file_name + ``` + + + +### Become the superuser (su) on Linux + +You can use either command: + +* Option 1 + * ``` + sudo -i + ``` +* Option 2 + * ``` + sudo -s + ``` + + + +### Exit a session + +You can use either command depending on your shell: + +* ``` + exit + ``` +* ``` + logout + ``` + + + +### Know the current user + +You can use the following command: + +* ``` + whoami + ``` + + + +### See the path of a package + +To see the path of a package, you can use the following command: + +* ``` + whereis + ``` + + + +### Set the path of a package + +``` +export PATH=$PATH:/snap/bin + +``` + + + + +### See the current path + +``` +pwd +``` + + + +### Find the current shell + +* Compact version + * ``` + echo $SHELL + ``` +* Detailed version + * ``` + ls -l /proc/$$/exe + ``` + + + +### SSH into Remote Server + +* Create SSH key pair + * ``` + ssh-keygen + ``` +* Install openssh-client on the local computer* + * ``` + sudo apt install openssh-client + ``` +* Install 
openssh-server on the remote computer* + * ``` + sudo apt install openssh-server + ``` +* Copy public key + * ``` + cat ~/.ssh/id_rsa.pub + ``` +* Create the ssh directory on the remote computer + * ``` + mkdir ~/.ssh + ``` +* Add public key in the file **authorized_keys** on the remote computer + * ``` + nano ~/.ssh/authorized_keys + ``` +* Check openssh-server status + * ``` + sudo service ssh status + ``` +* SSH into the remote machine + * ``` + ssh @ + ``` + +\*Note: For MAC, you can install **openssh-server** and **openssh-client** with Brew: **brew install openssh-server** and **brew install openssh-client**. + +To enable remote login on a MAC, [read this section](#enable-remote-login-on-mac). + + + +### Replace a string by another string in a text file + +* Replace one string by another (e.g. **old_string**, **new_string**) + * ``` + sed -i 's/old_string/new_string/g' / + ``` +* Use environment variables (double quotes) + * ``` + sed -i "s/old_string/$env_variable/g" / + ``` + + + +### Replace extensions of files in a folder + +Replace **ext1** and **ext2** by the extensions in question. + +``` +find ./ -depth -name "*.ext1" -exec sh -c 'mv "$1" "${1%.ext1}.ext2"' _ {} \; +``` + + + +### Remove extension of files in a folder + +Replace **ext** with the extension in question. + +```bash +for file in *.ext; do + mv -- "$file" "${file%%.ext}" +done +``` + + + +### See the current date and time on Linux + +``` +date +``` + + + +### Special variables in Bash Shell + +| Special Variables | Descriptions | +| ---------------- | ----------------------------------------------- | +| $0 | Name of the bash script | +| $1, $2...$n | Bash script arguments | +| $$ | Process id of the current shell | +| $* | String containing every command-line argument | +| $# | Total number of arguments passed to the script | +| $@ | Value of all the arguments passed to the script | +| $? | Exit status of the last executed command | +| $! 
| Process id of the last background command | +| $- | Print the current set of options in the current shell | + + + +### Gather DNS information of a website + +You can use [Dig](https://man.archlinux.org/man/dig.1) to gather DNS information of a website. + +* Template + * ``` + dig + ``` +* Example + * ``` + dig threefold.io + ``` + +You can also use online tools such as [DNS Checker](https://dnschecker.org/). + + + +### Partition and mount a disk + +We present one of many ways to partition and mount a disk. + +* Create a partition with [gparted](https://gparted.org/) + * ``` + sudo gparted + ``` +* Find the disk you want to mount (e.g. **sdb**) + * ``` + sudo fdisk -l + ``` +* Create a directory to mount the disk to + * ``` + sudo mkdir /mnt/disk + ``` +* Open fstab + * ``` + sudo nano /etc/fstab + ``` +* Append the following to the fstab with the proper disk path (e.g. **/dev/sdb**) and mount point (e.g. **/mnt/disk**) + * ``` + /dev/sdb /mnt/disk ext4 defaults 0 0 + ``` +* Mount the disk + * ``` + sudo mount /mnt/disk + ``` +* Add permissions (as needed) + * ``` + sudo chmod -R 0777 /mnt/disk + ``` + + + +## Encryption + +### Encrypt files with Gocryptfs + +You can use [gocryptfs](https://github.com/rfjakob/gocryptfs) to encrypt files. + +* Install gocryptfs + * ``` + apt install gocryptfs + ``` +* Create a vault directory (e.g. **vaultdir**) and a mount directory (e.g. **mountdir**) + * ``` + mkdir vaultdir mountdir + ``` +* Initialize the vault + * ``` + gocryptfs -init vaultdir + ``` +* Mount the mount directory with the vault + * ``` + gocryptfs vaultdir mountdir + ``` +* You can now create files in the folder.
For example: + * ``` + touch mountdir/test.txt + ``` +* The new file **test.txt** is now encrypted in the vault + * ``` + ls vaultdir + ``` +* To unmount the mountedvault folder: + * Option 1 + * ``` + fusermount -u mountdir + ``` + * Option 2 + * ``` + rmdir mountdir + ``` + + +### Encrypt files with Veracrypt + +To encrypt files, you can use [Veracrypt](https://www.veracrypt.fr/en/Home.html). Let's see how to download and install Veracrypt. + +* Veracrypt GUI + * Download the package + * ``` + wget https://launchpad.net/veracrypt/trunk/1.25.9/+download/veracrypt-1.25.9-Ubuntu-22.04-amd64.deb + ``` + * Install the package + * ``` + dpkg -i ./veracrypt-1.25.9-Ubuntu-22.04-amd64.deb + ``` +* Veracrypt console only + * Download the package + * ``` + wget https://launchpad.net/veracrypt/trunk/1.25.9/+download/veracrypt-console-1.25.9-Ubuntu-22.04-amd64.deb + ``` + * Install the package + * ``` + dpkg -i ./veracrypt-console-1.25.9-Ubuntu-22.04-amd64.deb + ``` + +You can visit [Veracrypt download page](https://www.veracrypt.fr/en/Downloads.html) to get the newest releases. + +* To run Veracrypt + * ``` + veracrypt + ``` +* Veracrypt documentation is very complete. To begin using the application, visit the [Beginner's Tutorial](https://www.veracrypt.fr/en/Beginner%27s%20Tutorial.html). 
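For a quick one-off alternative to the encrypted-volume tools above, a single file can also be encrypted from the command line with OpenSSL. A minimal sketch, assuming the `openssl` CLI is installed; the file name and passphrase are placeholders:

```shell
# Encrypt a single file with AES-256-CBC, deriving the key from a passphrase via PBKDF2
echo "my secret data" > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -in secret.txt -out secret.txt.enc -pass pass:mypassphrase

# Decrypt it back and verify the round trip
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.txt.enc -out decrypted.txt -pass pass:mypassphrase
cmp secret.txt decrypted.txt && echo "round trip OK"
```

Note that passing the passphrase with `-pass pass:...` exposes it to other local users via the process list; for anything sensitive, omit `-pass` and let openssl prompt for it interactively.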
+ + + +## Network-related Commands + +### See the network connections and ports + +ifconfig + + + +### See identity and info of IP address + +* See abuses related to an IP address: + * ``` + https://www.abuseipdb.com/check/ + ``` +* See general information of an IP address: + * ``` + https://www.whois.com/whois/ + ``` + + + +### ip basic commands + +* Manage and display the state of all network + * ``` + ip link + ``` +* Display IP Addresses and property information (abbreviation of address) + * ``` + ip addr + ``` +* Display and alter the routing table + * ``` + ip route + ``` +* Manage and display multicast IP addresses + * ``` + ip maddr + ``` +* Show neighbour object + * ``` + ip neigh + ``` +* Display a list of commands and arguments for +each subcommand + * ``` + ip help + ``` +* Add an address + * Template + * ``` + ip addr add + ``` + * Example: set IP address to device **enp0** + * ``` + ip addr add 192.168.3.4/24 dev enp0 + ``` +* Delete an address + * Template + * ``` + ip addr del + ``` + * Example: set IP address to device **enp0** + * ``` + ip addr del 192.168.3.4/24 dev enp0 + ``` +* Alter the status of an interface + * Template + * ``` + ip link set + ``` + * Example 1: Bring interface online (here device **em2**) + * ``` + ip link set em2 up + ``` + * Example 2: Bring interface offline (here device **em2**) + * ``` + ip link set em2 down + ``` +* Add a multicast address + * Template + * ``` + ip maddr add + ``` + * Example : set IP address to device **em2** + * ``` + ip maddr add 33:32:00:00:00:01 dev em2 + ``` +* Delete a multicast address + * Template + * ``` + ip maddr del + ``` + * Example: set IP address to device **em2** + * ``` + ip maddr del 33:32:00:00:00:01 dev em2 + ``` +* Add a routing table entry + * Template + * ``` + ip route add + ``` + * Example 1: Add a default route (for all addresses) via a local gateway + * ``` + ip route add default via 192.168.1.1 dev em1 + ``` + * Example 2: Add a route to 192.168.3.0/24 via the gateway at 
192.168.3.2
+    * ```
+      ip route add 192.168.3.0/24 via 192.168.3.2
+      ```
+  * Example 3: Add a route to 192.168.1.0/24 that can be reached on device em1
+    * ```
+      ip route add 192.168.1.0/24 dev em1
+      ```
+* Delete a routing table entry
+  * Template
+    * ```
+      ip route delete <network> via <gateway_IP>
+      ```
+  * Example: Delete the route for 192.168.1.0/24 via the gateway at 192.168.1.1
+    * ```
+      ip route delete 192.168.1.0/24 via 192.168.1.1
+      ```
+* Replace, or add, a route
+  * Template
+    * ```
+      ip route replace <network> dev <device>
+      ```
+  * Example: Replace the defined route for 192.168.1.0/24 to use device em1
+    * ```
+      ip route replace 192.168.1.0/24 dev em1
+      ```
+* Display the route an address will take
+  * Template
+    * ```
+      ip route get <IP_address>
+      ```
+  * Example: Display the route taken for IP 192.168.18.25
+    * ```
+      ip route get 192.168.18.25
+      ```
+
+
+
+References: https://www.commandlinux.com/man-page/man8/ip.8.html
+
+
+
+### Display socket statistics
+
+* Show all sockets
+  * ```
+    ss -a
+    ```
+* Show detailed socket information
+  * ```
+    ss -e
+    ```
+* Show timer information
+  * ```
+    ss -o
+    ```
+* Do not resolve addresses
+  * ```
+    ss -n
+    ```
+* Show processes using the socket
+  * ```
+    ss -p
+    ```
+
+Note: You can combine parameters, e.g. **ss -aeo**.
+
+References: https://www.commandlinux.com/man-page/man8/ss.8.html
+
+
+
+### Query or control network driver and hardware settings
+
+* Display ring buffer for a device (e.g. **eth0**)
+  * ```
+    ethtool -g eth0
+    ```
+* Display driver information for a device (e.g. **eth0**)
+  * ```
+    ethtool -i eth0
+    ```
+* Identify eth0 by sight, e.g. by causing LEDs to blink on the network port
+  * ```
+    ethtool -p eth0
+    ```
+* Display network and driver statistics for a device (e.g.
**eth0**)
+  * ```
+    ethtool -S eth0
+    ```
+
+References: https://man.archlinux.org/man/ethtool.8.en
+
+
+
+### See if ethernet port is active
+
+Replace `<device>` with the proper device:
+
+```
+cat /sys/class/net/<device>/carrier
+```
+
+
+
+### Add IP address to hardware port (ethernet)
+
+* Find ethernet port ID on both computers
+  * ```
+    ip a
+    ```
+* Add IP address (DHCP or static)
+  * Computer 1
+    * ```
+      ip addr add <IP_address_1>/24 dev <device>
+      ```
+  * Computer 2
+    * ```
+      ip addr add <IP_address_2>/24 dev <device>
+      ```
+
+* [Ping](#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the address to confirm connection
+  * ```
+    ping <IP_address>
+    ```
+
+To set and view the address for either DHCP or static, go to **Networks** then **Details**.
+
+
+
+### Private IP address range
+
+The private IP ranges are the following:
+
+* 10.0.0.0–10.255.255.255
+* 172.16.0.0–172.31.255.255
+* 192.168.0.0–192.168.255.255
+
+
+
+### Set IP Address manually
+
+You can use the following template when you set an IP address manually:
+
+* Address
+  * `<IP_address>`
+* Netmask
+  * 255.255.255.0
+* Gateway
+  * optional
+
+
+
+## Basic Scripts
+
+### Run a script with arguments
+
+You can use the following template to add arguments when running a script:
+
+* Option 1
+  * ```
+    ./example_script.sh arg1 arg2
+    ```
+* Option 2
+  * ```
+    sh example_script.sh "arg1" "arg2"
+    ```
+
+### Print all arguments
+
+* Write a script
+  * File: `example_script.sh`
+  * ```bash
+    #!/bin/sh
+    echo $@
+    ```
+* Give permissions
+  * ```bash
+    chmod +x ./example_script.sh
+    ```
+* Run the script with arguments
+  * ```bash
+    sh example_script.sh arg1 arg2
+    ```
+
+
+### Iterate over arguments
+
+* Write the script
+  * ```bash
+    #!/bin/bash
+    # iterate_script.sh
+    for i; do
+      echo $i
+    done
+    ```
+* Give permissions
+  * ```
+    chmod +x ./iterate_script.sh
+    ```
+* Run the script with arguments
+  * ```
+    sh iterate_script.sh arg1 arg2
+    ```
+
+* The following script is equivalent
+  * ```bash
+    #!/bin/bash
+    # iterate_script.sh
+    for i in "$@"; do
+      echo $i
+    done
+    ```
+
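The difference between the loop forms above comes down to how `$@` and `$*` expand. A quick sketch you can run in any POSIX shell (the `set --` line just simulates two positional arguments containing spaces):

```shell
#!/bin/sh
# "$@" keeps each argument as its own word; "$*" joins all arguments into one word.
set -- "first arg" "second arg"   # simulate two positional arguments

printf '%s\n' "$@"   # prints two lines, one per argument
printf '%s\n' "$*"   # prints a single joined line
```

Unquoted `$@` and `$*` are both split on whitespace, which is why only the quoted `"$@"` form handles arguments containing spaces reliably.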
+
+### Count lines in files given as arguments
+
+* Write the script
+  * ```bash
+    #!/bin/bash
+    # count_lines.sh
+    for i in $*; do
+      nlines=$(wc -l < $i)
+      echo "There are $nlines lines in $i"
+    done
+    ```
+* Give permissions
+  * ```
+    chmod +x ./count_lines.sh
+    ```
+* Run the script with arguments (files). Here we use the script itself as an example.
+  * ```
+    sh count_lines.sh count_lines.sh
+    ```
+
+
+
+### Find path of a file
+
+* Write the script
+  * ```bash
+    #!/bin/bash
+    # find.sh
+
+    find / -iname $1 2> /dev/null
+    ```
+* Run the script
+  * ```
+    sh find.sh <filename>
+    ```
+
+
+
+### Print how many arguments are passed in a script
+
+* Write the script
+  * ```bash
+    #!/bin/bash
+    # print_qty_args.sh
+    echo This script was passed $# arguments
+    ```
+* Run the script
+  * ```
+    sh print_qty_args.sh <arg1> <arg2>
+    ```
+
+
+## Linux
+
+### Install Terraform
+
+Here are the steps to install Terraform on Linux based on the [Terraform documentation](https://developer.hashicorp.com/terraform/downloads).
+
+```
+wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
+```
+```
+echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
+```
+```
+sudo apt update && sudo apt install terraform
+```
+
+Note that the Terraform documentation also covers other methods to install Terraform on Linux.
+
+## MAC
+
+### Enable remote login on MAC
+
+* Option 1:
+  * Use the following command line:
+    * ```
+      systemsetup -setremotelogin on
+      ```
+* Option 2
+  * Use **System Preferences**
+  * Go to **System Preferences** -> **Sharing** -> **Enable Remote Login**.
+
+
+
+### Find Other storage on MAC
+
+* Open **Finder** \> **Go** \> **Go to Folder**
+* Paste this path
+  * ```
+    ~/Library/Caches
+    ```
+
+
+
+### Sort files by size and extension on MAC
+
+* From your desktop, press **Command-F**.
+* Click **This Mac**. +* Click the first dropdown menu field and select **Other**. +* From the **Search Attributes** window + * tick **File Size** and **File Extension**. + + + +## Windows + +### Install Chocolatey + +To install Chocolatey on Windows, we follow the [official Chocolatey website](https://chocolatey.org/install) instructions. + +* Run PowerShell as Administrator +* Check if **Get-ExecutionPolicy** is restricted + * ``` + Get-ExecutionPolicy + ``` + * If it is restricted, run the following command: + * ``` + Set-ExecutionPolicy AllSigned + ``` +* Install Chocolatey + * ``` + Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')) + ``` +* Note: You might need to restart PowerShell to use Chocolatey + + + +### Install Terraform with Chocolatey + +Once you've installed Chocolatey on Windows, installing Terraform is as simple as can be: + +* Install Terraform with Chocolatey + * ``` + choco install terraform + ``` + + + +### Find the product key + +Write the following in **Command Prompt** (run as administrator): + +``` +wmic path SoftwareLicensingService get OA3xOriginalProductKey +``` + + + +### Find Windows license type + +Write the following in **Command Prompt**: + +``` +slmgr /dli +``` + + + +## References + +* GNU Bash Manual - https://www.gnu.org/software/bash/manual/bash.html \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/computer_it_basics/computer_it_basics.md b/collections/manual/documentation/system_administrators/computer_it_basics/computer_it_basics.md new file mode 100644 index 0000000..c1dfdfd --- /dev/null +++ b/collections/manual/documentation/system_administrators/computer_it_basics/computer_it_basics.md @@ -0,0 +1,15 @@ +

<h1>Computer and IT Basics</h1>

+ +Welcome to the *Computer and IT Basics* section of the ThreeFold Manual! + +In this section, tailored specifically for system administrators, we'll delve into fundamental concepts and tools that form the backbone of managing and securing infrastructure. Whether you're a seasoned sysadmin or just starting your journey, these basics are essential for navigating the intricacies of the ThreeFold Grid. + +

<h2>Table of Contents</h2>

+ +- [CLI and Scripts Basics](./cli_scripts_basics.md) +- [Docker Basics](./docker_basics.md) +- [Git and GitHub Basics](./git_github_basics.md) +- [Firewall Basics](./firewall_basics/firewall_basics.md) + - [UFW Basics](./firewall_basics/ufw_basics.md) + - [Firewalld Basics](./firewall_basics/firewalld_basics.md) +- [File Transfer](./file_transfer.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/computer_it_basics/docker_basics.md b/collections/manual/documentation/system_administrators/computer_it_basics/docker_basics.md new file mode 100644 index 0000000..5a7f297 --- /dev/null +++ b/collections/manual/documentation/system_administrators/computer_it_basics/docker_basics.md @@ -0,0 +1,458 @@ +

<h1>Docker Basic Commands</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Basic Commands](#basic-commands) + - [Install Docker Desktop and Docker Engine](#install-docker-desktop-and-docker-engine) + - [Remove completely Docker](#remove-completely-docker) + - [List containers](#list-containers) + - [Pull an image](#pull-an-image) + - [Push an image](#push-an-image) + - [Inspect and pull an image with GHCR](#inspect-and-pull-an-image-with-ghcr) + - [See a docker image (no download)](#see-a-docker-image-no-download) + - [Build a container](#build-a-container) + - [List all available docker images](#list-all-available-docker-images) + - [Run a container](#run-a-container) + - [Run a new command in an existing container](#run-a-new-command-in-an-existing-container) + - [Bash shell into container](#bash-shell-into-container) + - [Pass arguments with a bash script and a Dockerfile](#pass-arguments-with-a-bash-script-and-a-dockerfile) + - [Copy files from a container to the local computer](#copy-files-from-a-container-to-the-local-computer) + - [Delete all the containers, images and volumes](#delete-all-the-containers-images-and-volumes) + - [Kill all the Docker processes](#kill-all-the-docker-processes) + - [Output full logs for all containers](#output-full-logs-for-all-containers) +- [Resources Usage](#resources-usage) + - [Examine containers with size](#examine-containers-with-size) + - [Examine disks usage](#examine-disks-usage) +- [Wasted Resources](#wasted-resources) + - [Prune the Docker logs](#prune-the-docker-logs) + - [Prune the Docker containers](#prune-the-docker-containers) + - [Remove unused and untagged local container images](#remove-unused-and-untagged-local-container-images) + - [Clean up and delete all unused container images](#clean-up-and-delete-all-unused-container-images) + - [Clean up container images based on a given timeframe](#clean-up-container-images-based-on-a-given-timeframe) +- [Command Combinations](#command-combinations) + - [Kill all running 
containers](#kill-all-running-containers) + - [Stop all running containers](#stop-all-running-containers) + - [Delete all stopped containers](#delete-all-stopped-containers) + - [Delete all images](#delete-all-images) + - [Update and stop a container in a crash-loop](#update-and-stop-a-container-in-a-crash-loop) +- [References](#references) + +*** + +## Introduction + +We present here a quick introduction to Docker. We cover basic commands, as well as command combinations. Understanding the following should give system administrators confidence when it comes to using Docker efficiently. + +The following can serve as a quick reference guide when deploying workloads on the ThreeFold Grid and using Docker in general. + +We invite the readers to consult the [official Docker documentation](https://docs.docker.com/) for more information. + + + +## Basic Commands + +### Install Docker Desktop and Docker Engine + +You can install [Docker Desktop](https://docs.docker.com/get-docker/) and [Docker Engine](https://docs.docker.com/engine/install/) for Linux, MAC and Windows. Follow the official Docker documentation for the details. + +Note that the quickest way to install Docker Engine is to use the convenience script: + +``` +curl -fsSL https://get.docker.com -o get-docker.sh +sudo sh get-docker.sh +``` + + + +### Remove completely Docker + +To completely remove docker from your machine, you can follow these steps: + +* List the docker packages + * ``` + dpkg -l | grep -i docker + ``` +* Purge and autoremove docker + * ``` + apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli docker-compose-plugin + apt-get autoremove -y --purge docker-engine docker docker.io docker-ce docker-compose-plugin + ``` +* Remove the docker files and folders + * ``` + rm -rf /var/lib/docker /etc/docker + rm /etc/apparmor.d/docker + groupdel docker + rm -rf /var/run/docker.sock + ``` + +You can also use the command **whereis docker** to see if any Docker folders and files remain. 
If so, remove them as well.
+
+
+
+### List containers
+
+* List only running containers
+  * ```
+    docker ps
+    ```
+* List all containers (running + stopped)
+  * ```
+    docker ps -a
+    ```
+
+
+
+### Pull an image
+
+To pull an image from [Docker Hub](https://hub.docker.com/):
+
+* Pull an image
+  * ```
+    docker pull <image>
+    ```
+* Pull an image with the tag
+  * ```
+    docker pull <image>:<tag>
+    ```
+* Pull all tags of an image
+  * ```
+    docker pull -a <image>
+    ```
+
+
+
+### Push an image
+
+To push an image to [Docker Hub](https://hub.docker.com/):
+
+* Push an image
+  * ```
+    docker push <image>
+    ```
+* Push an image with the tag
+  * ```
+    docker push <image>:<tag>
+    ```
+* Push all tags of an image
+  * ```
+    docker push -a <image>
+    ```
+
+
+
+### Inspect and pull an image with GHCR
+
+* Inspect the docker image
+  * ```
+    docker inspect ghcr.io/<repository>/<image>:<tag>
+    ```
+* Pull the docker image
+  * ```
+    docker pull ghcr.io/<repository>/<image>:<tag>
+    ```
+
+
+
+### See a docker image (no download)
+
+If you want to see a docker image without downloading the image itself, you can use Quay's [Skopeo tool](https://github.com/containers/skopeo), a command line utility that performs various operations on container images and image repositories.
+
+```
+docker run --rm quay.io/skopeo/stable list-tags docker://ghcr.io/<repository>/<image>
+```
+
+Make sure to write the proper information for the repository and the image.
+
+To install Skopeo, read [this documentation](https://github.com/containers/skopeo/blob/main/install.md).
+
+
+
+
+### Build a container
+
+Use **docker build** to build a container based on a Dockerfile.
+
+* Build a container based on the current directory's Dockerfile
+  * ```
+    docker build .
+ ``` +* Build a container and store the image with a given name + * Template + * ``` + docker build -t ":" + ``` + * Example + * ``` + docker build -t newimage:latest + ``` +* Build a docker container without using the cache + * ``` + docker build --no-cache + ``` + + + +### List all available docker images + +``` +docker images +``` + + + +### Run a container + +To run a container based on an image, use the command **docker run**. + +* Run an image + * ``` + docker run + ``` +* Run an image in the background (run and detach) + * ``` + docker run -d + ``` +* Run an image with CLI input + * ``` + docker run -it + ``` + +You can combine arguments, e.g. **docker run -itd**. + +You can also specify the shell, e.g. **docker run -it /bin/bash** + + + +### Run a new command in an existing container + +To run a new command in an existing container, use **docker exec**. + +* Execute interactive shell on the container + * ``` + docker exec -it sh + ``` + + + +### Bash shell into container + +* Bash shell into a container + * ``` + docker exec -i -t /bin/bash + ``` +* Bash shell into a container with root + * ``` + docker exec -i -t -u root /bin/bash + ``` + +Note: if bash is not available, you can use `/bin/sh` + + + +### Pass arguments with a bash script and a Dockerfile + +You can do the following to pass arguments with a bash script and a Dockerfile. + +```sh +# script_example.sh +#!/bin/sh + +echo This is the domain: $env_domain +echo This is the name: $env_name +echo This is the password: $env_password + +``` +* File `Dockerfile` + +```Dockerfile +FROM ubuntu:latest + +ARG domain + +ARG name + +ARG password + +ENV env_domain $domain + +ENV env_name $name + +ENV env_password $password + +COPY script_example.sh . 
+ +RUN chmod +x /script_example.sh + +CMD ["/script_example.sh"] +``` + + + +### Copy files from a container to the local computer + +``` +docker cp : +``` + + + +### Delete all the containers, images and volumes + +* To delete all containers: + * ``` + docker compose rm -f -s -v + ``` + +* To delete all images: + * ``` + docker rmi -f $(docker images -aq) + ``` + +* To delete all volumes: + * ``` + docker volume rm $(docker volume ls -qf dangling=true) + ``` + +* To delete all containers, images and volumes: + * ``` + docker compose rm -f -s -v && docker rmi -f $(docker images -aq) && docker volume rm $(docker volume ls -qf dangling=true) + ``` + + + +### Kill all the Docker processes + +* To kill all processes: + * ``` + killall Docker && open /Applications/Docker.app + ``` + + + +### Output full logs for all containers + +The following command output the full logs for all containers in the file **containers.log**: + +``` +docker compose logs > containers.log +``` + + + +## Resources Usage + +### Examine containers with size + +``` +docker ps -s +``` + + + +### Examine disks usage + +* Basic mode + * ``` + docker system df + ``` +* Verbose mode + * ``` + docker system df -v + ``` + + + +## Wasted Resources + +### Prune the Docker logs + +``` +docker system prune +``` + +### Prune the Docker containers + +You can use the prune function to delete all stopped containers: + +``` +docker container prune +``` + +### Remove unused and untagged local container images + +The following is useful if you want to clean up local filesystem: + +``` +docker image prune +``` + +### Clean up and delete all unused container images + +``` +docker image prune -a +``` + +### Clean up container images based on a given timeframe + +To clean up container images created X hours ago, you can use the following template (replace with a number): + +``` +docker image prune -a --force --filter "until=h" +``` + +To clean up container images created before a given date, you can use the following 
template (replace `<date>` with the complete date):
+
+```
+docker image prune -a --force --filter "until=<date>"
+```
+
+Note: An example of a complete date would be `2023-01-04T00:00:00`
+
+
+
+## Command Combinations
+
+### Kill all running containers
+
+```
+docker kill $(docker ps -q)
+```
+
+
+
+### Stop all running containers
+
+```
+docker stop $(docker ps -a -q)
+```
+
+
+
+### Delete all stopped containers
+
+```
+docker rm $(docker ps -a -q)
+```
+
+
+### Delete all images
+
+```
+docker rmi $(docker images -q)
+```
+
+
+
+### Update and stop a container in a crash-loop
+
+```
+docker update --restart=no <container> && docker stop <container>
+```
+
+
+
+## References
+
+* Docker Manual - https://docs.docker.com/
+* Code Notary - https://codenotary.com/blog/extremely-useful-docker-commands
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/computer_it_basics/file_transfer.md b/collections/manual/documentation/system_administrators/computer_it_basics/file_transfer.md
new file mode 100644
index 0000000..41718dc
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/computer_it_basics/file_transfer.md
@@ -0,0 +1,271 @@
+

<h1>File Transfer</h1>

+ +

<h2>Table of Contents</h2>

- [Introduction](#introduction)
+- [SCP](#scp)
+  - [File transfer with IPv4](#file-transfer-with-ipv4)
+  - [File transfer with IPv6](#file-transfer-with-ipv6)
+- [Rsync](#rsync)
+  - [File transfer](#file-transfer)
+  - [Adjust reorganization of files and folders before running rsync](#adjust-reorganization-of-files-and-folders-before-running-rsync)
+  - [Automate backup with rsync](#automate-backup-with-rsync)
+  - [Parameters --checksum and --ignore-times with rsync](#parameters---checksum-and---ignore-times-with-rsync)
+  - [Trailing slashes with rsync](#trailing-slashes-with-rsync)
+- [SFTP](#sftp)
+  - [SFTP on the Terminal](#sftp-on-the-terminal)
+  - [SFTP Basic Commands](#sftp-basic-commands)
+  - [SFTP File Transfer](#sftp-file-transfer)
+- [SFTP with FileZilla](#sftp-with-filezilla)
+  - [Install FileZilla](#install-filezilla)
+  - [Add a Private Key](#add-a-private-key)
+  - [FileZilla SFTP Connection](#filezilla-sftp-connection)
+- [Questions and Feedback](#questions-and-feedback)
+
+***
+
+## Introduction
+
+Deploying on the TFGrid with tools such as the Playground and Terraform is easy, and it's also possible to quickly transfer files between your local machine and VMs deployed on 3Nodes on the TFGrid. In this section, we cover different ways to transfer files between local and remote machines.
+
+## SCP
+
+### File transfer with IPv4
+
+* From local to remote, write the following on the local terminal:
+  * ```
+    scp /<local_path>/<filename> <user>@<remote_IP>:/<remote_path>/
+    ```
+* From remote to local, you can write the following on the local terminal (more secure):
+  * ```
+    scp <user>@<remote_IP>:/<remote_path>/<filename> /<local_path>/
+    ```
+* From remote to local, you can also write the following on the remote terminal:
+  * ```
+    scp /<remote_path>/<filename> <user>@<local_IP>:/<local_path>/
+    ```
+
+### File transfer with IPv6
+
+For IPv6, it is similar to IPv4, but you need to add `-6` after scp and add `\[` before and `\]` after the IPv6 address.
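As a concrete sketch, using the documentation address `2001:db8::1` as a stand-in for a real VM address, the local-to-remote IPv6 command takes the following shape (the command is only printed here, since actually copying requires a reachable host):

```shell
# Hypothetical IPv6 scp command; 2001:db8::1 is a placeholder address.
# The brackets are escaped so the shell passes them through to scp untouched.
echo 'scp -6 /local/path/file root@\[2001:db8::1\]:/root/'
```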
+
+## Rsync
+
+### File transfer
+
+[rsync](https://rsync.samba.org/) is a utility for efficiently transferring and synchronizing files between a computer and a storage drive and across networked computers by comparing the modification times and sizes of files.
+
+We show here how to transfer files between two computers. Note that at least one of the two computers must be local. This will transfer the content of the source directory into the destination directory.
+
+* From local to remote
+  * ```
+    rsync -avz --progress --delete /path/to/local/directory/ remote_user@<remote_IP>:/path/to/remote/directory
+    ```
+* From remote to local
+  * ```
+    rsync -avz --progress --delete remote_user@<remote_IP>:/path/to/remote/directory/ /path/to/local/directory
+    ```
+
+Here is a short description of the parameters used:
+
+* **-a**: archive mode, preserving the attributes of the files and directories
+* **-v**: verbose mode, displaying the progress of the transfer
+* **-z**: compress mode, compressing the data before transferring
+* **--progress**: tells rsync to print information showing the progress of the transfer
+* **--delete**: tells rsync to delete files that aren't on the sending side
+
+### Adjust reorganization of files and folders before running rsync
+
+[rsync-sidekick](https://github.com/m-manu/rsync-sidekick) propagates changes from the source directory to the destination directory. You can run rsync-sidekick before running rsync. Make sure that Go is installed.
+
+* Install rsync-sidekick
+  * ```
+    sudo go install github.com/m-manu/rsync-sidekick@latest
+    ```
+* Reorganize the files and folders with rsync-sidekick
+  * ```
+    rsync-sidekick /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
+    ```
+
+* Transfer and update files and folders with rsync
+  * ```
+    sudo rsync -avz --progress --delete --log-file=/path/to/local/directory/rsync_storage.log /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
+    ```
+
+### Automate backup with rsync
+
+We show how to automate file transfers between two computers using rsync.
+
+* Create the script file
+  * ```
+    nano rsync_backup.sh
+    ```
+* Write the following script with the proper paths. Here the log is saved in the same directory.
+  * ```
+    #!/bin/bash
+    # filename: rsync_backup.sh
+
+    sudo rsync -avz --progress --delete --log-file=/path/to/local/directory/rsync_storage.log /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
+    ```
+* Give permission
+  * ```
+    sudo chmod +x /path/to/script/rsync_backup.sh
+    ```
+* Set a cron job to run the script periodically
+  * Copy your .sh file to **/root**:
+    ```
+    sudo cp path/to/script/rsync_backup.sh /root
+    ```
+* Open the cron file
+  * ```
+    sudo crontab -e
+    ```
+* Add the following to run the script every day. For this example, we set the time at 18:00.
+  * ```
+    0 18 * * * /root/rsync_backup.sh
+    ```
+
+### Parameters --checksum and --ignore-times with rsync
+
+Depending on your situation, the parameters **--checksum** or **--ignore-times** can be quite useful. Note that adding either parameter will slow the transfer.
+
+* With **--ignore-times**, you ignore both the time and size of each file. This means that you transfer all files from source to destination.
+  * ```
+    rsync --ignore-times source_folder/ destination_folder
+    ```
+* With **--checksum**, you verify with a checksum that the files from source and destination are the same.
This means that you transfer all files whose checksums differ between source and destination.
+  * ```
+    rsync --checksum source_folder/ destination_folder
+    ```
+
+### Trailing slashes with rsync
+
+rsync does not behave the same whether or not you put a slash ("/") at the end of the source path.
+
+* Copy the content of **source_folder** into **destination_folder** to obtain the result: **destination_folder/source_folder_content**
+  * ```
+    rsync source_folder/ destination_folder
+    ```
+* Copy **source_folder** into **destination_folder** to obtain the result: **destination_folder/source_folder/source_folder_content**
+  * ```
+    rsync source_folder destination_folder
+    ```
+
+
+
+## SFTP
+
+### SFTP on the Terminal
+
+Using SFTP for file transfer on the terminal is very quick since the SSH connection is already enabled by default when deploying workloads on the TFGrid.
+
+If you can use the following command to connect to a VM on the TFGrid:
+
+```
+ssh root@VM_IP
+```
+
+Then it means you can use SFTP to access the same VM:
+
+```
+sftp root@VM_IP
+```
+
+Once in the server via SFTP, you can use the command line to get all the commands with `help` or `?`:
+
+```
+help
+```
+
+### SFTP Basic Commands
+
+Here are some common commands for SFTP.
+ +| Command | Function | +| --------------------------- | ----------------------------------- | +| bye | Quit sftp | +| cd path | Change remote directory to 'path' | +| help | Display this help text | +| pwd | Display remote working directory | +| lpwd | Print local working directory | +| ls [-1afhlnrSt] [path] | Display remote directory listing | +| mkdir path | Create remote directory | +| put [-afpR] local [remote] | Upload file | +| get [-afpR] remote [local] | Download file | +| quit | Quit sftp | +| rm path | Delete remote file | +| rmdir path | Remove remote directory | +| version | Show SFTP version | +| !command | Execute 'command' in local shell | + + +### SFTP File Transfer + +Using SFTP to transfer a file from the local machine to the remote VM is as simple as the following line: + +``` +put /local/path/file +``` + +This will transfer the file in the current user home directory of the remote VM. + +To transfer the file in a given directory, use the following: + +``` +put /local/path/file /remote/path/ +``` + +To transfer a file from the remote VM to the local machine, you can use the command `get`: + +``` +get /remote/path/file /local/path +``` + +To transfer (`get` or `put`) all the files within a directory, use the `-r` argument, as shown in the following example + +``` +get -r /remote/path/to/directory /local/path +``` + +## SFTP with FileZilla + +[FileZilla](https://filezilla-project.org/) is a free and open-source, cross-platform FTP application, consisting of FileZilla Client and FileZilla Server. + +It is possible to use FileZilla Client to transfer files between your local machine and a remote VM on the TFGrid. + +Since SSH is set, the user basically only needs to add the private key in FileZilla and enter the VM credentials to connect using SFTP in FileZilla. + +### Install FileZilla + +FileZilla is available on Linux, MAC and Windows on the [FileZilla website](https://filezilla-project.org/download.php?type=client). 
Simply follow the steps to properly download and install FileZilla Client.
+
+### Add a Private Key
+
+To prepare a connection using FileZilla, you need to add the private key of the SSH key pair.
+
+Simply add the file `id_rsa` in **SFTP**.
+
+- Open FileZilla Client
+- Go to **Edit** -> **Settings** -> **Connection** -> **SFTP**
+- Then click on **Add key file...**
+  - Search the `id_rsa` file usually located in `~/.ssh/id_rsa`
+- Click on **OK**
+
+### FileZilla SFTP Connection
+
+You can set a connection between your local machine and a remote 3Node with FileZilla by using **root** as **Username** and the VM IP address as **Host**.
+
+- Enter the credentials
+  - Host
+    - `VM_IP_Address`
+  - Username
+    - `root`
+  - Password
+    - As set by the user. Can be empty.
+  - Port
+    - `22`
+- Click on **Quickconnect**
+
+You can now transfer files between the local machine and the remote VM with FileZilla.
+
+## Questions and Feedback
+
+If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/firewall_basics.md b/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/firewall_basics.md
new file mode 100644
index 0000000..9b45f1b
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/firewall_basics.md
@@ -0,0 +1,8 @@
+

<h1>Firewall Basics</h1>

+
+In this section, we cover basic information concerning firewall use on Linux. Most notably, we give basic commands and information on UFW and Firewalld.
+

<h2>Table of Contents</h2>

+ +- [UFW Basics](./ufw_basics.md) +- [Firewalld Basics](./firewalld_basics.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/firewalld_basics.md b/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/firewalld_basics.md new file mode 100644 index 0000000..53c12cd --- /dev/null +++ b/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/firewalld_basics.md @@ -0,0 +1,149 @@ +

<h1>Firewalld Basic Commands</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Firewalld Basic Commands](#firewalld-basic-commands)
+  - [Install Firewalld](#install-firewalld)
+  - [See the Status of Firewalld](#see-the-status-of-firewalld)
+  - [Enable Firewalld](#enable-firewalld)
+  - [Stop Firewalld](#stop-firewalld)
+  - [Start Firewalld](#start-firewalld)
+  - [Disable Firewalld](#disable-firewalld)
+  - [Mask Firewalld](#mask-firewalld)
+  - [Unmask Firewalld](#unmask-firewalld)
+  - [Add a Service to Firewalld](#add-a-service-to-firewalld)
+  - [Remove a Service from Firewalld](#remove-a-service-from-firewalld)
+  - [Remove the Files of a Service from Firewalld](#remove-the-files-of-a-service-from-firewalld)
+  - [See if a Service is Available](#see-if-a-service-is-available)
+  - [Reload Firewalld](#reload-firewalld)
+  - [Display the Services and the Open Ports for the Public Zone](#display-the-services-and-the-open-ports-for-the-public-zone)
+  - [Display the Open Ports by Services and Port Numbers](#display-the-open-ports-by-services-and-port-numbers)
+  - [Add a Port for tcp](#add-a-port-for-tcp)
+  - [Add a Port for udp](#add-a-port-for-udp)
+  - [Add a Port for tcp and udp](#add-a-port-for-tcp-and-udp)
+- [References](#references)
+
+
+## Introduction
+
+We present a quick introduction to [firewalld](https://firewalld.org/), a free and open-source firewall management tool for Linux operating systems. This guide can be useful for users of the TFGrid deploying on full and micro VMs as well as other types of deployment.
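Firewalld groups rules into named services, which the commands in the next section add and remove by name. Each service is just an XML file. As an illustrative sketch (the service name `myapp` and port 9000 are made up for this example), a minimal definition saved as `/etc/firewalld/services/myapp.xml` could look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>myapp</short>
  <description>Example service that opens TCP port 9000</description>
  <port protocol="tcp" port="9000"/>
</service>
```

After adding or editing such a file, reload firewalld (see the reload command below) so the new service becomes available by name.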
+
+## Firewalld Basic Commands
+
+### Install Firewalld
+
+* ```
+  apt install firewalld -y
+  ```
+
+### See the Status of Firewalld
+
+* ```
+  firewall-cmd --state
+  ```
+
+### Enable Firewalld
+
+* ```
+  systemctl enable firewalld
+  ```
+
+### Stop Firewalld
+
+* ```
+  systemctl stop firewalld
+  ```
+
+### Start Firewalld
+
+* ```
+  systemctl start firewalld
+  ```
+
+### Disable Firewalld
+
+* ```
+  systemctl disable firewalld
+  ```
+
+### Mask Firewalld
+
+* ```
+  systemctl mask --now firewalld
+  ```
+
+### Unmask Firewalld
+
+* ```
+  systemctl unmask --now firewalld
+  ```
+
+### Add a Service to Firewalld
+
+* Temporary
+  * ```
+    firewall-cmd --add-service=<service>
+    ```
+* Permanent
+  * ```
+    firewall-cmd --add-service=<service> --permanent
+    ```
+
+### Remove a Service from Firewalld
+
+* Temporary
+  * ```
+    firewall-cmd --remove-service=<service>
+    ```
+* Permanent
+  * ```
+    firewall-cmd --remove-service=<service> --permanent
+    ```
+
+### Remove the Files of a Service from Firewalld
+
+* ```
+  rm -f /etc/firewalld/services/<service>.xml*
+  ```
+
+### See if a Service is Available
+
+* ```
+  firewall-cmd --info-service=<service>
+  ```
+
+### Reload Firewalld
+
+* ```
+  firewall-cmd --reload
+  ```
+
+### Display the Services and the Open Ports for the Public Zone
+
+* ```
+  firewall-cmd --list-all --zone=public
+  ```
+
+### Display the Open Ports by Services and Port Numbers
+
+* By services
+  * ```
+    firewall-cmd --list-services
+    ```
+* By port numbers
+  * ```
+    firewall-cmd --list-ports
+    ```
+
+### Add a Port for tcp
+
+* ```
+  firewall-cmd --zone=public --add-port=<port>/tcp
+  ```
+
+### Add a Port for udp
+
+* ```
+  firewall-cmd --zone=public --add-port=<port>/udp
+  ```
+
+### Add a Port for tcp and udp
+
+* ```
+  firewall-cmd --zone=public --add-port=<port>/tcp --add-port=<port>/udp
+  ```
+
+## References
+
+firewalld man pages - https://firewalld.org/documentation/man-pages/firewalld.html
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/ufw_basics.md
b/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/ufw_basics.md new file mode 100644 index 0000000..c9e5076 --- /dev/null +++ b/collections/manual/documentation/system_administrators/computer_it_basics/firewall_basics/ufw_basics.md @@ -0,0 +1,256 @@ + +

Uncomplicated Firewall (ufw) Basic Commands

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Basic Commands](#basic-commands) + - [Install ufw](#install-ufw) + - [Enable ufw](#enable-ufw) + - [Disable ufw](#disable-ufw) + - [Reset ufw](#reset-ufw) + - [Reload ufw](#reload-ufw) + - [Deny Incoming Connections](#deny-incoming-connections) + - [Allow Outgoing Connections](#allow-outgoing-connections) + - [Allow a Specific IP address](#allow-a-specific-ip-address) + - [Allow a Specific IP Address to a Given Port](#allow-a-specific-ip-address-to-a-given-port) + - [Allow a Port for tcp](#allow-a-port-for-tcp) + - [Allow a Port for udp](#allow-a-port-for-udp) + - [Allow a Port for tcp and udp](#allow-a-port-for-tcp-and-udp) + - [Allow Ports: Examples](#allow-ports-examples) + - [Allow Port Ranges](#allow-port-ranges) + - [Allow a Subnet](#allow-a-subnet) + - [Allow a Subnet to a Given Port](#allow-a-subnet-to-a-given-port) + - [Deny a Port](#deny-a-port) + - [Deny an IP Address](#deny-an-ip-address) + - [Deny a Subnet](#deny-a-subnet) + - [Block Incoming Connections to a Network Interface](#block-incoming-connections-to-a-network-interface) + - [Check Rules](#check-rules) + - [Check Rules (Numbered)](#check-rules-numbered) + - [Delete a Rule with Number](#delete-a-rule-with-number) + - [Delete a Rule with the Rule Name and Parameters](#delete-a-rule-with-the-rule-name-and-parameters) + - [Get App Info](#get-app-info) + - [Set ufw in Verbose Mode](#set-ufw-in-verbose-mode) + - [Allow a Specific App](#allow-a-specific-app) +- [References](#references) + + +## Introduction + +We present a quick introduction to [Uncomplicated Firewall (ufw)](https://launchpad.net/ufw), a free and open-source firewall management tool for Linux operating systems. This guide can be useful for TFGrid users deploying on full and micro VMs, as well as for other types of deployments. + +## Basic Commands + +We show here basic commands to set a firewall on Linux with Uncomplicated Firewall (ufw).
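Several of the commands below are commonly combined into a first-time setup. The following sketch is illustrative only: the `run` helper and the chosen ports (SSH, HTTP, HTTPS) are assumptions, and the helper echoes each command and executes it only when its binary is installed, so the script degrades to a dry run on machines without ufw. On a real server you would run the `ufw` commands directly as root.

```shell
#!/bin/sh
# Hypothetical first-time ufw setup. The run helper prints each command
# and only executes it when the command's binary is present.
run() {
    echo "+ $*"
    if command -v "$1" >/dev/null 2>&1; then
        "$@" || true   # ignore failures (e.g. when not running as root)
    fi
}

run ufw default deny incoming    # block all inbound traffic by default
run ufw default allow outgoing   # allow all outbound traffic
run ufw allow ssh                # keep port 22 open before enabling
run ufw allow 80/tcp             # HTTP
run ufw allow 443/tcp            # HTTPS
run ufw --force enable           # enable without the interactive prompt
run ufw status verbose           # confirm the resulting rule set
```

The order matters: allowing SSH before `ufw enable` avoids locking yourself out of a remote machine.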
+ +### Install ufw + + * Update + * ``` + apt update + ``` + * Install ufw + * ``` + apt install ufw + ``` + +### Enable ufw + + * ``` + ufw enable + ``` + +### Disable ufw + + * ``` + ufw disable + ``` + +### Reset ufw + + * ``` + ufw reset + ``` + +### Reload ufw + + * ``` + ufw reload + ``` + +### Deny Incoming Connections + + * ``` + ufw default deny incoming + ``` + +### Allow Outgoing Connections + + * ``` + ufw default allow outgoing + ``` +### Allow a Specific IP address + + * ``` + ufw allow from <ip_address> + ``` + +### Allow a Specific IP Address to a Given Port + + * ``` + ufw allow from <ip_address> to any port <port> + ``` + +### Allow a Port for tcp + +* ``` + ufw allow <port>/tcp + ``` +### Allow a Port for udp + +* ``` + ufw allow <port>/udp + ``` +### Allow a Port for tcp and udp + +* ``` + ufw allow <port> + ``` + +### Allow Ports: Examples + +Here are some typical examples of ports to allow with ufw: + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow HTTP (port 80) + * ``` + ufw allow http + ``` +* Allow HTTPS (port 443) + * ``` + ufw allow https + ``` +* Allow mysql (port 3306) + * ``` + ufw allow 3306 + ``` + +### Allow Port Ranges + +Note that a port range must specify the protocol (tcp or udp). + +* Template + * ``` + ufw allow <start_port>:<end_port>/<protocol> + ``` +* Example + * ``` + ufw allow 6000:6005/tcp + ``` + +### Allow a Subnet + +* ``` + ufw allow from <subnet> + ``` + +### Allow a Subnet to a Given Port + +* ``` + ufw allow from <subnet> to any port <port> + ``` + +### Deny a Port + +* ``` + ufw deny <port> + ``` + +### Deny an IP Address + +* ``` + ufw deny <ip_address> + ``` + +### Deny a Subnet + +* ``` + ufw deny from <subnet> + ``` + +### Block Incoming Connections to a Network Interface + +* ``` + ufw deny in on <interface> from <ip_address> + ``` + +### Check Rules + +Use **status** to check the current firewall configurations. Add **verbose** for more details. + +* ``` + ufw status + ``` +* ``` + ufw status verbose + ``` + +### Check Rules (Numbered) + +It can be useful to see the rules numbered, for example to remove a rule more easily.
+ +* ``` + ufw status numbered + ``` + +### Delete a Rule with Number + +Use the number shown by **ufw status numbered** to delete the corresponding rule. + +* ``` + ufw delete <rule_number> + ``` + +### Delete a Rule with the Rule Name and Parameters + +You can also delete a rule by writing directly the rule name you used to add the rule. + +* Template + * ``` + ufw delete <rule> + ``` +* Example + * ``` + ufw delete allow ssh + ``` + * ``` + ufw delete allow 22 + ``` + +You can always check the current rules with **ufw status** to see if the rules are properly removed. + +### List the Available Profiles + +* ``` + ufw app list + ``` + +This command will give you the names of the apps present on the server. You can then use **ufw app info** to get information on an app, or allow the app with **ufw allow**. + +### Get App Info + +* ``` + ufw app info <app_name> + ``` + +### Set ufw in Verbose Mode + +* ``` + ufw status verbose + ``` + +### Allow a Specific App + +* Template + * ``` + ufw allow "<app_name>" + ``` +* Example + * ``` + ufw allow "NGINX Full" + ``` + +## References + +ufw man pages - https://manpages.ubuntu.com/manpages/trusty/man8/ufw.8.html \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/computer_it_basics/git_github_basics.md b/collections/manual/documentation/system_administrators/computer_it_basics/git_github_basics.md new file mode 100644 index 0000000..dca25e4 --- /dev/null +++ b/collections/manual/documentation/system_administrators/computer_it_basics/git_github_basics.md @@ -0,0 +1,450 @@ +

Git and GitHub Basics

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Install Git](#install-git) + - [Install on Linux](#install-on-linux) + - [Install on MAC](#install-on-mac) + - [Install on Windows](#install-on-windows) +- [Basic Commands](#basic-commands) + - [Check Git version](#check-git-version) + - [Clone a repository](#clone-a-repository) + - [Clone a single branch](#clone-a-single-branch) + - [Check all available branches](#check-all-available-branches) + - [Check the current branch](#check-the-current-branch) + - [Go to another branch](#go-to-another-branch) + - [Add your changes to a local branch](#add-your-changes-to-a-local-branch) + - [Push changes of a local branch to the remote Github branch](#push-changes-of-a-local-branch-to-the-remote-github-branch) + - [Count the differences between two branches](#count-the-differences-between-two-branches) + - [See the default branch](#see-the-default-branch) + - [Force a push](#force-a-push) + - [Merge a branch to a different branch](#merge-a-branch-to-a-different-branch) + - [Clone completely one branch to another branch locally then push the changes to Github](#clone-completely-one-branch-to-another-branch-locally-then-push-the-changes-to-github) + - [The 3 levels of the command reset](#the-3-levels-of-the-command-reset) + - [Reverse modifications to a file where changes haven't been staged yet](#reverse-modifications-to-a-file-where-changes-havent-been-staged-yet) + - [Download binaries from Github](#download-binaries-from-github) + - [Resolve conflicts between branches](#resolve-conflicts-between-branches) + - [Download all repositories of an organization](#download-all-repositories-of-an-organization) + - [Revert a push commited with git](#revert-a-push-commited-with-git) + - [Make a backup of a branch](#make-a-backup-of-a-branch) + - [Revert to a backup branch](#revert-to-a-backup-branch) + - [Start over local branch and pull remote branch](#start-over-local-branch-and-pull-remote-branch) + - [Overwrite local files and pull remote branch](#overwrite-local-files-and-pull-remote-branch) + - [Stash command and parameters](#stash-command-and-parameters) +- [Code Editors](#code-editors) + - [VS-Code](#vs-code) + - [VS-Codium](#vs-codium) +- [References](#references) + +*** + +## Introduction + +In this section, we cover basic commands and aspects of [GitHub](https://github.com/) and [Git](https://git-scm.com/).
+ +Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. + +GitHub is a platform and cloud-based service for software development and version control using Git, allowing developers to store and manage their code. + + + +## Install Git + +You can install git on MAC, Windows and Linux. You can consult Git's documentation to learn how to [install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). + +### Install on Linux + +* Fedora distribution + * ``` + dnf install git-all + ``` +* Debian-based distribution + * ``` + apt install git-all + ``` +* Click [here](https://git-scm.com/download/linux) for other Linux distributions + +### Install on MAC + +* With Homebrew + * ``` + brew install git + ``` + +### Install on Windows + +You can download Git for Windows at [this link](https://git-scm.com/download/win). + + + +## Basic Commands + +### Check Git version + +``` +git --version +``` + + + +### Clone a repository + +``` +git clone <repository_url> +``` + + + +### Clone a single branch + +``` +git clone --branch <branch_name> --single-branch <repository_url> +``` + + + +### Check all available branches + +``` +git branch -r +``` + + + +### Check the current branch + +``` +git branch +``` + + + +### Go to another branch + +``` +git checkout <branch_name> +``` + + + +### Add your changes to a local branch + +* Add all changes + * ``` + git add . + ``` +* Add changes of a specific file + * ``` + git add <path>/<file_name> + ``` + + + +### Push changes of a local branch to the remote Github branch + +To push changes to Github, you can use the following commands: + +* ``` + git add . + ``` +* ``` + git commit -m "write your changes here in comment" + ``` +* ``` + git push + ``` + + + +### Count the differences between two branches + +Replace **branch1** and **branch2** with the appropriate branch names.
+ +``` +git rev-list --count branch1..branch2 +``` + +### See the default branch + +``` +git symbolic-ref refs/remotes/origin/HEAD | sed 's@^refs/remotes/origin/@@' +``` + + + +### Force a push + +``` +git push --force +``` + + + +### Merge a branch to a different branch + +* Checkout the branch you want to copy content TO + * ``` + git checkout branch_name + ``` +* Merge the branch you want content FROM + * ``` + git merge origin/dev_mermaid + ``` +* Push the changes + * ``` + git push -u origin HEAD + ``` + + + +### Clone completely one branch to another branch locally then push the changes to Github + +For this example, we copy **branchB** into **branchA**. + +* See available branches + * ``` + git branch -r + ``` +* Go to **branchA** + * ``` + git checkout branchA + ``` +* Copy **branchB** into **branchA** + * ``` + git reset --hard branchB + ``` +* Force the push + * ``` + git push --force + ``` + + + +### The 3 levels of the command reset + +* ``` + git reset --soft HEAD~1 + ``` + * Bring the History to the Stage/Index + * Discard the last commit +* ``` + git reset --mixed HEAD~1 + ``` + * Bring the History to the Working Directory + * Discard the last commit and the staged changes (the **git add**) +* ``` + git reset --hard HEAD~1 + ``` + * Bring the History to the Working Directory + * Discard the last commit, the staged changes and any changes you made to the code + +Note 1: If you're using **--hard**, make sure to run **git status** to verify that your directory is clean, otherwise you will lose your uncommitted changes. + +Note 2: The argument **--mixed** is the default option, so **git reset HEAD~1** is equivalent to **git reset --mixed HEAD~1**.
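The three reset levels above can be compared side by side in a throwaway repository. This is a minimal sketch with made-up file names and commit messages, assuming `git` is installed:

```shell
#!/bin/sh
# Compare git reset --soft / --mixed / --hard in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo one > file.txt
git add file.txt && git commit -qm "first"
echo two >> file.txt
git add file.txt && git commit -qm "second"

# --soft: discard the last commit but keep its changes staged
git reset --soft HEAD~1
git diff --cached --quiet || echo "changes still staged"

# --mixed: additionally unstage the changes (they stay in the working tree)
git reset --mixed HEAD
git diff --quiet || echo "changes in working tree only"

# --hard: discard the changes entirely
git reset --hard HEAD
git diff --quiet && echo "working tree clean"
```

Each step moves the same changes one level further away from the history: from the index, to the working tree, to gone.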
+ + + +### Reverse modifications to a file where changes haven't been staged yet + +You can use the following to reverse the modifications of a file that hasn't been staged: + +``` +git checkout <file_name> +``` + + + +### Download binaries from Github + +* Template: + * ``` + wget -O <binary_name> https://raw.githubusercontent.com/<account>/<repository>/<branch>/<path_to_binary> + ``` + + + +### Resolve conflicts between branches + +We show how to resolve conflicts in a development branch (e.g. **branch_dev**) and then merge the development branch into the main branch (e.g. **branch_main**). + +* Clone the repo + * ``` + git clone <repository_url> + ``` +* Pull changes and potential conflicts + * ``` + git pull origin branch_main + ``` +* Checkout the development branch + * ``` + git checkout branch_dev + ``` +* Resolve conflicts in a text editor +* Save changes in the files +* Add the changes + * ``` + git add . + ``` +* Commit the changes + * ``` + git commit -m "your message here" + ``` +* Push the changes + * ``` + git push + ``` + + + +### Download all repositories of an organization + +* Log in to gh + * ``` + gh auth login + ``` +* Clone all repositories. Replace **<organization>** with the organization in question.
+ * ``` + gh repo list --limit 1000 | while read -r repo _; do + gh repo clone "$repo" "$repo" + done + ``` + + + +### Revert a push commited with git + +* Find the commit ID + * ``` + git log -p + ``` +* Revert the commit + * ``` + git revert + ``` +* Push the changes + * ``` + git push + ``` + + + +### Make a backup of a branch + +``` +git clone -b --single-branch //.git +``` + + + +### Revert to a backup branch + +* Checkout the branch you want to update (**branch**) + * ``` + git checkout + ``` +* Do a reset of your current branch based on the backup branch + * ``` + git reset --hard + ``` + + + +### Start over local branch and pull remote branch + +To start over your local branch and pull the remote branch to your working environment, thus ignoring local changes in the branch, you can do the following: + +``` +git fetch +git reset --hard +git pull +``` + +Note that this will not work for untracked and new files. See below for untracked and new files. + + + +### Overwrite local files and pull remote branch + +This method can be used to overwrite local files. This will work even if you have untracked and new files. + +* Save local changes on a stash + * ``` + git stash --include-untracked + ``` +* Discard local changes + * ``` + git reset --hard + ``` +* Discard untracked and new files + * ``` + git clean -fd + ``` +* Pull the remote branch + * ``` + git pull + ``` + +Then, to delete the stash, you can use **git stash drop**. + + + +### Stash command and parameters + +The stash command is used to record the current state of the working directory. 
+ +* Stash a branch (equivalent to **git stash push**) + * ``` + git stash + ``` +* List the changes in the stash + * ``` + git stash list + ``` +* Inspect the changes in the stash + * ``` + git stash show + ``` +* Remove a single stashed state from the stash list and apply it on top of the current working tree state + * ``` + git stash pop + ``` +* Apply the stash on top of the current working tree state without removing the state from the stash list + * ``` + git stash apply + ``` +* Drop a stash + * ``` + git stash drop + ``` + + + +## Code Editors + +There are many code editors that can work well when working with git. + +### VS-Code + +[VS-Code](https://code.visualstudio.com/) is a source-code editor made by Microsoft with the Electron Framework, for Windows, Linux and macOS. + +To download VS-Code, visit their website and follow the given instructions. + +### VS-Codium + +[VS-Codium](https://vscodium.com/) is a community-driven, freely-licensed binary distribution of Microsoft’s editor VS Code. + +There are many ways to install VS-Codium. Visit the [official website](https://vscodium.com/#install) for more information.
+ +* Install on MAC + * ``` + brew install --cask vscodium + ``` +* Install on Linux + * ``` + snap install codium --classic + ``` +* Install on Windows + * ``` + choco install vscodium + ``` + + + +## References + +Git Documentation - https://git-scm.com/docs/user-manual \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_1.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_1.png new file mode 100644 index 0000000..2e5caee Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_1.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_10.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_10.png new file mode 100644 index 0000000..cfa7edc Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_10.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_11.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_11.png new file mode 100644 index 0000000..1a0d5b9 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_11.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_12.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_12.png new file mode 100644 index 0000000..95a606e Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_12.png differ diff --git 
a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_13.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_13.png new file mode 100644 index 0000000..2e50989 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_13.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_14.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_14.png new file mode 100644 index 0000000..8ed439d Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_14.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_15.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_15.png new file mode 100644 index 0000000..120be6a Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_15.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_16.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_16.png new file mode 100644 index 0000000..cbd433b Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_16.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_17.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_17.png new file mode 100644 index 0000000..51130a3 Binary files /dev/null and 
b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_17.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_18.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_18.png new file mode 100644 index 0000000..8fe5d2f Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_18.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_19.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_19.png new file mode 100644 index 0000000..6c6796c Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_19.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_2.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_2.png new file mode 100644 index 0000000..93d3a01 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_2.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_20.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_20.png new file mode 100644 index 0000000..70cf1cb Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_20.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_21.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_21.png new file 
mode 100644 index 0000000..7f9e454 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_21.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_3.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_3.png new file mode 100644 index 0000000..0a45687 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_3.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_39.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_39.png new file mode 100644 index 0000000..0c78bc0 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_39.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_4.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_4.png new file mode 100644 index 0000000..4f86de2 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_4.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_40.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_40.png new file mode 100644 index 0000000..6651627 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_40.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_41.png 
b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_41.png new file mode 100644 index 0000000..839e929 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_41.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_42.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_42.png new file mode 100644 index 0000000..5f84480 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_42.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_43.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_43.png new file mode 100644 index 0000000..eb3a017 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_43.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_5.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_5.png new file mode 100644 index 0000000..2dd845d Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_5.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_6.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_6.png new file mode 100644 index 0000000..ce39ca7 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_6.png differ diff --git 
a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_7.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_7.png new file mode 100644 index 0000000..c9256be Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_7.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_8.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_8.png new file mode 100644 index 0000000..0901cf6 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_8.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_9.png b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_9.png new file mode 100644 index 0000000..a913e4b Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_9.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/readme.md b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/readme.md new file mode 100644 index 0000000..098eb0e --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/readme.md @@ -0,0 +1,4 @@ +# Threefold Connect Basics Tutorial + +* Create an account +* Create a wallet \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Connect/tf_connect.md b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/tf_connect.md new file mode 100644 index 0000000..043d1dc --- /dev/null +++ 
b/collections/manual/documentation/system_administrators/getstarted/TF_Connect/tf_connect.md @@ -0,0 +1,148 @@ +

ThreeFold Connect: Create a Threefold Connect Account and Wallet

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Download the ThreeFold Connect App](#download-the-threefold-connect-app) +- [Create a ThreeFold Connect Account](#create-a-threefold-connect-account) +- [Verify Your Email](#verify-your-email) +- [Create a ThreeFold Connect Wallet](#create-a-threefold-connect-wallet) + +*** + +## Introduction + +The ThreeFold Connect app is a dynamic and essential companion for individuals seeking seamless access to the ThreeFold ecosystem on the go. Available for free download on both iOS and Android mobile platforms, the TF Connect app lets users effortlessly engage with the ThreeFold Grid. It empowers users to manage their digital assets, make secure transactions, and explore decentralized financial opportunities, all within a unified mobile experience. + +In this tutorial, we show you how to create a ThreeFold Connect account and wallet. The main steps are simple and you will be done in no time. If you have any questions, feel free to write a post on the [ThreeFold Forum](http://forum.threefold.io/). + +## Download the ThreeFold Connect App + + +The ThreeFold Connect app is available for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin&hl=en&gl=US) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885). + +- Note that for Android phones, you need at minimum Android 8.1 +- Note that for iOS phones, you need at minimum iOS 15 + +Either use the links above, or search for the ThreeFold Connect app on the App Store or the Google Play Store. Then install and open the app. If you want to leave a 5-star review of the app, no one here will stop you! + +![farming_tf_wallet_1](./img/farming_tf_wallet_1.png) +![farming_tf_wallet_2](./img/farming_tf_wallet_2.png) + +When you try to open the app, if you get an error message such as "Error in initialization in Flagsmith...", you might need to upgrade your phone to a newer software version (Android 8.1 and iOS 15).
+ +*** + +## Create a ThreeFold Connect Account + +Once you are in the app, you will see some introduction pages to help you familiarize yourself with the TF Connect app. You will also be asked to read and accept ThreeFold's Terms and Conditions. + +![farming_tf_wallet_3](./img/farming_tf_wallet_3.png) +![farming_tf_wallet_4](./img/farming_tf_wallet_4.png) + +You will then be asked to either *SIGN UP* or *RECOVER ACCOUNT*. To create a new account, click *SIGN UP*. + +![farming_tf_wallet_5](./img/farming_tf_wallet_5.png) + +Then, choose a *ThreeFold Connect Id*. This 3bot ID, as well as the seed phrase, will be used when you want to recover an account. Choose wisely, and do not forget it! Here we use TFExample, as an example. + +![farming_tf_wallet_6](./img/farming_tf_wallet_6.png) + +Next, you need to add a valid email address. You will need to access your email and confirm the ThreeFold validation email to fully use the ThreeFold Connect app. + +![farming_tf_wallet_7](./img/farming_tf_wallet_7.png) + +The next step is crucial! Make sure no one is around looking at your screen. You will be shown your seed phrase. Keep it in a secure and offline place. You will need the 3bot ID and the seed phrase to recover your account. This seed phrase is of the utmost importance. Do not lose it or give it to anyone. + +![farming_tf_wallet_8](./img/farming_tf_wallet_8.png) + +Once you've hit *Next*, you will be asked to write down 3 random words of your seed phrase. This is a necessary step to ensure you have taken the time to write down your seed phrase. + +![farming_tf_wallet_9](./img/farming_tf_wallet_9.png) + +Then, you'll be asked to confirm your TF 3bot ID and the associated email. + +![farming_tf_wallet_10](./img/farming_tf_wallet_10.png) + +Finally, you will be asked to choose a 4-digit pin. This will be needed to use the ThreeFold Connect app. If you ever forget this 4-digit pin, you will need to recover your account from your 3bot name and your seed phrase.
You will need to confirm the new pin in the next step. + +![farming_tf_wallet_11](./img/farming_tf_wallet_11.png) + +That's it! You've created your ThreeFold Connect account. You can press the hamburger menu on the top left to explore the ThreeFold Connect app. + +![farming_tf_wallet_12](./img/farming_tf_wallet_12.png) + +In the next step, we will create a ThreeFold Connect wallet. You'll see, it's very simple! + +But first, let's see how to verify your email. + +*** + +## Verify Your Email + +Once you've created your account, an email will be sent to the email address you've chosen in the account creation process. To verify your email, go to your email inbox and open the email sent by *info@openkyc.live* with the subject *Verify your email address*. + +In this email, click on the link *Verify my email address*. This will lead you to a *login.threefold.me* link. The process should be automatic. Once this is done, you will receive a confirmation on screen, as well as on your phone. + +![farming_tf_wallet_39](./img/farming_tf_wallet_39.png) + +![farming_tf_wallet_40](./img/farming_tf_wallet_40.png) + +![farming_tf_wallet_41](./img/farming_tf_wallet_41.png) + +If, for some reason, you did not receive the verification email, simply click on *Verify* and another email will be sent. + +![farming_tf_wallet_42](./img/farming_tf_wallet_42.png) + +![farming_tf_wallet_43](./img/farming_tf_wallet_43.png) + +That's it! You've now created a ThreeFold Connect account. + +All that is left to do is to create a ThreeFold Connect wallet. This is very simple. + +Let's go! + +*** + +## Create a ThreeFold Connect Wallet + +To create a wallet, click on the ThreeFold Connect app menu, then choose *Wallet*. + +![farming_tf_wallet_13](./img/farming_tf_wallet_13.png) + +Once you are in the section *Wallet*, click on *Create Initial Wallet*. If it doesn't work the first time, try again. If you have trouble creating a wallet, make sure your connection is reliable.
You can try again a few minutes later if it still doesn't work. With a reliable connection, there shouldn't be any problem. Contact TF Support if problems persist. + +![farming_tf_wallet_14](./img/farming_tf_wallet_14.png) + +This is what you see when the TF Grid is initializing your wallet. + +![farming_tf_wallet_15](./img/farming_tf_wallet_15.png) + +Once your wallet is initialized, you will see *No balance found for this wallet*. You can click on this button to enter the wallet. + +![farming_tf_wallet_16](./img/farming_tf_wallet_16.png) + +Once inside your wallet, this is what you see. + +![farming_tf_wallet_17](./img/farming_tf_wallet_17.png) + +We will now see where to find the Stellar and TFChain addresses and secrets. We will also change the wallet name. To do so, click on the *encircled i* at the bottom right of the screen. + +On this page, you can access your Stellar and TFChain addresses as well as your Stellar and TFChain secret keys. + +![farming_tf_wallet_18](./img/farming_tf_wallet_18.png) + +To change the name of your wallet, click on the button next to *Wallet Name*. Here we use TFWalletExample. Note that you can also use alphanumeric characters. + +![farming_tf_wallet_19](./img/farming_tf_wallet_19.png) + +![farming_tf_wallet_20](./img/farming_tf_wallet_20.png) + +At the top of the section *Wallet*, we can see that the name has changed. + +![farming_tf_wallet_21](./img/farming_tf_wallet_21.png) + +That's it! You now have a ThreeFold Connect account and wallet. +This will be very useful for your TFT transactions on the ThreeFold ecosystem. 
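The 3-word confirmation step during account creation exists to prove you actually wrote your seed phrase down. A minimal sketch of that kind of check, using a hypothetical 12-word phrase for demonstration only (not the app's actual implementation):

```python
# Purely illustrative of the seed-phrase confirmation step: the app asks
# you to re-enter a few randomly chosen words to prove you wrote the
# phrase down. The word list below is a made-up demo phrase.
import random

def pick_check_words(seed_words: list[str], k: int = 3) -> dict[int, str]:
    """Pick k random (position, word) pairs to quiz the user on."""
    positions = random.sample(range(len(seed_words)), k)
    return {i + 1: seed_words[i] for i in positions}

def passes_check(expected: dict[int, str], answers: dict[int, str]) -> bool:
    """True only if every quizzed position was answered with the right word."""
    return all(answers.get(pos) == word for pos, word in expected.items())

demo = "alpha bravo charlie delta echo foxtrot golf hotel india juliet kilo lima".split()
quiz = pick_check_words(demo)
print(passes_check(quiz, quiz))  # True
```

A single wrong or missing word fails the check, which is why the step cannot be skipped.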
+ +*** \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Dashboard/tf_dashboard.md b/collections/manual/documentation/system_administrators/getstarted/TF_Dashboard/tf_dashboard.md new file mode 100644 index 0000000..6dc12c7 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/TF_Dashboard/tf_dashboard.md @@ -0,0 +1,148 @@ +

# Threefold Dashboard: Create Account and Transfer TFT

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Create Polkadot Extension Account](#create-polkadot-extension-account) +- [Transfer TFT from Stellar Chain to TFChain](#transfer-tft-from-stellar-chain-to-tfchain) + +## Introduction + +For this section, we will create an account on the TFChain and transfer TFT from Stellar chain to TFChain. We will then be able to use the TFT and deploy workloads on the Threefold Playground. + +## Create Polkadot Extension Account + +Go to the Threefold Dashboard: [dashboard.grid.tf](https://dashboard.grid.tf/) + +If you don't have the Polkadot extension installed on your browser, you will be able to click on the download link directly on the Threefold Dashboard page: + +![image](./img/dashboard_1.png) + +This link will lead you to the Polkadot extension download page: https://polkadot.js.org/extension/ + +![image](./img/dashboard_2.png) + +Then, simply click on "Add to Chrome". + +![image](./img/dashboard_3.png) + +Then, confirm by clicking on "Add extension". + +![image](./img/dashboard_4.png) + +You can now access the extension by clicking on the browser's extension button on the top right of the screen, and then clicking on *polkadot{.js} extension*: + +![image](./img/dashboard_5.png) + +Make sure to carefully read the Polkadot message, then click on **Understood, let me continue**: + +![image](./img/dashboard_6.png) + +Then click on the **plus** symbol to create a new account: + +![image](./img/dashboard_7.png) + +For this next step, you should be very careful. Your seed phrase is your only access to your account. Make sure to keep a copy somewhere safe and offline. + +![image](./img/dashboard_8.png) + +Afterwards, choose a name for your account and a password: + +![image](./img/dashboard_9.png) + +Your account is now created. 
You can see it when you open the Polkadot extension on your browser: + +![image](./img/dashboard_10.png) + +Now, when you go to the [Threefold Dashboard](https://dashboard.grid.tf/), you can click on the **Connect** button on the top right corner: + +![image](./img/dashboard_11.png) + +You will then need to grant the Threefold Dashboard access to your Polkadot account. + +![image](./img/dashboard_12.png) + +Then, simply click on your account name to access the Threefold Dashboard: + +![image](./img/dashboard_14.png) + +Read and accept the Terms and Conditions: + +![image](./img/dashboard_15.png) + +You will be asked to confirm the transaction: enter your password and click on **Sign the transaction** to confirm. + +![image](./img/dashboard_13.png) + +Once you open your account, you can choose a relay for it, then click on **Create**. + +![image](./img/dashboard_relay.png) + +You will also be asked to confirm the transaction. + +![image](./img/dashboard_13.png) + +That's it! You've successfully created an account on the TFChain thanks to the Polkadot extension. You can now access the Threefold Dashboard. + +On to the next section, where we will transfer (or swap) TFT from the Stellar Chain on your Threefold Connect app wallet to the TFChain on the Threefold Dashboard. + +You'll see, this is so easy thanks to the Threefold Dashboard configuration. + +*** + +## Transfer TFT from Stellar Chain to TFChain + +On the [Threefold Dashboard](https://dashboard.grid.tf/), click on the **Portal**, then click on **Swap**. + +Make sure the chain **stellar** is selected. Then click **Deposit**, as we want to deposit TFT from the Stellar Chain to the TFChain. + +![image](./img/dashboard_16.png) + +Next, you will scan the QR code shown on the screen with the Threefold Connect app. + +> Note that you can also manually enter the Stellar Chain address and the Twin ID. 
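If you enter the Stellar address manually, it helps to know that a Stellar public address is a 56-character string starting with `G`, using the base32 alphabet. A minimal format check (a sketch only — it does not decode or verify the checksum embedded in real Stellar addresses):

```python
# Sketch: format check for a Stellar public address ("strkey"): 56
# characters, starts with 'G', base32 alphabet (A-Z, 2-7). It does NOT
# verify the CRC16 checksum that real strkeys embed.
BASE32_ALPHABET = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567")

def looks_like_stellar_address(addr: str) -> bool:
    return (
        len(addr) == 56
        and addr.startswith("G")
        and set(addr) <= BASE32_ALPHABET
    )

print(looks_like_stellar_address("G" + "A" * 55))  # True (format only)
```

A check like this catches truncated or mistyped addresses before you submit the deposit form.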
+ +![image](./img/dashboard_17.png) + +To scan the QR code on the Threefold Connect app, follow these steps: + +Click on the menu button: + +![image](./img/dashboard_18.png) + +Click on **Wallet**: + +![image](./img/dashboard_19.png) + +Then, click on **Send Coins**: + +![image](./img/dashboard_20.png) + +On the next page, select the **Stellar** chain, then click on **SCAN QR**: + +![image](./img/dashboard_21.png) + +This will automatically fill in the correct address and twin ID. + +You can now enter the amount of TFT you wish to send, and then click **SEND**. + +> We recommend trying with a small amount of TFT first to make sure everything is OK. +> +> The transfer fee is 1 TFT per transfer. + +![image](./img/dashboard_22.png) + +You will then simply need to confirm the transaction. It is a good opportunity to make sure everything is OK. + +![image](./img/dashboard_23.png) + +You should then receive your TFT on your Dashboard account within a few minutes. + +You can see your TFT balance on the top of the screen. Here's an example of what it could look like: + +![image](./img/dashboard_24.png) + +> Note: You might need to refresh (reload) the webpage to see the new TFT added to the account. + +That's it! You've swapped TFT from Stellar Chain to TFChain. 
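The flat fee of 1 TFT per transfer mentioned above makes the arithmetic simple; a minimal sketch (illustrative only — always confirm the fee shown in the app at transfer time):

```python
# Illustrative only: net TFT credited on TFChain after the flat 1 TFT
# bridge fee mentioned above. Not an official fee calculator.
BRIDGE_FEE_TFT = 1.0

def net_received(amount_sent: float, fee: float = BRIDGE_FEE_TFT) -> float:
    """Return the TFT credited after deducting the flat bridge fee."""
    if amount_sent <= fee:
        raise ValueError("amount must exceed the bridge fee")
    return amount_sent - fee

print(net_received(50.0))  # 49.0
```

Sending an amount at or below the fee would leave nothing to credit, which is another reason to start with a small but not tiny test transfer.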
diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_1.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_1.png new file mode 100644 index 0000000..0db09d3 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_1.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_2.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_2.png new file mode 100644 index 0000000..b7b21f2 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_2.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_3.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_3.png new file mode 100644 index 0000000..85b01b2 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_3.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_4.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_4.png new file mode 100644 index 0000000..753c0dd Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_4.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_5.png 
b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_5.png new file mode 100644 index 0000000..073a4a7 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_5.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/tft_ethereum.md b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/tft_ethereum.md new file mode 100644 index 0000000..67ecd6e --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_ethereum/tft_ethereum.md @@ -0,0 +1,94 @@ +

# TFT on Ethereum

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [TFT Ethereum-Stellar Bridge](#tft-ethereum-stellar-bridge) +- [TFT and Metamask](#tft-and-metamask) + - [Add TFT to Metamask](#add-tft-to-metamask) + - [Buy TFT on Metamask](#buy-tft-on-metamask) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +The TFT Stellar-Ethereum bridge serves as a vital link between the Stellar and Ethereum blockchains, enabling the seamless transfer of TFT tokens between these two networks. This bridge enhances interoperability and expands the utility of TFT by allowing users to leverage the strengths of both platforms. With the bridge in place, TFT holders can convert their tokens from the Stellar network to the Ethereum network and vice versa, unlocking new possibilities for engagement with decentralized applications, smart contracts, and the vibrant Ethereum ecosystem. This bridge promotes liquidity, facilitates cross-chain transactions, and encourages collaboration between the Stellar and Ethereum communities. + +*** + +## TFT Ethereum-Stellar Bridge + +The easiest way to transfer TFT between Ethereum and Stellar is to use the [TFT Ethereum Bridge](https://bridge.eth.threefold.io). We present here the main steps on how to use this bridge. + +When you go to the [TFT Ethereum-Stellar bridge website](https://bridge.eth.threefold.io/), connect your Ethereum wallet. Then the bridge will present a QR code which you scan with your Stellar wallet. This will populate a transaction with the bridge wallet as the destination and an encoded form of your Ethereum address as the memo. The bridge will scan the transaction, decode the Ethereum wallet address, and deliver newly minted TFT on Ethereum, minus the bridge fees. + +For the reverse operation, going from Ethereum to Stellar, there is a smart contract interaction that burns TFT on Ethereum while embedding your Stellar wallet address. 
The bridge will scan that transaction and release TFT from its vault wallet to the specified Stellar address, again minus the bridge fees. + +Note that the contract address for TFT on Ethereum is the following: `0x395E925834996e558bdeC77CD648435d620AfB5b`. + +To see the ThreeFold Token on Etherscan, check [this link](https://etherscan.io/token/0x395E925834996e558bdeC77CD648435d620AfB5b). + +*** + +## TFT and Metamask + +The ThreeFold Token (TFT) is available on Ethereum. +It is implemented as a wrapped asset with the following token address: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +We present here the basic steps to add TFT to Metamask. We also show how to buy TFT on Metamask. Finally, we present the simple steps to use the [TFT Ethereum Bridge](https://bridge.eth.threefold.io/). + +*** + +### Add TFT to Metamask + +Open Metamask and import the ThreeFold Token. First click on `import tokens`: + +![Metamask-Main|297x500](./img/tft_on_ethereum_image_1.png) + +Then, choose `Custom Token`: + +![Metamask-ImportToken|298x500](./img/tft_on_ethereum_image_2.png) + +To add the ThreeFold Token, paste its Ethereum address in the `Token contract address` field. The address is the following: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +Once you paste the TFT contract address, the parameter `Token symbol` should automatically be filled with `TFT`. + +Click on the button `Add Custom Token`. + +![Metamask-importCustomToken|297x500](./img/tft_on_ethereum_image_3.png) + +To confirm, click on the button `Import tokens`: + +![Metamask-ImporttokensQuestion|298x500](./img/tft_on_ethereum_image_4.png) + +TFT is now added to Metamask. + +*** + +### Buy TFT on Metamask + +Liquidity is present on Ethereum, so you can use the "Swap" functionality from Metamask directly or go to [Uniswap](https://app.uniswap.org/#/swap) to swap Ethereum, or any other token, to TFT. 
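Whether you paste the token address into Metamask or Uniswap, a quick format check can catch copy-paste mistakes; a minimal sketch (it checks format only, not the EIP-55 capitalization checksum, and it cannot tell you whether the address is really the TFT contract):

```python
# Sketch: verify a string looks like a 20-byte, 0x-prefixed hex Ethereum
# address. Format check only; does NOT verify the EIP-55 checksum.
def looks_like_eth_address(addr: str) -> bool:
    if not addr.startswith("0x") or len(addr) != 42:
        return False
    try:
        int(addr[2:], 16)  # the 40 remaining characters must be hex
        return True
    except ValueError:
        return False

print(looks_like_eth_address("0x395E925834996e558bdeC77CD648435d620AfB5b"))  # True
```

A truncated paste (a very common mistake) fails the length check immediately, before any funds are at risk.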
+ +When using Uniswap, paste the TFT token address in the `Select a token` field to select TFT on Ethereum. The TFT token address is the following: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +![Uniswap-selecttoken|315x500](./img/tft_on_ethereum_image_5.png) + +*** + +## Questions and Feedback + +If you have any questions, feel free to write a post on the [Threefold Forum](https://forum.threefold.io/). \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_1.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_1.png new file mode 100644 index 0000000..265dcc1 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_1.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_10.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_10.png new file mode 100644 index 0000000..37e04bb Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_10.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_11.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_11.png new file mode 100644 index 0000000..6f05038 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_11.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_12.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_12.png new file mode 100644 index 0000000..ad81a3d Binary files /dev/null and 
b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_12.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_13.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_13.png new file mode 100644 index 0000000..8d808cc Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_13.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_14.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_14.png new file mode 100644 index 0000000..014edde Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_14.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_15.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_15.png new file mode 100644 index 0000000..895d432 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_15.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_16.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_16.png new file mode 100644 index 0000000..b8ca3c9 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_16.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_17.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_17.png new file mode 100644 index 
0000000..5919d0c Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_17.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_18.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_18.png new file mode 100644 index 0000000..8ea142f Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_18.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_19.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_19.png new file mode 100644 index 0000000..14688ab Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_19.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_2.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_2.png new file mode 100644 index 0000000..cb638bd Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_2.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_20.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_20.png new file mode 100644 index 0000000..b072502 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_20.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_21.png 
b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_21.png new file mode 100644 index 0000000..709e50a Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_21.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_22.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_22.png new file mode 100644 index 0000000..6e588cd Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_22.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_23.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_23.png new file mode 100644 index 0000000..b47c4f0 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_23.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_24.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_24.png new file mode 100644 index 0000000..df06bec Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_24.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_25.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_25.png new file mode 100644 index 0000000..7ba5402 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_25.png differ diff --git 
a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_26.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_26.png new file mode 100644 index 0000000..f34d4ff Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_26.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_27.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_27.png new file mode 100644 index 0000000..1de6ee5 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_27.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_28.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_28.png new file mode 100644 index 0000000..c3e8cd0 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_28.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_29.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_29.png new file mode 100644 index 0000000..888067f Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_29.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_3.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_3.png new file mode 100644 index 0000000..4a18f4c Binary files /dev/null and 
b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_3.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_30.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_30.png new file mode 100644 index 0000000..f28e697 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_30.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_31.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_31.png new file mode 100644 index 0000000..84fe32e Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_31.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_32.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_32.png new file mode 100644 index 0000000..3ab05eb Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_32.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_33.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_33.png new file mode 100644 index 0000000..b30050a Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_33.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_34.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_34.png new file mode 100644 index 
0000000..553db13 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_34.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_4.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_4.png new file mode 100644 index 0000000..b2a0d03 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_4.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_5.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_5.png new file mode 100644 index 0000000..2b28aef Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_5.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_6.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_6.png new file mode 100644 index 0000000..a3601b3 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_6.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_7.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_7.png new file mode 100644 index 0000000..879a735 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_7.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_8.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_8.png new 
file mode 100644 index 0000000..b1a9321 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_8.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_9.png b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_9.png new file mode 100644 index 0000000..eb2e80e Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_9.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/tft_lobstr.md b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/tft_lobstr.md new file mode 100644 index 0000000..ddca2d4 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_lobstr/tft_lobstr.md @@ -0,0 +1,216 @@ +

# Threefold Token: Buy TFT on Lobstr

+ +
+ +
+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Download the App and Create an Account](#download-the-app-and-create-an-account) +- [Connect Your TF Connect App Wallet](#connect-your-tf-connect-app-wallet) +- [Buy XLM with Fiat Currency](#buy-xlm-with-fiat-currency) +- [Swap XLM for TFT](#swap-xlm-for-tft) + +*** + +## Introduction + +The Threefold token (TFT) is the utility token of the Threefold Grid, a decentralized and open-source project offering network, compute and storage capacity. + +Threefold Tokens (TFT) are created (minted) by the ThreeFold Blockchain (TFChain) only when new Internet capacity is added to the ThreeFold Grid by farmers. For this reason, TFT is a pure utility token as minting is solely the result of farming on the Threefold Grid. + +* To **farm** TFT, read the [complete farming guide](https://forum.threefold.io/t/threefold-farming-guide-part-1/2989). + +* To **buy** TFT, follow this guide. + +There are many ways to buy TFT: + +* You can buy TFT on [Lobstr](https://lobstr.co/) + +* You can buy TFT at [GetTFT.com](https://gettft.com/gettft/) + +* You can buy TFT on [Pancake Swap](https://pancakeswap.finance/swap?inputCurrency=BNB&outputCurrency=0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf) + +For the current guide, we will show how to buy TFT on the [Lobstr app](https://lobstr.co/). +The process is simple. + +Note that it is possible to do these steps without connecting the Lobstr wallet to the TF Connect App wallet. But connecting the two has a clear advantage: when we buy and swap on Lobstr, the TFT is directly accessible on the TF Connect app wallet. + +Here we go! + +*** + +## Download the App and Create an Account + +Go to [www.lobstr.co](https://www.lobstr.co) and download the Lobstr app. +You can download it for Android or iOS. + +![image](./img/gettft_1.png) + +We will show here the steps for Android, but it is very similar on iOS. 
+Once you've clicked on the Android button, you can click **Install** on the Google Play Store page: + +![image](./img/gettft_2.png) + +Once the app is downloaded, open it: + +![image](./img/gettft_3.png) + +On the Lobstr app, click on **Create Account**: + +![image](./img/gettft_4.png) + +You will then need to enter your email address: + +![image](./img/gettft_5.png) + +Then, choose a safe password for your account: + +![image](./img/gettft_6.png) + +Once this is done, you will need to verify your email. + +Click on **Verify Email** and then go check your email inbox. + +![image](./img/gettft_7.png) + +Simply click on **Verify Email** on the email you've received. + +![image](./img/gettft_8.png) + +Once your email is verified, you can sign in to your Lobstr account: + +![image](./img/gettft_9.png) + +![image](./img/gettft_10.png) + +*** + +## Connect Your TF Connect App Wallet + +You will then need to either create a new wallet or connect an existing wallet. + +Since we are working on the Threefold ecosystem, it is very easy and practical to simply connect your Threefold Connect app wallet. You can also create a new wallet. + +Using the TF Connect wallet is very useful and quick. When you buy XLM and swap XLM tokens for TFTs, they will be directly available on your TF Connect app wallet. + +![image](./img/gettft_11.png) + +To connect your TF Connect app wallet, you will need to find your Stellar address and secret key. +This is very simple to do. + +Click on **I have a public or secret key**. 
+ +![image](./img/gettft_12.png) + +As you can see on this next picture, you need the Stellar address and secret key to properly connect your TF Connect app wallet to Lobstr: + +![image](./img/gettft_18.png) + +To find your Stellar address and secret key, go to the TF Connect app and select the **Wallet** section: + +![image](./img/gettft_13.png) + +At the top of the section, click on the **copy** button to copy your Stellar Address: + +![image](./img/gettft_17.png) + +Now, we will find the Stellar secret key. +At the bottom of the section, click on the encircled **i** button: + +![image](./img/gettft_14.png) + +Next, click on the **eye** button to reveal your secret key: + +![image](./img/gettft_15.png) + +You can now simply click on the **copy** button on the right: + +![image](./img/gettft_16.png) + +That's it! You've now connected your TF Connect app wallet to your Lobstr account. + +## Buy XLM with Fiat Currency + +Now, all we need to do is buy XLM and then swap it for TFT. +It will be directly available in your TF Connect App wallet. + +On the Lobstr app, click on the top right menu button: + +![image](./img/gettft_19.png) + +Then, click on **Buy Crypto**: + +![image](./img/gettft_20.png) + +By default, the crypto selected is XLM. This is alright for us as we will quickly swap the XLM for TFT. + +On the Buy Crypto page, you can choose the type of fiat currency you want. +By default, it is USD. To select some other fiat currency, you can click on **ALL** and see the available fiat currencies: + +![image](./img/gettft_21.png) + +You can search or select the currency you want for the transfer: + +![image](./img/gettft_22.png) + +You will then need to decide how much XLM you want to buy. Note that there can be a minimum amount. +Once you've chosen the desired amount, click on **Continue**. + +![image](./img/gettft_23.png) + +Lobstr will then ask you to proceed to a payment method. In this case, it is Moonpay. 
+Note that in some cases, your credit card won't accept Moonpay payments. You will simply need to confirm with them that you agree to transact with Moonpay. This can be done by phone. Check with your bank and credit card company if this applies.
+
+![image](./img/gettft_24.png)
+
+Once you've set up your Moonpay payment method, you will need to process and confirm the transaction:
+
+![image](./img/gettft_25.png)
+![image](./img/gettft_26.png)
+
+You will then see a processing window.
+This process is usually fast. Within a few minutes, you should receive your XLM.
+
+![image](./img/gettft_27.png)
+
+Once the XLM is delivered, you will receive a notification:
+
+![image](./img/gettft_28.png)
+
+When your transaction is complete, you will see this message:
+
+![image](./img/gettft_29.png)
+
+On the Trade History page, you can choose to download a CSV file of your transaction:
+
+![image](./img/gettft_30.png)
+
+That's it! You've bought XLM via Lobstr and Moonpay.
+
+## Swap XLM for TFT
+
+Now we want to swap the XLM tokens for Threefold tokens (TFT).
+This is even easier than the previous steps.
+
+Go to the Lobstr Home menu and select **Swap**:
+
+![image](./img/gettft_31.png)
+
+On the **Swap** page, write "tft" and select the Threefold token:
+
+![image](./img/gettft_32.png)
+
+Select the amount of XLM you want to swap. It is recommended to keep at least 1 XLM in your wallet for transaction fees.
+
+![image](./img/gettft_33.png)
+
+Within a few seconds, you will receive a confirmation that your swap is completed.
+Note that the TFT is sent directly to your TF Connect app wallet.
+
+![image](./img/gettft_34.png)
+
+That's it! You've swapped XLM for TFT.
+
+You can now use your TFT to deploy workloads on the Threefold Grid.
diff --git a/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_toc.md b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_toc.md new file mode 100644 index 0000000..2a25e32 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/TF_Token/tft_toc.md @@ -0,0 +1,6 @@ +

ThreeFold Token

+ +

Table of Contents

+ +- [TFT on Lobstr](../TF_Token/tft_lobstr/tft_lobstr.html) +- [TFT on Ethereum](../TF_Token/tft_ethereum/tft_ethereum.html) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/img/endlessscalable.png b/collections/manual/documentation/system_administrators/getstarted/img/endlessscalable.png new file mode 100644 index 0000000..90cede7 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/img/endlessscalable.png differ diff --git a/collections/manual/documentation/system_administrators/getstarted/img/network_concepts_.jpg b/collections/manual/documentation/system_administrators/getstarted/img/network_concepts_.jpg new file mode 100644 index 0000000..c08deae Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/img/network_concepts_.jpg differ diff --git a/collections/manual/documentation/system_administrators/getstarted/img/peer2peer_net_.jpg b/collections/manual/documentation/system_administrators/getstarted/img/peer2peer_net_.jpg new file mode 100644 index 0000000..bbc21f0 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/img/peer2peer_net_.jpg differ diff --git a/collections/manual/documentation/system_administrators/getstarted/img/stfgrid3_storage_concepts_.jpg b/collections/manual/documentation/system_administrators/getstarted/img/stfgrid3_storage_concepts_.jpg new file mode 100644 index 0000000..67f7fd8 Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/img/stfgrid3_storage_concepts_.jpg differ diff --git a/collections/manual/documentation/system_administrators/getstarted/img/webgw_.jpg b/collections/manual/documentation/system_administrators/getstarted/img/webgw_.jpg new file mode 100644 index 0000000..f555d0f Binary files /dev/null and b/collections/manual/documentation/system_administrators/getstarted/img/webgw_.jpg differ diff --git 
a/collections/manual/documentation/system_administrators/getstarted/planetarynetwork.md b/collections/manual/documentation/system_administrators/getstarted/planetarynetwork.md new file mode 100644 index 0000000..89ccf84 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/planetarynetwork.md @@ -0,0 +1,229 @@ + +

Planetary Network

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Install](#install) +- [Run](#run) + - [Linux](#linux) + - [MacOS](#macos) +- [Test Connectivity](#test-connectivity) +- [Firewalls](#firewalls) + - [Linux](#linux-1) + - [MacOS](#macos-1) +- [Get Yggdrasil IP](#get-yggdrasil-ip) +- [Add Peers](#add-peers) +- [Clients](#clients) +- [Peers](#peers) + - [Central europe](#central-europe) + - [Ghent](#ghent) + - [Austria](#austria) +- [Peers config for usage in every Yggdrasil - Planetary Network client](#peers-config-for-usage-in-every-yggdrasil---planetary-network-client) + +*** + +## Introduction + +In a first phase, to get started, you need to launch the planetary network by running [Yggdrasil](https://yggdrasil-network.github.io) from the command line. + +Yggdrasil is an implementation of a fully end-to-end encrypted IPv6 network. It is lightweight, self-arranging, supported on multiple platforms, and allows pretty much any IPv6-capable application to communicate securely with other nodes on the network. Yggdrasil does not require you to have IPv6 Internet connectivity - it also works over IPv4. + +## Install + +Yggdrasil is necessary for communication between your local machine and the nodes on the Grid that you deploy to. Binaries and packages are available for all major operating systems, or it can be built from source. Find installation instructions here. + +After installation, you'll need to add at least one publicly available peer to your Yggdrasil configuration file. By default on Unix based systems, you'll find the file at `/etc/yggdrasil.conf`. To find peers, check this site, which compiles and displays the peer information available on Github. + +Add peers to your configuration file like so: + +``` +Peers: ["PEER_URL:PORT", "PEER_URL:PORT", ...] 
+```
+
+Please consult the [yggdrasil installation page](https://yggdrasil-network.github.io/installation.html) for more information and a list of available clients.
+
+## Run
+
+### Linux
+
+On Linux with `systemd`, Yggdrasil can be started and enabled as a service, or run manually from the command line:
+
+```
+sudo yggdrasil -useconffile /etc/yggdrasil.conf
+```
+
+Get your IPv6 address with the following command:
+
+```
+yggdrasilctl getSelf
+```
+
+### MacOS
+
+The MacOS package will automatically install and start the `launchd` service. After adding peers to your config file, restart Yggdrasil by stopping the service (it will be restarted automatically):
+
+```
+sudo launchctl stop yggdrasil
+```
+
+Get your IPv6 address with the following command:
+
+```
+sudo yggdrasilctl getSelf
+```
+
+## Test Connectivity
+
+To ensure that you have successfully connected to the Yggdrasil network, try loading this test address in your browser:
+
+```
+http://[319:3cf0:dd1d:47b9:20c:29ff:fe2c:39be]/
+```
+
+## Firewalls
+
+Creating deployments on the Grid also requires that nodes can reach your machine. This means that a local firewall preventing inbound connections will cause deployments to fail.
+
+### Linux
+
+On systems using `iptables`, check:
+
+```
+sudo ip6tables -S INPUT
+```
+
+If the first line is `-P INPUT DROP`, then all inbound connections over IPv6 will be blocked. To open inbound connections, run:
+
+```
+sudo ip6tables -P INPUT ACCEPT
+```
+
+To make this persist after a reboot, run:
+
+```
+sudo ip6tables-save
+```
+
+If you'd rather close the firewall again after you're done, use:
+
+```
+sudo ip6tables -P INPUT DROP
+```
+
+### MacOS
+
+The MacOS system firewall is disabled by default. You can check your firewall settings according to the instructions here.
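The Linux firewall check above can also be scripted so the policy is only opened when it actually blocks inbound traffic. The sketch below is a minimal illustration under that assumption; the `input_policy` helper name is hypothetical, and in real use you would pipe in the output of `sudo ip6tables -S INPUT` instead of the sample text:

```shell
#!/bin/sh
# Hypothetical helper: extract the chain policy ("ACCEPT" or "DROP")
# from `ip6tables -S INPUT` output, whose first line looks like
# "-P INPUT DROP".
input_policy() {
    head -n 1 | awk '{ print $3 }'
}

# Real usage would be:  policy=$(sudo ip6tables -S INPUT | input_policy)
# Here we feed sample output so the sketch runs without root:
policy=$(printf -- '-P INPUT DROP\n-A INPUT -i lo -j ACCEPT\n' | input_policy)

if [ "$policy" = "DROP" ]; then
    echo "Inbound IPv6 is blocked; run: sudo ip6tables -P INPUT ACCEPT"
else
    echo "Inbound IPv6 is open"
fi
```

This keeps the firewall closed by default and only tells you to open it when deployments would otherwise fail.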
+
+## Get Yggdrasil IP
+
+Once Yggdrasil is installed, you can find your Yggdrasil IP address using this command on both Linux and Mac:
+
+```
+yggdrasil -useconffile /etc/yggdrasil.conf -address
+```
+
+You'll need this address when registering your twin on TFChain later.
+
+## Add Peers
+
+- Add the needed [peers](https://publicpeers.neilalexander.dev/) to the `Peers` section of the generated config file.
+
+  **Example**:
+
+```
+ Peers:
+ [
+ tls://54.37.137.221:11129
+ ]
+```
+
+- Restart Yggdrasil:
+
+```
+sudo systemctl restart yggdrasil
+```
+
+## Clients
+
+- [Planetary network connector](https://github.com/threefoldtech/planetary_network)
+
+## Peers
+
+### Central europe
+
+#### Ghent
+
+- tcp://gent01.grid.tf:9943
+- tcp://gent02.grid.tf:9943
+- tcp://gent03.grid.tf:9943
+- tcp://gent04.grid.tf:9943
+- tcp://gent01.test.grid.tf:9943
+- tcp://gent02.test.grid.tf:9943
+- tcp://gent01.dev.grid.tf:9943
+- tcp://gent02.dev.grid.tf:9943
+
+### Austria
+
+- tcp://gw291.vienna1.greenedgecloud.com:9943
+- tcp://gw293.vienna1.greenedgecloud.com:9943
+- tcp://gw294.vienna1.greenedgecloud.com:9943
+- tcp://gw297.vienna1.greenedgecloud.com:9943
+- tcp://gw298.vienna1.greenedgecloud.com:9943
+- tcp://gw299.vienna2.greenedgecloud.com:9943
+- tcp://gw300.vienna2.greenedgecloud.com:9943
+- tcp://gw304.vienna2.greenedgecloud.com:9943
+- tcp://gw306.vienna2.greenedgecloud.com:9943
+- tcp://gw307.vienna2.greenedgecloud.com:9943
+- tcp://gw309.vienna2.greenedgecloud.com:9943
+- tcp://gw313.vienna2.greenedgecloud.com:9943
+- tcp://gw324.salzburg1.greenedgecloud.com:9943
+- tcp://gw326.salzburg1.greenedgecloud.com:9943
+- tcp://gw327.salzburg1.greenedgecloud.com:9943
+- tcp://gw328.salzburg1.greenedgecloud.com:9943
+- tcp://gw330.salzburg1.greenedgecloud.com:9943
+- tcp://gw331.salzburg1.greenedgecloud.com:9943
+- tcp://gw333.salzburg1.greenedgecloud.com:9943
+- tcp://gw422.vienna2.greenedgecloud.com:9943
+- tcp://gw423.vienna2.greenedgecloud.com:9943
+- tcp://gw424.vienna2.greenedgecloud.com:9943
+- tcp://gw425.vienna2.greenedgecloud.com:9943 + +## Peers config for usage in every Yggdrasil - Planetary Network client + +``` + Peers: + [ + # Threefold Lochrist + tcp://gent01.grid.tf:9943 + tcp://gent02.grid.tf:9943 + tcp://gent03.grid.tf:9943 + tcp://gent04.grid.tf:9943 + tcp://gent01.test.grid.tf:9943 + tcp://gent02.test.grid.tf:9943 + tcp://gent01.dev.grid.tf:9943 + tcp://gent02.dev.grid.tf:9943 + # GreenEdge + tcp://gw291.vienna1.greenedgecloud.com:9943 + tcp://gw293.vienna1.greenedgecloud.com:9943 + tcp://gw294.vienna1.greenedgecloud.com:9943 + tcp://gw297.vienna1.greenedgecloud.com:9943 + tcp://gw298.vienna1.greenedgecloud.com:9943 + tcp://gw299.vienna2.greenedgecloud.com:9943 + tcp://gw300.vienna2.greenedgecloud.com:9943 + tcp://gw304.vienna2.greenedgecloud.com:9943 + tcp://gw306.vienna2.greenedgecloud.com:9943 + tcp://gw307.vienna2.greenedgecloud.com:9943 + tcp://gw309.vienna2.greenedgecloud.com:9943 + tcp://gw313.vienna2.greenedgecloud.com:9943 + tcp://gw324.salzburg1.greenedgecloud.com:9943 + tcp://gw326.salzburg1.greenedgecloud.com:9943 + tcp://gw327.salzburg1.greenedgecloud.com:9943 + tcp://gw328.salzburg1.greenedgecloud.com:9943 + tcp://gw330.salzburg1.greenedgecloud.com:9943 + tcp://gw331.salzburg1.greenedgecloud.com:9943 + tcp://gw333.salzburg1.greenedgecloud.com:9943 + tcp://gw422.vienna2.greenedgecloud.com:9943 + tcp://gw423.vienna2.greenedgecloud.com:9943 + tcp://gw424.vienna2.greenedgecloud.com:9943 + tcp://gw425.vienna2.greenedgecloud.com:9943 + ] +``` + diff --git a/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md b/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md new file mode 100644 index 0000000..6a08a66 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md @@ -0,0 +1,186 @@ +

Deploy a Full VM and Run Cockpit, a Web-based Interface for Servers

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Full VM and Create a Root-Access User](#deploy-a-full-vm-and-create-a-root-access-user) +- [Set the VM and Install Cockpit](#set-the-vm-and-install-cockpit) +- [Change the Network System Daemon](#change-the-network-system-daemon) +- [Set a Firewall](#set-a-firewall) +- [Access Cockpit](#access-cockpit) +- [Conclusion](#conclusion) +- [Acknowledgements and References](#acknowledgements-and-references) + +*** + +## Introduction + +In this Threefold Guide, we show how easy it is to deploy a full VM and access Cockpit, a web-based interface to manage servers. For more information on Cockpit, visit this [link](https://cockpit-project.org/). + +For more information on deploying a full VM and using SSH remote connection, read [this SSH guide](../../ssh_guide/ssh_guide.md). + +If you are new to the Threefold ecosystem and you want to deploy workloads on the Threefold Grid, read the [Get Started section](../../tfgrid3_getstarted.md) of the Threefold Manual. + +Note that the two sections [Change the Network System Daemon](#change-the-network-system-daemon) and [Set a Firewall](#set-a-firewall) are optional. That being said, they provide more features and security to the deployment. + + + +## Deploy a Full VM and Create a Root-Access User + +To start, you must [deploy and SSH into a full VM](../../ssh_guide/ssh_guide.md). + +* Go to the [Threefold dashboard](https://dashboard.grid.tf/#/) +* Deploy a full VM (e.g. 
Ubuntu 22.04) + * With an IPv4 Address +* After deployment, copy the IPv4 address +* Connect into the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` +* Create a new user with root access + * Here we use `newuser` as an example + * ``` + adduser newuser + ``` + * To see the directory of the new user + * ``` + ls /home + ``` + * Give sudo capacity to the new user + * ``` + usermod -aG sudo newuser + ``` + * Make the new user accessible by SSH + * ``` + su - newuser + ``` + * ``` + mkdir ~/.ssh + ``` + * ``` + nano ~/.ssh/authorized_keys + ``` + * add the authorized public key in the file, then save and quit + * Exit the VM and reconnect with the new user + * ``` + exit + ``` + * ``` + ssh newuser@VM_IPv4_address + ``` + + + +## Set the VM and Install Cockpit + +* Update and upgrade the VM + * ``` + sudo apt update -y && sudo apt upgrade -y && sudo apt-get update -y + ``` +* Install Cockpit + * ``` + . /etc/os-release && sudo apt install -t ${UBUNTU_CODENAME}-backports cockpit -y + ``` + + + +## Change the Network System Daemon + +We now change the system daemon that manages network configurations. We will be using [NetworkManager](https://networkmanager.dev/) instead of [networkd](https://wiki.archlinux.org/title/systemd-networkd). This will give us further possibilities on Cockpit. + +* Install NetworkManager. Note that it might already be installed. 
+ * ``` + sudo apt install network-manager -y + ``` +* Update the `.yaml` file + * Go to netplan's directory + * ``` + cd /etc/netplan + ``` + * Search for the proper `.yaml` file name + * ``` + ls -l + ``` + * Update the `.yaml` file + * ``` + sudo nano 50-cloud-init.yaml + ``` + * Add the following lines under `network:` + * ``` + version: 2 + renderer: NetworkManager + ``` + * Note that these two lines should be aligned with `ethernets:` + * Remove `version: 2` at the bottom of the file + * Save and exit the file +* Disable networkd and enable NetworkManager + * ``` + sudo systemctl disable systemd-networkd + ``` + * ``` + sudo systemctl enable NetworkManager + ``` +* Apply netplan to set NetworkManager + * ``` + sudo netplan apply + ``` +* Reboot the system to load the new kernel and to properly set NetworkManager + * ``` + sudo reboot + ``` +* Reconnect to the VM + * ``` + ssh newuser@VM_IPv4_address + ``` + + +## Set a Firewall + +We now set a firewall. We note that [ufw](https://wiki.ubuntu.com/UncomplicatedFirewall) is not compatible with Cockpit and for this reason, we will be using [firewalld](https://firewalld.org/). 
+ +* Install firewalld + * ``` + sudo apt install firewalld -y + ``` + +* Add Cockpit to firewalld + * ``` + sudo firewall-cmd --add-service=cockpit + ``` + * ``` + sudo firewall-cmd --add-service=cockpit --permanent + ``` +* See if Cockpit is available + * ``` + sudo firewall-cmd --info-service=cockpit + ``` + +* See the status of firewalld + * ``` + sudo firewall-cmd --state + ``` + + + +## Access Cockpit + +* On your web browser, write the following URL with the proper VM IPv4 address + * ``` + VM_IPv4_Address:9090 + ``` +* Enter the username and password of the root-access user +* You might need to grant administrative access to the user + * On the top right of the Cockpit window, click on `Limited access` + * Enter the root-access user password then click `Authenticate` + + + +## Conclusion + +You now have access to a web-based graphical interface to manage your VM. You can read [Cockpit's documentation](https://cockpit-project.org/documentation.html) to explore further this interface. + + + +## Acknowledgements and References + +A big thank you to Drew Smith for his [advice on using NetworkManager](https://forum.threefold.io/t/cockpit-managed-ubuntu-vm/3376) instead of networkd with Cockpit. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md b/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md new file mode 100644 index 0000000..d41569e --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md @@ -0,0 +1,184 @@ +

Deploy a Full VM and Run Apache Guacamole (RDP Connection, Remote Desktop)

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Full VM and Create a Root-Access User](#deploy-a-full-vm-and-create-a-root-access-user) +- [SSH with Root-Access User, Install Prerequisites and Apache Guacamole](#ssh-with-root-access-user-install-prerequisites-and-apache-guacamole) +- [Access Apache Guacamole and Create Admin-Access User](#access-apache-guacamole-and-create-admin-access-user) +- [Download the Desktop Environment and Run xrdp](#download-the-desktop-environment-and-run-xrdp) +- [Create an RDP Connection and Access the Server Remotely](#create-an-rdp-connection-and-access-the-server-remotely) +- [Feedback and Questions](#feedback-and-questions) +- [References](#references) + +*** + +## Introduction + +In this guide, we deploy a full virtual machine (Ubuntu 20.04) on the Threefold Grid with IPv4. We install and run [Apache Guacamole](https://guacamole.apache.org/) and access the VM with remote desktop connection by using [xrdp](https://www.xrdp.org/). + +The Apache Guacamole instance has a two-factor authorization to give further security to the deployment. + +With Apache Guacamole, a user can access different deployments and command servers remotely, with desktop access. + +This guide can be done on a Windows, MAC, or Linux computer. For more information on deploying a full VM and using SSH remote connection, read this [SSH guide](../../ssh_guide/ssh_guide.md). + +If you are new to the Threefold ecosystem and you want to deploy workloads on the Threefold Grid, read the [Get Started section](../../tfgrid3_getstarted.md) of the Threefold Manual. 
+ + + +## Deploy a Full VM and Create a Root-Access User + +* Go to the [Threefold Dashboard](https://dashboard.grid.tf/#/) +* Deploy a full VM (Ubuntu 20.04) with at least the minimum specs for a desktop environment + * IPv4 Address + * Minimum vcores: 2vcores + * Minimum Gb of RAM: 4Gb + * Minimum storage: 15Gb +* After deployment, note the VM IPv4 address +* Connect to the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` +* Once connected, create a new user with root access (for this guide we use "newuser") + * ``` + adduser newuser + ``` + * You should now see the new user directory + * ``` + ls /home + ``` + * Give sudo capacity to the new user + * ``` + usermod -aG sudo newuser + ``` + * Make the new user accessible by SSH + * ``` + su - newuser + ``` + * ``` + mkdir ~/.ssh + ``` + * Add authorized public key in the file and save it + * ``` + nano ~/.ssh/authorized_keys + ``` +* Exit the VM and reconnect with the new user + + + +## SSH with Root-Access User, Install Prerequisites and Apache Guacamole + +* SSH into the VM + * ``` + ssh newuser@VM_IPv4_address + ``` +* Update and upgrade Ubuntu + * ``` + sudo apt update && sudo apt upgrade -y && sudo apt-get install software-properties-common -y + ``` +* Download and run Apache Guacamole + * ``` + wget -O guac-install.sh https://git.io/fxZq5 + ``` + * ``` + chmod +x guac-install.sh + ``` + * ``` + sudo ./guac-install.sh + ``` + + + +## Access Apache Guacamole and Create Admin-Access User + +* On your local computer, open a browser and write the following URL with the proper IPv4 address + * ``` + https://VM_IPv4_address:8080/guacamole + ``` + * On Guacamole, enter the following for both the username and the password + * ``` + guacadmin + ``` + * Download the [TOTP](https://totp.app/) app on your Android or iOS + * Scan the QR Code + * Enter the code + * Next time you log in + * go to the TOTP app and enter the given code +* Go to the Guacamole Settings + * Users + * Create a new user with all admin 
privileges +* Log out of the session +* Enter with the new admin user +* Go to Settings + * Users + * Delete the default user +* Apache Guacamole is now installed + + + +## Download the Desktop Environment and Run xrdp + +* Download a Ubuntu desktop environment on the VM + * ``` + sudo apt install tasksel -y && sudo apt install lightdm -y + ``` + * Choose lightdm + * Run tasksel and choose `ubuntu desktop` + * ``` + sudo tasksel + ``` + +* Download and run xrdp + * ``` + wget https://c-nergy.be/downloads/xRDP/xrdp-installer-1.4.6.zip + ``` + * ``` + unzip xrdp-installer-1.4.6.zip + ``` + * ``` + bash xrdp-installer-1.4.6.sh + ``` + + + +## Create an RDP Connection and Access the Server Remotely + +* Create an RDP connection on Guacamole + * Open Guacamole + * ``` + http://VM_IPv4_address:8080/guacamole/ + ``` + * Go to Settings + * Click on Connections + * Click on New Connection + * Write the following parameters + * Name: Choose a name for the connection + * Location: ROOT + * Protocol: RDP + * Network + * Hostname: VM_IPv4_Address + * Port: 3389 + * Authentication + * Username: your root-access username (newuser) + * Password: your root-access username password (newuser) + * Security mode: Any + * Ignore server certificate: Yes + * Click Save + * Go to the Apache Guacamole Home menu (top right button) + * Click on the new connection + * The remote desktop access is done + + + +## Feedback and Questions + +If you have any questions, let us know by writing a post on the [Threefold Forum](https://forum.threefold.io/). 
+ + + +## References + +Apache Guacamole for Secure Remote Access to your Computers, [https://discussion.scottibyte.com/t/apache-guacamole-for-secure-remote-access-to-your-computers/32](https://discussion.scottibyte.com/t/apache-guacamole-for-secure-remote-access-to-your-computers/32) + +MysticRyuujin's guac-install, [https://github.com/MysticRyuujin/guac-install](https://github.com/MysticRyuujin/guac-install) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/remote_desktop_gui.md b/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/remote_desktop_gui.md new file mode 100644 index 0000000..d801b48 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/remote_desktop_gui.md @@ -0,0 +1,11 @@ +# Remote Desktop and GUI + +This section of the Threefold Guide provides different methods to access your 3node servers with either a remote desktop protocol or a graphical user interface (GUI). + +If you have any questions, or if you would like to see a specific guide on remote desktop connection or GUI, please let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/). + +

Table of Contents

+ +- [Cockpit: a Web-based Graphical Interface for Servers](./cockpit_guide/cockpit_guide.md) +- [XRDP: an Open-Source Remote Desktop Procol](./xrdp_guide/xrdp_guide.md) +- [Apache Guacamole: a Clientless Remote Desktop Gateway.](./guacamole_guide/guacamole_guide.md) diff --git a/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md b/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md new file mode 100644 index 0000000..e1648ab --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md @@ -0,0 +1,168 @@ +

Deploy a Full VM and Run XRDP for Remote Desktop Connection

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Server Side: Deploy the Full VM, install a desktop and XRDP](#server-side-deploy-the-full-vm-install-a-desktop-and-xrdp) +- [Client Side: Install Remote Desktop Connection for Windows, MAC or Linux](#client-side-install-remote-desktop-connection-for-windows-mac-or-linux) + - [Download the App](#download-the-app) + - [Connect Remotely](#connect-remotely) +- [Conclusion](#conclusion) + +*** + +## Introduction + +In this guide, we learn how to deploy a full virtual machine on a 3node on the Threefold Grid. +We access Ubuntu with a desktop environment to offer a graphical user interface (GUI). + +This guide can be done on a Windows, MAC, or Linux computer. The only difference will be in the Remote Desktop app. The steps are very similar. + +For more information on deploying a full VM and using SSH remote connection, read this [SSH guide](../../ssh_guide/ssh_guide.md). + +If you are new to the Threefold ecosystem and you want to deploy workloads on the Threefold Grid, read the [Get Started section](../../tfgrid3_getstarted.md) of the Threefold Manual. 
+ + + +## Server Side: Deploy the Full VM, install a desktop and XRDP + +* Go to the [Threefold Dashboard](https://dashboard.grid.tf/#/) +* Deploy a full VM (Ubuntu 20.04) + * With an IPv4 Address +* After deployment, copy the IPv4 address +* To SSH into the VM, write in the terminal + * ``` + ssh root@VM_IPv4_address + ``` +* Once connected, update, upgrade and install the desktop environment + * Update + * ``` + sudo apt update -y && sudo apt upgrade -y + ``` + * Install a light-weight desktop environment (Xfce) + * ``` + sudo apt install xfce4 xfce4-goodies -y + ``` +* Create a user with root access + * ``` + adduser newuser + ``` + * ``` + ls /home + ``` + * You should see the newuser directory + * Give sudo capacity to newuser + * ``` + usermod -aG sudo newuser + ``` + * Make newuser accessible by SSH + * ``` + su - newuser + ``` + * ``` + mkdir ~/.ssh + ``` + * ``` + nano ~/.ssh/authorized_keys + ``` + * add authorized public key in file and save + * Exit the VM and reconnect with new user + * ``` + exit + ``` +* Reconnect to the VM terminal and install XRDP + * ``` + ssh newuser@VM_IPv4_address + ``` +* Install XRDP + * ``` + sudo apt install xrdp -y + ``` +* Check XRDP status + * ``` + sudo systemctl status xrdp + ``` + * If not running, run manually: + * ``` + sudo systemctl start xrdp + ``` +* If needed, configure xrdp (optional) + * ``` + sudo nano /etc/xrdp/xrdp.ini + ``` +* Create a session with root-access user +Move to home directory + * Go to home directory of root-access user + * ``` + cd ~ + ``` +* Create session + * ``` + echo "xfce4-session" | tee .xsession + ``` +* Restart the server + * ``` + sudo systemctl restart xrdp + ``` + +* Find your local computer IP address + * On your local computer terminal, write + * ``` + curl ifconfig.me + ``` + +* On the VM terminal, allow client computer port to the firewall (ufw) + * ``` + sudo ufw allow from your_local_ip/32 to any port 3389 + ``` +* Allow SSH connection to your firewall + * ``` + sudo ufw 
allow ssh + ``` +* Verify status of the firewall + * ``` + sudo ufw status + ``` + * If not active, do the following: + * ``` + sudo ufw disable + ``` + * ``` + sudo ufw enable + ``` + * Then the ufw status should show changes + * ``` + sudo ufw status + ``` + + +## Client Side: Install Remote Desktop Connection for Windows, MAC or Linux + +For the client side (the local computer accessing the VM remotely), you can use remote desktop connection for Windows, MAC and Linux. The process is very similar in all three cases. + +Simply download the app, open it and write the IPv4 address of the VM. You then will need to write the username and password to enter into your VM. + +### Download the App + +* Client side Remote app + * Windows + * [Remote Desktop Connection app](https://apps.microsoft.com/store/detail/microsoft-remote-desktop/9WZDNCRFJ3PS?hl=en-ca&gl=ca&rtc=1) + * MAC + * Download in app store + * [Microsoft Remote Desktop Connection app](https://apps.apple.com/ca/app/microsoft-remote-desktop/id1295203466?mt=12) + * Linux + * [Remmina RDP Client](https://remmina.org/) + +### Connect Remotely + +* General process + * In the Remote app, enter the following: + * the IPv4 Address of the VM + * the VM root-access username and password + * You now have remote desktop connection to your VM + + + +## Conclusion + +You now have a remote access to the desktop environment of your VM. If you have any questions, let us know by writing a post on the [Threefold Forum](https://forum.threefold.io/). 
\ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/sidebar.md b/collections/manual/documentation/system_administrators/getstarted/sidebar.md new file mode 100644 index 0000000..b7edb51 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/sidebar.md @@ -0,0 +1,4 @@ +- [**Manual Home**](@manual3_home_new) +--------- +**Get Started** +!!!include:getstarted_toc \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_guide.md b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_guide.md new file mode 100644 index 0000000..020e2ad --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_guide.md @@ -0,0 +1,10 @@ +

SSH Remote Connection

+ +SSH is a secure protocol used as the primary means of connecting to Linux servers remotely. It provides a text-based interface by spawning a remote shell. After connecting, all commands you type in your local terminal are sent to the remote server and executed there. + +

Table of Contents

+ +- [SSH with OpenSSH](./ssh_openssh.md) +- [SSH with PuTTY](./ssh_putty.md) +- [SSH with WSL](./ssh_wsl.md) +- [WireGuard Access](./ssh_wireguard.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_openssh.md b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_openssh.md new file mode 100644 index 0000000..6598350 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_openssh.md @@ -0,0 +1,297 @@ +

SSH Remote Connection with OpenSSH

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps and Prerequisites](#main-steps-and-prerequisites) +- [Step-by-Step Process with OpenSSH](#step-by-step-process-with-openssh) + - [Linux](#linux) + - [SSH into a 3Node with IPv4 on Linux](#ssh-into-a-3node-with-ipv4-on-linux) + - [SSH into a 3Node with the Planetary Network on Linux](#ssh-into-a-3node-with-the-planetary-network-on-linux) + - [MAC](#mac) + - [SSH into a 3Node with IPv4 on MAC](#ssh-into-a-3node-with-ipv4-on-mac) + - [SSH into a 3Node with the Planetary Network on MAC](#ssh-into-a-3node-with-the-planetary-network-on-mac) + - [Windows](#windows) + - [SSH into a 3Node with IPv4 on Windows](#ssh-into-a-3node-with-ipv4-on-windows) + - [SSH into a 3Node with the Planetary Network on Windows](#ssh-into-a-3node-with-the-planetary-network-on-windows) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +In this Threefold Guide, we show how easy it is to deploy a full virtual machine (VM) and SSH into a 3Node with [OpenSSH](https://www.openssh.com/) on Linux, MAC and Windows with both an IPv4 and a Planetary Network connection. To connect to the 3Node with WireGuard, read [this documentation](./ssh_wireguard.md). + +To deploy different workloads, the SSH connection process should be very similar. + +If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/). + + +# Main Steps and Prerequisites + +Make sure to [read the introduction](../tfgrid3_getstarted.md#get-started---your-first-deployment) before going further. + +The main steps for the whole process are the following: + +* Create an SSH Key pair +* Deploy a 3Node + * Choose IPv4 or the Planetary Network +* SSH into the 3Node + * For the Planetary Network, download the Planetary Network Connector + + + +# Step-by-Step Process with OpenSSH + +## Linux + +### SSH into a 3Node with IPv4 on Linux + +Here are the steps to SSH into a 3Node with IPv4 on Linux. 
+ +* To create the SSH key pair, write in the terminal + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in the terminal + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select IPv4 in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the IPv4 address + * Open the terminal, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@IPv4_address + ``` + +You now have an SSH connection on Linux with IPv4. + + + +### SSH into a 3Node with the Planetary Network on Linux + +Here are the steps to SSH into a 3Node with the Planetary Network on Linux. 
+ +* To download and connect to the Threefold Planetary Network Connector + * Download the [.deb file](https://github.com/threefoldtech/planetary_network/releases/tag/v0.3-rc1-Linux) + * Right-click and select `Open with other application` + * Select `Software Install` + * Search the `Threefold Planetary Connector` and open it + * Disconnect your VPN if you have one + * In the connector, click `Connect` +* To create the SSH key pair, write in the terminal + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in the terminal + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select Planetary Network in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the Planetary Network address + * Open the terminal, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@planetary_network_address + ``` + +You now have an SSH connection on Linux with the Planetary Network. + + + +## MAC + +### SSH into a 3Node with IPv4 on MAC + +Here are the steps to SSH into a 3Node with IPv4 on MAC. 
+ +* To create the SSH key pair, in the terminal write + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in the terminal + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select IPv4 in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the IPv4 address + * Open the terminal, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@IPv4_address + ``` + +You now have an SSH connection on MAC with IPv4. + + + +### SSH into a 3Node with the Planetary Network on MAC + +Here are the steps to SSH into a 3Node with the Planetary Network on MAC. 
+ +* To download and connect to the Threefold Planetary Network Connector + * Download the [.dmg file](https://github.com/threefoldtech/planetary_network/releases/tag/v0.3-rc1-MacOS) + * Run the dmg installer + * Search the Threefold Planetary Connector in `Applications` and open it + * Disconnect your VPN if you have one + * In the connector, click `Connect` +* To create the SSH key pair, write in the terminal + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in the terminal + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select Planetary Network in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the Planetary Network address + * Open the terminal, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@planetary_network_address + ``` + +You now have an SSH connection on MAC with the Planetary Network. 
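The key-pair-and-connect flow described above is identical on Linux and MAC. As a minimal sketch, it can be condensed into a short terminal sequence; a scratch key path is used here so nothing in `~/.ssh` is overwritten, and `VM_address` is a placeholder for the IPv4 or Planetary Network address shown on the Dashboard:

```shell
# Generate an RSA key pair at a scratch path (no passphrase, quiet mode).
keyfile="$(mktemp -d)/id_rsa"
ssh-keygen -t rsa -f "$keyfile" -N "" -q
# Print the public key; this is the text you paste into the Dashboard.
cat "${keyfile}.pub"
# Once the VM is deployed, connect with the address from the Dashboard:
# ssh -i "$keyfile" root@VM_address
```

In the guides above the key is saved in the default location instead, so plain `ssh root@VM_address` picks it up without the `-i` flag.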
+ + + +## Windows + +### SSH into a 3Node with IPv4 on Windows + +* To download OpenSSH client and OpenSSH server + * Open the `Settings` and select `Apps` + * Click `Apps & Features` + * Click `Optional Features` + * Verify if OpenSSH Client and OpenSSH Server are there + * If not + * Click `Add a feature` + * Search OpenSSH + * Install OpenSSH Client and OpenSSH Server +* To create the SSH key pair, open `PowerShell` and write + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in `PowerShell` + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select IPv4 in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the IPv4 address + * Open `PowerShell`, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@IPv4_address + ``` + +You now have an SSH connection on Windows with IPv4. 
+ + + +### SSH into a 3Node with the Planetary Network on Windows + +* To download and connect to the Threefold Planetary Network Connector + * Download the [.msi file](https://github.com/threefoldtech/planetary_network/releases/tag/v0.3-rc1-Windows10) + * Search the `Threefold Planetary Connector` + * Right-click and select `Install` + * Disconnect your VPN if you have one + * Open the TF connector and click `Connect` +* To download OpenSSH client and OpenSSH server + * Open the `Settings` and select `Apps` + * Click `Apps & Features` + * Click `Optional Features` + * Verify if OpenSSH Client and OpenSSH Server are there + * If not + * Click `Add a feature` + * Search OpenSSH + * Install OpenSSH Client and OpenSSH Server +* To create the SSH key pair, open `PowerShell` and write + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in `PowerShell` + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select Planetary Network in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the Planetary Network address + * Open `PowerShell`, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@planetary_network_address + ``` + +You now have an SSH connection on Windows with the Planetary Network. + + + +# Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/). 
\ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_putty.md b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_putty.md new file mode 100644 index 0000000..020945a --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_putty.md @@ -0,0 +1,81 @@ +

SSH Remote Connection with PuTTY

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps and Prerequisites](#main-steps-and-prerequisites) +- [SSH with PuTTY on Windows](#ssh-with-putty-on-windows) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this Threefold Guide, we show how easy it is to deploy a full virtual machine (VM) and SSH into a 3Node on Windows with [PuTTY](https://www.putty.org/). + +To deploy different workloads, the SSH connection process should be very similar. + +If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/). + + + +## Main Steps and Prerequisites + +Make sure to [read the introduction](../tfgrid3_getstarted.md#get-started---your-first-deployment) before going further. + +The main steps for the whole process are the following: + +* Create an SSH Key pair +* Deploy a 3Node + * Choose IPv4 or the Planetary Network +* SSH into the 3Node + * For the Planetary Network, download the Planetary Network Connector + + + +## SSH with PuTTY on Windows + +Here are the main steps to SSH into a full VM using PuTTY on a Windows machine. 
+ +* Download [PuTTY](https://www.putty.org/) + * You can download the Windows Installer in .msi format [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) + * This will add both PuTTY and PuTTYgen to your computer + * Make sure that you have the latest version of PuTTY to avoid potential issues +* Generate an SSH key pair + * Open PuTTYgen + * In `Parameters`, you can set the type of key to `RSA` or to `EdDSA` + * Click on `Generate` + * Add a passphrase for your private key (optional) + * Take note of the generated SSH public key + * You will need to paste it to the Dashboard later + * Click `Save private key` +* To deploy a full VM + * Go to the following section of the [Threefold Dashboard](https://dashboard.grid.tf/): Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select IPv4 in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Take note of the IPv4 address +* Connect to the full VM with PuTTY + * Open PuTTY + * Go to the section `Session` + * Add the VM IPv4 address under `Host Name (or IP address)` + * Make sure `Connection type` is set to `SSH` + * Go to the section `Connection` -> `SSH` -> `Auth` -> `Credentials` + * Under `Private key file for authentication`, click on `Browse...` + * Look for the generated SSH private key in .ppk format and click `Open` + * In the main `PuTTY` window, click `Open` + * In the PuTTY terminal window, enter `root` as the login parameter + * Enter the passphrase for the private key if you set one + +You now have an SSH connection on Windows using PuTTY. + + + +## Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/). 
\ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_wireguard.md b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_wireguard.md new file mode 100644 index 0000000..260fab0 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_wireguard.md @@ -0,0 +1,129 @@ +

WireGuard Access

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy a Weblet with WireGuard Access](#deploy-a-weblet-with-wireguard-access) +- [Install WireGuard](#install-wireguard) +- [Set the WireGuard Configurations](#set-the-wireguard-configurations) + - [Linux and MAC](#linux-and-mac) + - [Windows](#windows) +- [Test the WireGuard Connection](#test-the-wireguard-connection) +- [SSH into the Deployment with WireGuard](#ssh-into-the-deployment-with-wireguard) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +In this Threefold Guide, we show how to set up [WireGuard](https://www.wireguard.com/) to access a 3Node deployment with an SSH connection. + +Note that WireGuard provides the connection to the 3Node deployment. It is up to you to decide which SSH client you want to use. This means that the steps to SSH into a 3Node deployment will be similar to the steps proposed in the guides for [OpenSSH](./ssh_openssh.md), [PuTTY](ssh_putty.md) and [WSL](./ssh_wsl.md). Please refer to [this documentation](./ssh_guide.md) if you have any questions concerning SSH clients. The main difference will be that we connect to the 3Node deployment using a WireGuard connection instead of an IPv4 or a Planetary Network connection. + + + +# Prerequisites + +Make sure to [read the introduction](../tfgrid3_getstarted.md#get-started---your-first-deployment) before going further. + +* SSH client of your choice + * [OpenSSH](./ssh_openssh.md) + * [PuTTY](ssh_putty.md) + * [WSL](./ssh_wsl.md) + + + +# Deploy a Weblet with WireGuard Access + +For this guide on WireGuard access, we deploy a [Full VM](../../../dashboard/solutions/fullVm.md). Note that the whole process is similar for other types of ThreeFold weblets on the Dashboard. 
+ +* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine +* Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb +* Select `Add WireGuard Access` in `Network` +* In `Node Selection`, click on `Load Nodes` +* Click `Deploy` + +Once the Full VM is deployed, a window named **Details** will appear. You will need to take note of the **WireGuard Config** to set the WireGuard configurations and the **WireGuard IP** to SSH into the deployment. + +> Note: At anytime, you can open the **Details** window by clicking on the button **Show Details** under **Actions** on the Dashboard weblet page. + + + +# Install WireGuard + +To install WireGuard, please refer to the official [WireGuard installation documentation](https://www.wireguard.com/install/). + + + +# Set the WireGuard Configurations + +When it comes to setting the WireGuard configurations, the steps are similar for Linux and MAC, but differ slightly for Windows. For Linux and MAC, we will be using the CLI. For Windows, we will be using the WireGuard GUI app. + +## Linux and MAC + +To set the WireGuard connection on Linux or MAC, create a WireGuard configuration file and run WireGuard via the command line: + +* Copy the content **WireGuard Config** from the Dashboard **Details** window +* Paste the content to a file with the extension `.conf` (e.g. **wg.conf**) in the directory `/etc/wireguard` + * ``` + sudo nano /etc/wireguard/wg.conf + ``` +* Start WireGuard with the command **wg-quick** and, as a parameter, pass the configuration file without the extension (e.g. 
*wg.conf -> wg*) + * ``` + wg-quick up wg + ``` + * Note that you can also specify a config file by path, stored in any location + * ``` + wg-quick up /etc/wireguard/wg.conf + ``` +* If you want to stop the WireGuard service, you can write the following in the terminal + * ``` + wg-quick down wg + ``` + +> Note: If it doesn't work and you already established a WireGuard connection with the same file, write in the terminal `wg-quick down wg`, then `wg-quick up wg` to reset the connection with new configurations. + +## Windows + +To set the WireGuard connection on Windows, add and activate a tunnel with the WireGuard app: + +* Open the WireGuard GUI app +* Click on **Add Tunnel** and then **Add empty tunnel** +* Choose a name for the tunnel +* Erase the content of the main window and paste the content **WireGuard Config** from the Dashboard **Details** window +* Click **Save** and then click on **Activate**. + + + + +# Test the WireGuard Connection + +As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VM to make sure the WireGuard connection is properly established. Make sure to replace `VM_WireGuard_IP` with the proper WireGuard IP address: + +* Ping the deployment + * ``` + ping VM_WireGuard_IP + ``` + + + +# SSH into the Deployment with WireGuard + +To SSH into the deployment with WireGuard, use the **WireGuard IP** shown in the Dashboard **Details** window. + +* SSH into the deployment + * ``` + ssh root@VM_WireGuard_IP + ``` + +You now have access to the deployment over a WireGuard SSH connection. + + + +# Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/) or by reaching out to the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. 
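For reference, the **WireGuard Config** copied from the Dashboard **Details** window follows the standard WireGuard configuration format. The fragment below only illustrates its general shape; every key, address and endpoint shown here is a placeholder, not a working value, so always paste the config from your own deployment:

```
[Interface]
Address = 100.64.0.2/32
PrivateKey = <your-private-key>

[Peer]
PublicKey = <node-public-key>
AllowedIPs = 10.20.2.0/24, 100.64.0.0/16
PersistentKeepalive = 25
Endpoint = 203.0.113.10:3011
```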
\ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_wsl.md b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_wsl.md new file mode 100644 index 0000000..793f877 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/ssh_guide/ssh_wsl.md @@ -0,0 +1,89 @@ +

SSH Remote Connection with WSL

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [SSH Key Generation](#ssh-key-generation) +- [Connect to Remote Host with SSH](#connect-to-remote-host-with-ssh) +- [Enable Port 22 in Windows Firewall](#enable-port-22-in-windows-firewall) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this Threefold Guide, we show how easy it is to SSH into a 3Node on Windows with [Windows Subsystem for Linux (WSL)](https://ubuntu.com/wsl). + +If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/). + +## SSH Key Generation + +Make sure SSH is installed by entering the following command at the command prompt: + +```sh +sudo apt install openssh-client +``` + +The key generation process is identical to the process on a native Linux or Ubuntu installation. +With SSH installed, run the SSH key generator by typing the following: + +```sh +ssh-keygen -t rsa +``` + +Then choose the key name and passphrase or simply press return twice to accept the default values (`key name = id_rsa` and `no passphrase`). +When the process has finished, the private key and the public key can be found in the `~/.ssh` directory accessible from the Ubuntu terminal. +You can also access the key from the Windows file manager in the following folder: + +```sh +\\wsl$\Ubuntu\home\<username>\.ssh\ +``` + +Your private key will be generated using the default name (`id_rsa`) or the filename you specified. +The corresponding public key will be generated using the same filename but with a `.pub` extension added. 
+If you open the public key in a text editor it should contain something similar to this: + +``` +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNqqi1mHLnryb1FdbePrSZQdmXRZxGZbo0gTfglysq6KMNUNY2VhzmYN9JYW39yNtjhVxqfW6ewc+eHiL+IRRM1P5ecDAaL3V0ou6ecSurU+t9DR4114mzNJ5SqNxMgiJzbXdhR+j55GjfXdk0FyzxM3a5qpVcGZEXiAzGzhHytUV51+YGnuLGaZ37nebh3UlYC+KJev4MYIVww0tWmY+9GniRSQlgLLUQZ+FcBUjaqhwqVqsHe4F/woW1IHe7mfm63GXyBavVc+llrEzRbMO111MogZUcoWDI9w7UIm8ZOTnhJsk7jhJzG2GpSXZHmly/a/buFaaFnmfZ4MYPkgJD username@example.com +``` + +Copy the entire text; you can then paste it as your public SSH key when connecting your wallet before deploying a VM. + +## Connect to Remote Host with SSH + +With the SSH key you should be able to SSH to your account on the remote system from the computer that has your private key using the following command: + +```sh +ssh username@remote_IP_host +``` + +If the private key you're using does not have the default name, or is not stored in the default path (not `~/.ssh/id_rsa`), you must explicitly invoke it! +On the SSH command line add the `-i` flag and the path to your private key. +For example, to invoke the private key `my_key`, stored in the `~/.ssh/keys` directory, when connecting to your account on a remote host, enter: + +```sh +ssh -i ~/.ssh/keys/my_key username@remote_IP_host +``` + +## Enable Port 22 in Windows Firewall + +Port 22 is used for Secure Shell (SSH) communication and allows remote administration access to the VM. +If it is blocked, it can be unblocked as follows: + +- open Windows Firewall Advanced Settings +- click on `New Rule…` under `Inbound Rules` to create a new firewall rule +- under `Rule Type` select `Port` +- under `Protocol and Ports` select `TCP`, `Specific local Ports` and enter `22` +- under `Action` select `Allow the connection` +- under `Profile` make sure to only select `Domain` and `Private` + +NB: do not select `Public` unless you absolutely require a direct connection from the outside world. 
+This is not recommended, especially for portable devices (laptops, tablets) that connect to random Wi-Fi hotspots. + +- under `Name` + - Name: `SSH Server` + - Description: `SSH Server` + +## Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/). \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/tfgrid3_getstarted.md b/collections/manual/documentation/system_administrators/getstarted/tfgrid3_getstarted.md new file mode 100644 index 0000000..dcfd756 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/tfgrid3_getstarted.md @@ -0,0 +1,32 @@ +# TFGrid Manual - Get Started + +## Get Started - Your First Deployment + +It's easy to get started on the TFGrid and deploy applications. + +- [Create a TFChain Account](../../dashboard/wallet_connector.md) +- [Get TFT](../../threefold_token/buy_sell_tft/buy_sell_tft.md) +- [Bridge TFT to TFChain](../../threefold_token/tft_bridges/tft_bridges.md) +- [Deploy an Application](../../dashboard/deploy/deploy.md) +- [SSH Remote Connection](./ssh_guide/ssh_guide.md) + - [SSH with OpenSSH](./ssh_guide/ssh_openssh.md) + - [SSH with PuTTY](./ssh_guide/ssh_putty.md) + - [SSH with WSL](./ssh_guide/ssh_wsl.md) + - [SSH and WireGuard](./ssh_guide/ssh_wireguard.md) + +## Grid Platforms + +- [TF Dashboard](../../dashboard/dashboard.md) +- [TF Flist Hub](../../developers/flist/flist_hub/zos_hub.md) + +## TFGrid Services and Resources + +- [TFGrid Services](./tfgrid_services/tf_grid_services_readme.md) + +## Advanced Deployment Techniques + +- [Advanced Topics](../advanced/advanced.md) + +*** + +If you have any questions, feel free to ask for help on the [Threefold Forum](https://forum.threefold.io/c/threefold-grid-utilization/support/). 
\ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/tfgrid3_network_concepts.md b/collections/manual/documentation/system_administrators/getstarted/tfgrid3_network_concepts.md new file mode 100644 index 0000000..b3cb580 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/tfgrid3_network_concepts.md @@ -0,0 +1,23 @@ +![ ](./getstarted/img/network_concepts_.jpg) + +# TFGrid Network Concepts + +## Peer 2 Peer Private Network + +![ ](./getstarted/img/peer2peer_net_.jpg) + +All Zmachines (or Kubernetes nodes) are connected to each other over private networks. + +When you use IaC tools or our clients, you can manually specify the network name and the IP network address to be used. + +## The Planetary Network + +## Web Gateway (experts only) + +![ ](./getstarted/img/webgw_.jpg) + +## More Info + +- [Planetary Network](https://library.threefold.me/info/threefold#/technology/threefold__planetary_network) +- [Web Gateway](https://library.threefold.me/info/threefold#/technology/threefold__webgw) +- [Z-Net = secure network between Z-Machines](https://library.threefold.me/info/threefold#/technology/threefold__znet) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/getstarted/tfgrid3_storage_concepts.md b/collections/manual/documentation/system_administrators/getstarted/tfgrid3_storage_concepts.md new file mode 100644 index 0000000..22ce7bd --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/tfgrid3_storage_concepts.md @@ -0,0 +1,8 @@ +# TFGrid Storage Concepts + +![ ](./getstarted/img/stfgrid3_storage_concepts_.jpg) + + +## ThreeFold Flist HUB + +See https://hub.grid.tf/ diff --git a/collections/manual/documentation/system_administrators/getstarted/tfgrid3_what_to_know.md b/collections/manual/documentation/system_administrators/getstarted/tfgrid3_what_to_know.md new file mode 100644 index 0000000..300caa1 --- /dev/null +++ 
b/collections/manual/documentation/system_administrators/getstarted/tfgrid3_what_to_know.md @@ -0,0 +1,58 @@ +# TFGrid 3.0 What's There to Know + +- [Storage Concepts](./tfgrid3_storage_concepts.md) +- [Network Concepts](./tfgrid3_network_concepts.md) + +## Networking + +### Private network (ZNET) + +For a project that needs a private network, we need a network that can span across multiple nodes. This can be achieved with the network workload reservation [Network](/getstarted/tfgrid3_network_concepts.md) + +### Planetary network + +For a project that wants its workloads directly connected to the Planetary Network, enable the planetary option when deploying a VM or Kubernetes. Check [Planetary network](https://library.threefold.me/info/threefold#/technology/threefold__planetary_network) for more info + +### Public IPs +When you want to have a public IP assigned to your workload, you need to reserve the number of IPs along with your contract, and then you can attach them to the VM workload + +## Exposing the workloads to the public + +Typically, if you reserved a public IP you can do that directly and create a domain referencing your public IP. 
Threefold also provides [Webgateway technology](https://library.threefold.me/info/threefold#/technology/threefold__webgw), a very cost-efficient technology to help with exposing your workloads + +### How it works +Basically you create a `domain reservation` that can be +- `prefix` based e.g. `mywebsite` that will internally translate to `mywebsite.ghent01.devnet.grid.tf` +- `full domain` e.g. `mysuperwebsite.com` (this needs to point to the gateway IP) + +And then you need to specify the Yggdrasil IP of your backend service, so the gateway knows where to redirect the traffic + +#### TLS +As a user, you have two options: +- let the gateway terminate the TLS traffic for you and communicate with your workloads directly +- let the gateway forward the traffic to your backend and you do the termination yourself (the recommended way if you are doing any sensitive business) + + +## Compute + +The VM workload is the only workload that you will need to run a full-blown VM or an [flist-based](/flist_hub/flist_hub.md) container + +### How can I create an flist? + +The easiest way is by converting existing Docker images using [the hub](https://hub.grid.tf/docker-convert) + + +### How does an flist-based container run in a VM? +ZOS injects its own generic kernel while booting the container based on the content of the filesystem + +### Kubernetes +We leverage the VM primitive to allow provisioning Kubernetes clusters across multiple nodes based on the k3os flist. 
+ + +## Exploring the capacity +You can easily check the available capacity using the [explorer-ui](dashboard/explorer/explorer_home.md). To plan your deployment, you can also use these [example queries](dashboard/explorer/explorer_graphql_examples.md) + +## Getting started + +Please check [Getting started](/getstarted/tfgrid3_getstarted.md) to get the necessary software and configurations + diff --git a/collections/manual/documentation/system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md b/collections/manual/documentation/system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md new file mode 100644 index 0000000..0e732c3 --- /dev/null +++ b/collections/manual/documentation/system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md @@ -0,0 +1,95 @@ +

ThreeFold Grid Services

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Devnet](#devnet) +- [QAnet](#qanet) +- [Testnet](#testnet) +- [Mainnet](#mainnet) + - [Supported Planetary Network Nodes](#supported-planetary-network-nodes) + +*** + +## Introduction + +In this article, we have aggregated a list of all the services running on the ThreeFold Grid 3 infrastructure for your convenience. + +> Note: the usage of `dev` indicates a devnet service, +> and the usage of `test` indicates a testnet service. + +## Devnet + +- [TFChain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.dev.grid.tf#/explorer) `wss://tfchain.dev.grid.tf` +- [GraphQL](https://graphql.dev.grid.tf/graphql) +- [Activation Service](https://activation.dev.grid.tf/activation/) +- [TFGrid Proxy](https://gridproxy.dev.grid.tf) +- [Grid Dashboard](https://dashboard.dev.grid.tf) + + +## QAnet + +- [TFChain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.qa.grid.tf#/explorer) `wss://tfchain.qa.grid.tf` +- [GraphQL](https://graphql.qa.grid.tf/graphql) +- [Activation Service](https://activation.qa.grid.tf/activation/) +- [TFGrid Proxy](https://gridproxy.qa.grid.tf) +- [Grid Dashboard](https://dashboard.qa.grid.tf) + +## Testnet + +- [TFChain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.test.grid.tf#/explorer) `wss://tfchain.test.grid.tf` +- [GraphQL](https://graphql.test.grid.tf/graphql) +- [Activation Service](https://activation.test.grid.tf/activation/) +- [TFGrid Proxy](https://gridproxy.test.grid.tf) +- [Grid Dashboard](https://dashboard.test.grid.tf) + +## Mainnet + +- [TFChain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.grid.tf#/explorer) `wss://tfchain.grid.tf` +- [GraphQL](https://graphql.grid.tf/graphql) +- [Activation Service](https://activation.grid.tf/activation/) +- [TFChain-BSC Bridge](https://bridge.bsc.threefold.io/) +- [TFChain-Ethereum Bridge](https://bridge.eth.threefold.io/) +- [TFGrid Proxy](https://gridproxy.grid.tf) +- [Grid Dashboard](https://dashboard.grid.tf) + +### Supported 
Planetary Network Nodes + +``` + Peers: + [ + # Threefold Lochrist + tcp://gent01.grid.tf:9943 + tcp://gent02.grid.tf:9943 + tcp://gent03.grid.tf:9943 + tcp://gent04.grid.tf:9943 + tcp://gent01.test.grid.tf:9943 + tcp://gent02.test.grid.tf:9943 + tcp://gent01.dev.grid.tf:9943 + tcp://gent02.dev.grid.tf:9943 + # GreenEdge + tcp://gw291.vienna1.greenedgecloud.com:9943 + tcp://gw293.vienna1.greenedgecloud.com:9943 + tcp://gw294.vienna1.greenedgecloud.com:9943 + tcp://gw297.vienna1.greenedgecloud.com:9943 + tcp://gw298.vienna1.greenedgecloud.com:9943 + tcp://gw299.vienna2.greenedgecloud.com:9943 + tcp://gw300.vienna2.greenedgecloud.com:9943 + tcp://gw304.vienna2.greenedgecloud.com:9943 + tcp://gw306.vienna2.greenedgecloud.com:9943 + tcp://gw307.vienna2.greenedgecloud.com:9943 + tcp://gw309.vienna2.greenedgecloud.com:9943 + tcp://gw313.vienna2.greenedgecloud.com:9943 + tcp://gw324.salzburg1.greenedgecloud.com:9943 + tcp://gw326.salzburg1.greenedgecloud.com:9943 + tcp://gw327.salzburg1.greenedgecloud.com:9943 + tcp://gw328.salzburg1.greenedgecloud.com:9943 + tcp://gw330.salzburg1.greenedgecloud.com:9943 + tcp://gw331.salzburg1.greenedgecloud.com:9943 + tcp://gw333.salzburg1.greenedgecloud.com:9943 + tcp://gw422.vienna2.greenedgecloud.com:9943 + tcp://gw423.vienna2.greenedgecloud.com:9943 + tcp://gw424.vienna2.greenedgecloud.com:9943 + tcp://gw425.vienna2.greenedgecloud.com:9943 + ] +``` diff --git a/collections/manual/documentation/system_administrators/gpu/gpu.md b/collections/manual/documentation/system_administrators/gpu/gpu.md new file mode 100644 index 0000000..0d85112 --- /dev/null +++ b/collections/manual/documentation/system_administrators/gpu/gpu.md @@ -0,0 +1,123 @@ +

<h1>GPU Support</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Filter and Reserve a GPU Node](#filter-and-reserve-a-gpu-node)
+  - [Filter Nodes](#filter-nodes)
+  - [Reserve a Node](#reserve-a-node)
+- [Deploy a VM with GPU](#deploy-a-vm-with-gpu)
+- [Install the GPU Driver](#install-the-gpu-driver)
+  - [AMD Driver](#amd-driver)
+  - [Nvidia Driver](#nvidia-driver)
+  - [With an AI Model](#with-an-ai-model)
+- [Troubleshooting](#troubleshooting)
+- [GPU Support Links](#gpu-support-links)
+
+***
+
+## Introduction
+
+This section covers the essential information to deploy a node with a GPU. We also provide links to other parts of the manual covering GPU support.
+
+To use a GPU on the TFGrid, users need to rent a dedicated node. Once they have rented a dedicated node equipped with a GPU, users can deploy workloads on their dedicated GPU node.
+
+
+## Filter and Reserve a GPU Node
+
+You can filter and reserve a GPU node using the [Dedicated Nodes section](../../dashboard/deploy/dedicated_machines.md) of the **ThreeFold Dashboard**.
+
+### Filter Nodes
+
+* Filter nodes using the vendor name
+  * In **Filters**, select **GPU's vendor name**
+  * A new window will appear below named **GPU'S VENDOR NAME**
+  * Write the name of the desired vendor (e.g. **nvidia**, **amd**)
+
+![image](./img/gpu_8.png)
+
+* Filter nodes using the device name
+  * In **Filters**, select **GPU's device name**
+  * A new window will appear below named **GPU'S DEVICE NAME**
+  * Write the name of the desired device (e.g. **GT218**)
+
+![image](./img/gpu_9.png)
+
+### Reserve a Node
+
+When you have decided which node to reserve, click on **Reserve** under the column **Actions**. Once you've rented a dedicated node that has a GPU, you can deploy GPU workloads.
+
+![image](./img/gpu_2.png)
+
+
+## Deploy a VM with GPU
+
+Now that you've reserved a dedicated GPU node, it's time to deploy a VM to make use of the GPU! There are many ways to proceed. You can use the [Dashboard](../../dashboard/solutions/fullVm.md), [Go](../../developers/go/grid3_go_gpu.md), [Terraform](../terraform/terraform_gpu_support.md), etc.
+
+For example, deploying a VM with GPU on the Dashboard is easy. Simply set the GPU option and make sure to select your dedicated node, as shown here:
+![image](./img/gpu_3.png)
+
+## Install the GPU Driver
+
+Once you've deployed a VM with GPU, you want to SSH into the VM and install the GPU driver.
+
+- SSH into the VM and update your system
+```bash
+dpkg --add-architecture i386
+apt-get update
+apt-get dist-upgrade
+reboot
+```
+- Find your driver installer
+  - [AMD driver](https://www.amd.com/en/support/linux-drivers)
+  - [Nvidia driver](https://www.nvidia.com/download/index.aspx)
+
+- You can see the node card details on the ThreeFold Dashboard or by using the following command lines:
+```bash
+lspci | grep VGA
+lshw -c video
+```
+
+### AMD Driver
+
+- Download the GPU driver using `wget`
+  - For example: `wget https://repo.radeon.com/amdgpu-install/23.30.2/ubuntu/focal/amdgpu-install_5.7.50702-1_all.deb`
+- Install the GPU driver using `apt-get`. Make sure to update `<version>` to the version you downloaded.
+```bash
+apt-get install ./amdgpu-install_<version>.deb
+amdgpu-install --usecase="dkms,graphics,opencl,hip,rocm,rocmdev,opencl,hiplibsdk,mllib,mlsdk" --opencl=rocr --vulkan=pro --opengl=mesa
+```
+- To verify that the GPU is properly installed, use the following command lines:
+```bash
+rocm-smi
+rocminfo
+```
+- You should see something like this:
+![image](./img/gpu_4.png)
+![image](./img/gpu_5.png)
+
+### Nvidia Driver
+
+For Nvidia, you can follow [these steps](https://linuxize.com/post/how-to-nvidia-drivers-on-ubuntu-20-04/#installing-the-nvidia-drivers-using-the-command-line).
+- To verify that the GPU is properly installed, you can use `nvidia-smi`. You should see something like this:
+
+  ![image](./img/gpu_6.png)
+
+### With an AI Model
+
+You can also follow this [AI model](https://github.com/invoke-ai/InvokeAI#getting-started-with-invokeai) guide to install your driver.
+
+## Troubleshooting
+
+Here are some useful links to troubleshoot your GPU installation.
+
+- [Steps to install the driver](https://amdgpu-install.readthedocs.io/en/latest/index.html)
+- Changing kernel version
+  - [Link 1](https://linux.how2shout.com/how-to-change-default-kernel-in-ubuntu-22-04-20-04-lts/)
+  - [Link 2](https://gist.github.com/chaiyujin/c08e59752c3e238ff3b1a5098322b363)
+
+> Note: It is recommended to use Ubuntu 22.04.2 LTS (GNU/Linux 5.18.13-051813-generic x86_64).
+
+## GPU Support Links
+
+You can consult the [GPU Table of Contents](./gpu_toc.md) to see all available GPU support links on the ThreeFold Manual.
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/gpu/gpu_toc.md b/collections/manual/documentation/system_administrators/gpu/gpu_toc.md
new file mode 100644
index 0000000..073c79e
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/gpu/gpu_toc.md
@@ -0,0 +1,19 @@
+

<h1>GPU on the TFGrid</h1>

+ +The ThreeFold Manual covers many ways to use a GPU node on the TFGrid. A good place to start would be the **GPU Introduction** section. + +Feel free to explore the different possibilities! + +

<h2>Table of Contents</h2>

+ +- [GPU Support](./gpu.md) +- [Node Finder and GPU](../../dashboard/deploy/node_finder.md#gpu-support) +- [Javascript Client and GPU](../../developers/javascript/grid3_javascript_gpu_support.md) +- [GPU and Go](../../developers/go/grid3_go_gpu.md) + - [GPU Support](../../developers/go/grid3_go_gpu_support.md) + - [Deploy a VM with GPU](../../developers/go/grid3_go_vm_with_gpu.md) +- [TFCMD and GPU](../../developers/tfcmd/tfcmd_vm.md#deploy-a-vm-with-gpu) +- [Terraform and GPU](../terraform/terraform_gpu_support.md) +- [Full VM and GPU](../../dashboard/solutions/fullVm.md) +- [Zero-OS API and GPU](../../developers/internals/zos/manual/api.md#gpus) +- [GPU Farming](../../farmers/3node_building/gpu_farming.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_1.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_1.png new file mode 100644 index 0000000..226811d Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_1.png differ diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_2.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_2.png new file mode 100644 index 0000000..c755a8e Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_2.png differ diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_3.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_3.png new file mode 100644 index 0000000..865e543 Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_3.png differ diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_4.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_4.png new file mode 100644 index 0000000..de931fd Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_4.png 
differ diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_5.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_5.png new file mode 100644 index 0000000..2738475 Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_5.png differ diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_6.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_6.png new file mode 100644 index 0000000..9bc04f8 Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_6.png differ diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_7.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_7.png new file mode 100644 index 0000000..3a6fecb Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_7.png differ diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_8.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_8.png new file mode 100644 index 0000000..483c514 Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_8.png differ diff --git a/collections/manual/documentation/system_administrators/gpu/img/gpu_9.png b/collections/manual/documentation/system_administrators/gpu/img/gpu_9.png new file mode 100644 index 0000000..3cdd3ee Binary files /dev/null and b/collections/manual/documentation/system_administrators/gpu/img/gpu_9.png differ diff --git a/collections/manual/documentation/system_administrators/mycelium/api_yaml.md b/collections/manual/documentation/system_administrators/mycelium/api_yaml.md new file mode 100644 index 0000000..db405ba --- /dev/null +++ b/collections/manual/documentation/system_administrators/mycelium/api_yaml.md @@ -0,0 +1,431 @@ +

<h1>API</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [File Example](#file-example)
+
+***
+
+## Introduction
+
+Below is an example of the Mycelium management API specification as a YAML file.
+
+
+## File Example
+
+```
+
+openapi: 3.0.2
+info:
+  version: '1.0.0'
+
+  title: Mycelium management
+  contact:
+    url: 'https://github.com/threefoldtech/mycelium'
+  license:
+    name: Apache 2.0
+    url: 'https://github.com/threefoldtech/mycelium/blob/master/LICENSE'
+
+  description: |
+    This is the specification of the **mycelium** management API. It is used to perform
+    administrative tasks on the system.
+
+externalDocs:
+  description: For full documentation, check out the mycelium github repo.
+  url: 'https://github.com/threefoldtech/mycelium'
+
+tags:
+  - name: Admin
+    description: Administrative operations
+  - name: Peer
+    description: Operations related to peer management
+  - name: Message
+    description: Operations on the embedded message subsystem
+
+servers:
+  - url: 'http://localhost:8989'
+
+paths:
+  '/api/v1/peers':
+    get:
+      tags:
+        - Admin
+        - Peer
+      summary: List known peers
+      description: |
+        List all peers known in the system, and info about their connection.
+        This includes the endpoint, how we know about the peer, the connection state, and if the connection is alive the amount
+        of bytes we've sent to and received from the peer.
+      operationId: getPeers
+      responses:
+        '200':
+          description: Success
+          content:
+            application/json:
+              schema:
+                type: array
+                items:
+                  $ref: '#/components/schemas/PeerStats'
+
+  '/api/v1/messages':
+    get:
+      tags:
+        - Message
+      summary: Get a message from the inbound message queue
+      description: |
+        Get a message from the inbound message queue. By default, the message is removed from the queue and won't be shown again.
+        If the peek query parameter is set to true, the message will be peeked, and the next call to this endpoint will show the same message.
+        This method returns immediately by default: a message is returned if one is ready, and if there isn't one, nothing is returned. If the timeout
+        query parameter is set, this call won't return for the given amount of seconds, unless a message is received
+      operationId: popMessage
+      parameters:
+        - in: query
+          name: peek
+          required: false
+          schema:
+            type: boolean
+          description: Whether to peek the message or not. If this is true, the message won't be removed from the inbound queue when it is read
+          example: true
+        - in: query
+          name: timeout
+          required: false
+          schema:
+            type: integer
+            format: int64
+            minimum: 0
+          description: |
+            Amount of seconds to wait for a message to arrive if one is not available. Setting this to 0 is valid and will return
+            a message if present, or return immediately if there isn't one
+          example: 60
+        - in: query
+          name: topic
+          required: false
+          schema:
+            type: string
+            format: byte
+            minLength: 0
+            maxLength: 340
+          description: |
+            Optional filter for loading messages. If set, the system checks if the message has the given string at the start. This way
+            a topic can be encoded.
+          example: example.topic
+      responses:
+        '200':
+          description: Message retrieved
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/InboundMessage'
+        '204':
+          description: No message ready
+    post:
+      tags:
+        - Message
+      summary: Submit a new message to the system.
+      description: |
+        Push a new message to the system's outbound message queue. The system will continuously attempt to send the message until
+        it is either fully transmitted, or the send deadline expires.
+      operationId: pushMessage
+      parameters:
+        - in: query
+          name: reply_timeout
+          required: false
+          schema:
+            type: integer
+            format: int64
+            minimum: 0
+          description: |
+            Amount of seconds to wait for a reply to this message to come in. If not set, the system won't wait for a reply and return
+            the ID of the message, which can be used later.
If set, the system will wait for at most the given amount of seconds for a reply
+            to come in. If a reply arrives, it is returned to the client. If not, the message ID is returned for later use.
+          example: 120
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/PushMessageBody'
+      responses:
+        '200':
+          description: We received a reply within the specified timeout
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/InboundMessage'
+
+        '201':
+          description: Message pushed successfully, and not waiting for a reply
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/PushMessageResponseId'
+        '408':
+          description: The system timed out waiting for a reply to the message
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/PushMessageResponseId'
+
+  '/api/v1/messages/reply/{id}':
+    post:
+      tags:
+        - Message
+      summary: Reply to a message with the given ID
+      description: |
+        Submits a reply message to the system, where ID is the id of a previously received message. If the sender is waiting
+        for a reply, it will bypass the queue of open messages.
+      operationId: pushMessageReply
+      parameters:
+        - in: path
+          name: id
+          required: true
+          schema:
+            type: string
+            format: hex
+            minLength: 16
+            maxLength: 16
+          example: abcdef0123456789
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/PushMessageBody'
+      responses:
+        '204':
+          description: Successfully submitted the reply
+
+  '/api/v1/messages/status/{id}':
+    get:
+      tags:
+        - Message
+      summary: Get the status of an outbound message
+      description: |
+        Get information about the current state of an outbound message. This can be used to check the transmission
+        state, size and destination of the message.
+ operationId: getMessageInfo + parameters: + - in: path + name: id + required: true + schema: + type: string + format: hex + minLength: 16 + maxLength: 16 + example: abcdef0123456789 + responses: + '200': + description: Success + content: + application/json: + schema: + $ref: '#/components/schemas/MessageStatusResponse' + '404': + description: Message not found + + +components: + schemas: + Endpoint: + description: Identification to connect to a peer + type: object + properties: + proto: + description: Protocol used + type: string + enum: + - 'tcp' + - 'quic' + example: tcp + socketAddr: + description: The socket address used + type: string + example: 192.0.2.6:9651 + + PeerStats: + description: Info about a peer + type: object + properties: + endpoint: + $ref: '#/components/schemas/Endpoint' + type: + description: How we know about this peer + type: string + enum: + - 'static' + - 'inbound' + - 'linkLocalDiscovery' + example: static + connectionState: + description: The current state of the connection to the peer + type: string + enum: + - 'alive' + - 'connecting' + - 'dead' + example: alive + connectionTxBytes: + description: The amount of bytes transmitted on the current connection + type: integer + format: int64 + minimum: 0 + example: 464531564 + connectionRxBytes: + description: The amount of bytes received on the current connection + type: integer + format: int64 + minimum: 0 + example: 64645089 + + InboundMessage: + description: A message received by the system + type: object + properties: + id: + description: Id of the message, hex encoded + type: string + format: hex + minLength: 16 + maxLength: 16 + example: 0123456789abcdef + srcIp: + description: Sender overlay IP address + type: string + format: ipv6 + example: 249:abcd:0123:defa::1 + srcPk: + description: Sender public key, hex encoded + type: string + format: hex + minLength: 64 + maxLength: 64 + example: fedbca9876543210fedbca9876543210fedbca9876543210fedbca9876543210 + dstIp: + description: 
Receiver overlay IP address
+          type: string
+          format: ipv6
+          example: 34f:b680:ba6e:7ced:355f:346f:d97b:eecb
+        dstPk:
+          description: Receiver public key, hex encoded. This is the public key of the system
+          type: string
+          format: hex
+          minLength: 64
+          maxLength: 64
+          example: 02468ace13579bdf02468ace13579bdf02468ace13579bdf02468ace13579bdf
+        topic:
+          description: An optional message topic
+          type: string
+          format: byte
+          minLength: 0
+          maxLength: 340
+          example: hpV+
+        payload:
+          description: The message payload, encoded in standard alphabet base64
+          type: string
+          format: byte
+          example: xuV+
+
+    PushMessageBody:
+      description: A message to send to a given receiver
+      type: object
+      properties:
+        dst:
+          $ref: '#/components/schemas/MessageDestination'
+        topic:
+          description: An optional message topic
+          type: string
+          format: byte
+          minLength: 0
+          maxLength: 340
+          example: hpV+
+        payload:
+          description: The message to send, base64 encoded
+          type: string
+          format: byte
+          example: xuV+
+
+    MessageDestination:
+      oneOf:
+        - description: An IP in the subnet of the receiver node
+          type: object
+          properties:
+            ip:
+              description: The target IP of the message
+              format: ipv6
+              example: 249:abcd:0123:defa::1
+        - description: The hex encoded public key of the receiver node
+          type: object
+          properties:
+            pk:
+              description: The hex encoded public key of the target node
+              type: string
+              minLength: 64
+              maxLength: 64
+              example: bb39b4a3a4efd70f3e05e37887677e02efbda14681d0acd3882bc0f754792c32
+
+    PushMessageResponseId:
+      description: The ID generated for a message after pushing it to the system
+      type: object
+      properties:
+        id:
+          description: Id of the message, hex encoded
+          type: string
+          format: hex
+          minLength: 16
+          maxLength: 16
+          example: 0123456789abcdef
+
+    MessageStatusResponse:
+      description: Information about an outbound message
+      type: object
+      properties:
+        dst:
+          description: IP address of the receiving node
+          type: string
+          format: ipv6
+          example: 249:abcd:0123:defa::1
+        state:
+          $ref: '#/components/schemas/TransmissionState'
+        created:
+          description: Unix timestamp of when this message was created
+          type: integer
+          format: int64
+          example: 1649512789
+        deadline:
+          description: Unix timestamp of when this message will expire. If the message is not received before this, the system will give up
+          type: integer
+          format: int64
+          example: 1649513089
+        msgLen:
+          description: Length of the message in bytes
+          type: integer
+          minimum: 0
+          example: 27
+
+    TransmissionState:
+      description: The state of an outbound message in its lifetime
+      oneOf:
+        - type: string
+          enum: ['pending', 'received', 'read', 'aborted']
+          example: 'received'
+        - type: object
+          properties:
+            sending:
+              type: object
+              properties:
+                pending:
+                  type: integer
+                  minimum: 0
+                  example: 5
+                sent:
+                  type: integer
+                  minimum: 0
+                  example: 17
+                acked:
+                  type: integer
+                  minimum: 0
+                  example: 3
+          example: 'received'
+
+
+```
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/mycelium/data_packet.md b/collections/manual/documentation/system_administrators/mycelium/data_packet.md
new file mode 100644
index 0000000..529a4b5
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/mycelium/data_packet.md
@@ -0,0 +1,66 @@
+

<h1>Data Packet</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Packet Header](#packet-header)
+- [Body](#body)
+
+***
+
+## Introduction
+
+
+A `data packet` contains user-specified data. This can be any data, as long as the sender and receiver
+both understand what it is, without further help. Intermediate hops, which route the data, have sufficient
+information in the header to know where to forward the packet. In practice, the data will be encrypted
+to avoid eavesdropping by intermediate hops.
+
+## Packet Header
+
+The packet header has a fixed size of 36 bytes, with the following layout:
+
+```
+ 0                   1                   2                   3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+|Reserved |              Length                  |   Hop Limit   |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+|                                                               |
++                                                               +
+|                                                               |
++                           Source IP                           +
+|                                                               |
++                                                               +
+|                                                               |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+|                                                               |
++                                                               +
+|                                                               |
++                        Destination IP                         +
+|                                                               |
++                                                               +
+|                                                               |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+```
+
+The first 5 bits are reserved and must be set to 0.
+
+The next 19 bits are used to specify the length of the body. It is expected that
+the actual length of a packet does not exceed 256K right now, so the 19th bit is
+only needed because we have to account for some overhead related to the encryption.
+
+The next byte is the hop limit. Every node decrements this value by 1 before sending
+the packet. If a node decrements this value to 0, the packet is discarded.
+
+The next 16 bytes contain the source IP address.
+
+The final 16 bytes contain the destination IP address.
+
+## Body
+
+Following the header is a variable-length body. The protocol imposes no requirements on the
+body's content; the only requirement is that the body is as long as specified in the header length
+field. It is technically legal according to the protocol to transmit a data packet without a body,
+i.e. a body length of 0. 
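The header layout described above can be decoded with a short sketch. This is an illustration only, not part of any official implementation; the function name `parse_header` and the return shape are our own.

```python
import ipaddress
import struct

HEADER_LEN = 36  # the fixed header size in bytes

def parse_header(packet: bytes):
    """Decode the fixed 36-byte data packet header."""
    if len(packet) < HEADER_LEN:
        raise ValueError("packet shorter than the fixed header")
    # First 32 bits: 5 reserved bits | 19-bit body length | 8-bit hop limit.
    first_word = struct.unpack(">I", packet[:4])[0]
    reserved = first_word >> 27
    length = (first_word >> 8) & 0x7FFFF  # 19-bit mask
    hop_limit = first_word & 0xFF
    src = ipaddress.IPv6Address(packet[4:20])   # 16-byte source IP
    dst = ipaddress.IPv6Address(packet[20:36])  # 16-byte destination IP
    return reserved, length, hop_limit, src, dst
```

A forwarding node would decrement the hop limit field parsed here and drop the packet once it reaches 0, as described above.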
Such an empty packet is not useful in practice, however, as there is no data to interpret.
diff --git a/collections/manual/documentation/system_administrators/mycelium/information.md b/collections/manual/documentation/system_administrators/mycelium/information.md
new file mode 100644
index 0000000..d900e82
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/mycelium/information.md
@@ -0,0 +1,139 @@
+

<h1>Additional Information</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Connect to Other Nodes](#connect-to-other-nodes)
+- [Possible Peers](#possible-peers)
+- [Default Port](#default-port)
+- [Check Network Information](#check-network-information)
+- [Test the Network](#test-the-network)
+- [Key Pair](#key-pair)
+- [Running without TUN interface](#running-without-tun-interface)
+- [API](#api)
+- [Message System](#message-system)
+- [Inspecting Node Keys](#inspecting-node-keys)
+
+***
+
+## Introduction
+
+We provide additional information concerning Mycelium and how to properly use it.
+
+## Connect to Other Nodes
+
+If you want to connect to other nodes, you can specify their listening address as
+part of the command (combined with the protocol they are listening on, usually TCP):
+
+```sh
+mycelium --peers tcp://83.231.240.31:9651 quic://185.206.122.71:9651
+```
+
+If the default TUN interface (e.g. utun3) is already in use, you can set a different TUN interface:
+
+```sh
+mycelium --peers tcp://83.231.240.31:9651 quic://185.206.122.71:9651 --tun-name utun9
+```
+
+## Possible Peers
+
+Here are some possible peers.
+
+```
+tcp://146.185.93.83:9651
+quic://83.231.240.31:9651
+quic://185.206.122.71:9651
+tcp://[2a04:f340:c0:71:28cc:b2ff:fe63:dd1c]:9651
+tcp://[2001:728:1000:402:78d3:cdff:fe63:e07e]:9651
+quic://[2a10:b600:1:0:ec4:7aff:fe30:8235]:9651
+```
+
+## Default Port
+
+By default, the node will listen on port `9651`, though this can be overwritten with the `-p` flag.
+
+## Check Network Information
+
+You can check your Mycelium network information by running the following line:
+
+```bash
+mycelium inspect --json
+```
+
+Where a typical output would be:
+
+```
+{
+  "publicKey": "abd16194646defe7ad2318a0f0a69eb2e3fe939c3b0b51cf0bb88bb8028ecd1d",
+  "address": "3c4:c176:bf44:b2ab:5e7e:f6a:b7e2:11ca"
+}
+```
+
+## Test the Network
+
+You can easily test that the network works by pinging anyone in the network:
+
+```
+ping6 3c4:c176:bf44:b2ab:5e7e:f6a:b7e2:11ca
+```
+
+## Key Pair
+
+The node uses an `x25519` key pair from which its identity is derived. The private key of this key pair
+is saved in a local file (32 bytes in binary format). You can specify the path to this file with the
+`-k` flag. By default, the file is saved in the current working directory as `priv_key.bin`.
+
+## Running without TUN interface
+
+It is possible to run the system without creating a TUN interface, by starting with the `--no-tun` flag.
+Obviously, this means that your node won't be able to send or receive L3 traffic. There is no interface
+to send packets on, and consequently no interface to send received packets out of. From the point of view
+of other nodes, your node will simply drop all incoming L3 traffic destined for it. The node **will still
+route traffic** as normal. It takes part in routing, exchanges route info, and forwards packets not
+intended for itself.
+
+The node also still allows access to the [message subsystem](#message-system).
+
+## API
+
+The node starts an HTTP API, which by default listens on `localhost:8989`. A different listening address
+can be specified on the CLI when starting the system through the `--api-server-addr` flag. The API
+allows access to [send and receive messages](#message-system), and will later be expanded to allow
+admin functionality on the system. Note that messages are sent using the identity of the node, and a
+future admin API can be used to change the system behavior. As such, care should be taken that this
+API is not accessible to unauthorized users.
+
+## Message System
+
+A message system is provided which allows users to send a message, which is essentially just "some data",
+to a remote. Since the system is end-to-end encrypted, a receiver of a message is sure of the authenticity
+and confidentiality of the content. The system does not interpret the data in any way and handles it
+as an opaque block of bytes. 
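As a sketch of how a client could push such a message over the HTTP API (assuming the default API address mentioned above; the helper names are our own, and payloads are base64 encoded as documented in the API YAML):

```python
import base64
import json
import urllib.request

API = "http://localhost:8989/api/v1"  # default API server address

def build_push_body(dst_pk: str, data: bytes) -> bytes:
    """Build the JSON body for pushing a message to a node identified
    by its hex-encoded public key. The payload is base64 encoded."""
    return json.dumps({
        "dst": {"pk": dst_pk},
        "payload": base64.b64encode(data).decode(),
    }).encode()

def push_message(dst_pk: str, data: bytes, reply_timeout: int = 60):
    """POST the message and wait up to `reply_timeout` seconds for a reply."""
    req = urllib.request.Request(
        f"{API}/messages?reply_timeout={reply_timeout}",
        data=build_push_body(dst_pk, data),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

This is just one way to drive the API; the curl examples in the Message section show the same calls from the shell.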
Messages are sent with a deadline. This means the system continuously
+tries to send (part of) the message, until it either succeeds, or the deadline expires. This happens
+similarly to the way TCP handles data. Messages are transmitted in chunks, which are embedded in the
+same data stream used by L3 packets. As such, intermediate nodes can't distinguish between regular L3
+and message data.
+
+The primary way to interact with the message system is through [the API](#api). The message API is
+documented [here](./api_yaml.md). For some more info about how to
+use the message system, see [the Message section](./message.md).
+
+
+## Inspecting Node Keys
+
+Using the `inspect` subcommand, you can view the address associated with a public key. If no public key is provided, the node will show
+its own public key. In either case, the derived address is also printed. You can specify the path to the private key with the `-k` flag.
+If the file does not exist, a new private key will be generated. The optional `--json` flag can be used to print the information in JSON
+format.
+
+```sh
+mycelium inspect a47c1d6f2a15b2c670d3a88fbe0aeb301ced12f7bcb4c8e3aa877b20f8559c02
+```
+
+Where the output could be something like this:
+
+```sh
+Public key: a47c1d6f2a15b2c670d3a88fbe0aeb301ced12f7bcb4c8e3aa877b20f8559c02
+Address: 27f:b2c5:a944:4dad:9cb1:da4:8bf7:7e65
+```
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/mycelium/installation.md b/collections/manual/documentation/system_administrators/mycelium/installation.md
new file mode 100644
index 0000000..7f403c2
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/mycelium/installation.md
@@ -0,0 +1,48 @@
+

<h1>Installation</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Full VM Example](#full-vm-example) + +*** + +## Introduction + +In this section, we cover how to install Mycelium. For this guide, we will show the steps on a full VM running on the TFGrid. + +Currently, Linux, macOS and Windows are supported. On Windows, you must have `wintun.dll` in the same directory you are executing the binary from. + +## Full VM Example + +- Deploy a Full VM with Planetary network and SSH into the VM +- Update the system + ``` + apt update + ``` +- Download the latest Mycelium release: [https://github.com/threefoldtech/mycelium/releases/latest](https://github.com/threefoldtech/mycelium/releases/latest) + ``` + wget https://github.com/threefoldtech/mycelium/releases/download/v0.4.0/mycelium-x86_64-unknown-linux-musl.tar.gz + ``` +- Extract Mycelium + ``` + tar -xvf mycelium-x86_64-unknown-linux-musl.tar.gz + ``` +- Move Mycelium to your path + ``` + mv mycelium /usr/local/bin + ``` +- Start Mycelium + ``` + mycelium --peers tcp://83.231.240.31:9651 quic://185.206.122.71:9651 --tun-name utun2 + ``` +- Open another terminal +- Check the Mycelium connection information (address: ...) + ``` + mycelium inspect --json + ``` +- Ping the VM from another machine with IPv6 + ``` + ping6 mycelium_address + ``` \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/mycelium/message.md b/collections/manual/documentation/system_administrators/mycelium/message.md new file mode 100644 index 0000000..f132367 --- /dev/null +++ b/collections/manual/documentation/system_administrators/mycelium/message.md @@ -0,0 +1,90 @@ +

<h1>Message Subsystem</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Curl Examples](#curl-examples)
+- [Mycelium Binary Examples](#mycelium-binary-examples)
+
+***
+
+## Introduction
+
+The message subsystem can be used to send arbitrary-length messages to receivers. A receiver is any
+other node in the network. It can be identified either by its public key or by an IP address in its announced
+range. The message subsystem can be interacted with either via the HTTP API, which is
+[documented here](./api_yaml.md), or via the `mycelium` binary. By default, the system does not interpret
+the data in any way. When using the binary, the message is slightly modified to include an optional
+topic at the start of the message. Note that in the HTTP API, all messages are encoded in base64. This
+might make it difficult to consume these messages without additional tooling.
+
+## Curl Examples
+
+These examples assume you have at least 2 nodes running, and that they are both part of the same network.
+
+Send a message on node1, waiting up to 2 minutes for a possible reply:
+
+```bash
+curl -v -H 'Content-Type: application/json' -d '{"dst": {"pk": "bb39b4a3a4efd70f3e05e37887677e02efbda14681d0acd3882bc0f754792c32"}, "payload": "xuV+"}' http://localhost:8989/api/v1/messages\?reply_timeout\=120
+```
+
+Listen for a message on node2. Note that messages received while nothing is listening are added to
+a queue for later consumption. Wait for up to 1 minute.
+
+```bash
+curl -v http://localhost:8989/api/v1/messages\?timeout\=60
+```
+
+The system will (immediately) receive our previously sent message:
+
+```json
+{"id":"e47b25063912f4a9","srcIp":"34f:b680:ba6e:7ced:355f:346f:d97b:eecb","srcPk":"955bf6bea5e1150fd8e270c12e5b2fc08f08f7c5f3799d10550096cc137d671b","dstIp":"2e4:9ace:9252:630:beee:e405:74c0:d876","dstPk":"bb39b4a3a4efd70f3e05e37887677e02efbda14681d0acd3882bc0f754792c32","payload":"xuV+"}
+```
+
+To send a reply, we can post a message on the reply path, with the received message `id` (still on
+node2):
+
+```bash
+curl -H 'Content-Type: application/json' -d '{"dst": {"pk":"955bf6bea5e1150fd8e270c12e5b2fc08f08f7c5f3799d10550096cc137d671b"}, "payload": "xuC+"}' http://localhost:8989/api/v1/messages/reply/e47b25063912f4a9
+```
+
+If you did this fast enough, the initial sender (node1) will now receive the reply.
+
+## Mycelium Binary Examples
+
+As explained above, while using the binary the message is slightly modified to insert the optional
+topic. As such, when using the binary to send messages, it is suggested to make sure the receiver is
+also using the binary to listen for messages. The options discussed here do not cover all possibilities;
+use the `--help` flag (`mycelium message send --help` and `mycelium message receive --help`) for a
+full overview.
+
+Once again, send a message, this time using a topic (example.topic). Note that there are no constraints
+on what a valid topic is, other than that it is valid UTF-8, and at most 255 bytes in size. The `--wait`
+flag can be used to indicate that we are waiting for a reply. If it is set, we can also use an additional
+`--timeout` flag to govern exactly how long (in seconds) to wait. The default is to wait forever.
+
+```bash
+mycelium message send 2e4:9ace:9252:630:beee:e405:74c0:d876 'this is a message' -t example.topic --wait
+```
+
+On the second node, listen for messages with this topic. 
If a different topic is used, the previous +message won't be received. If no topic is set, all messages are received. An optional timeout flag +can be specified, which indicates how long to wait for. Absence of this flag will cause the binary +to wait forever. + +```bash +mycelium message receive -t example.topic +``` + +Again, if the previous command was executed a message will be received immediately: + +```json +{"id":"4a6c956e8d36381f","topic":"example.topic","srcIp":"34f:b680:ba6e:7ced:355f:346f:d97b:eecb","srcPk":"955bf6bea5e1150fd8e270c12e5b2fc08f08f7c5f3799d10550096cc137d671b","dstIp":"2e4:9ace:9252:630:beee:e405:74c0:d876","dstPk":"bb39b4a3a4efd70f3e05e37887677e02efbda14681d0acd3882bc0f754792c32","payload":"this is a message"} +``` + +And once again, we can use the ID from this message to reply to the original sender, who might be waiting +for this reply (notice we used the hex encoded public key to identify the receiver here, rather than an IP): + +```bash +mycelium message send 955bf6bea5e1150fd8e270c12e5b2fc08f08f7c5f3799d10550096cc137d671b "this is a reply" --reply-to 4a6c956e8d36381f +``` diff --git a/collections/manual/documentation/system_administrators/mycelium/mycelium_toc.md b/collections/manual/documentation/system_administrators/mycelium/mycelium_toc.md new file mode 100644 index 0000000..817771d --- /dev/null +++ b/collections/manual/documentation/system_administrators/mycelium/mycelium_toc.md @@ -0,0 +1,14 @@ + +

<h1>Mycelium</h1>

+ +In this section, we present [Mycelium](https://github.com/threefoldtech/mycelium), an end-to-end encrypted IPv6 overlay network. + +

<h2>Table of Contents</h2>

+ +- [Overview](./overview.md) +- [Installation](./installation.md) +- [Additional Information](./information.md) +- [Message](./message.md) +- [Packet](./packet.md) +- [Data Packet](./data_packet.md) +- [API YAML](./api_yaml.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/mycelium/overview.md b/collections/manual/documentation/system_administrators/mycelium/overview.md new file mode 100644 index 0000000..d7778a6 --- /dev/null +++ b/collections/manual/documentation/system_administrators/mycelium/overview.md @@ -0,0 +1,33 @@ + +

<h1>Overview</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Features](#features)
+- [Testing](#testing)
+
+***
+
+## Introduction
+
+Mycelium is an end-to-end encrypted IPv6 overlay network written in Rust. Each node that joins the overlay network receives an overlay network IP in the 400::/7 range.
+
+The overlay network uses some of the core principles of the [Babel routing protocol](https://www.irif.fr/~jch/software/babel).
+
+
+## Features
+
+- Mycelium is locality aware: it will look for the shortest path between nodes.
+- All traffic between the nodes is end-to-end encrypted.
+- Traffic can be routed over nodes of friends, in a location-aware way.
+- If a physical link goes down, Mycelium will automatically reroute your traffic.
+- The IP address is IPv6 and linked to your private key.
+- A simple, reliable message bus is implemented on top of Mycelium.
+- Mycelium supports multiple ways to communicate (QUIC, TCP, ...), and we are working on hole punching for QUIC, which means P2P traffic without middlemen for NATted networks, e.g. most homes.
+- Scalability is very important for us. We tried many overlay networks before and got stuck on all of them, so we are trying to design a network which scales to a planetary level.
+- You can run Mycelium without TUN and only use it as a reliable message bus.
+
+## Testing
+
+We are looking for lots of testers to push the system. Visit the [Mycelium repository](https://github.com/threefoldtech/mycelium) to contribute. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/mycelium/packet.md b/collections/manual/documentation/system_administrators/mycelium/packet.md new file mode 100644 index 0000000..4ee2215 --- /dev/null +++ b/collections/manual/documentation/system_administrators/mycelium/packet.md @@ -0,0 +1,33 @@ + +

<h1>Packet</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Packet Header](#packet-header) + +*** + +## Introduction + + +A `Packet` is the largest communication object between established `peers`. All communication is done +via these `packets`. The `packet` itself consists of a fixed size header, and a variable size body. +The body contains a more specific type of data. + +## Packet Header + +The packet header has a fixed size of 4 bytes, with the following layout: + +``` + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ +| Version | Type | Reserved | ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ +``` + +The first byte is used to indicate the version of the protocol. Currently, only version 1 is supported +(0x01). The next byte is used to indicate the type of the body. `0x00` indicates a data packet, while +`0x01` indicates a control packet. The remaining 16 bits are currently reserved, and should be set to +all 0. diff --git a/collections/manual/documentation/system_administrators/pulumi/img/pulumi_logo.svg b/collections/manual/documentation/system_administrators/pulumi/img/pulumi_logo.svg new file mode 100644 index 0000000..26d4c65 --- /dev/null +++ b/collections/manual/documentation/system_administrators/pulumi/img/pulumi_logo.svg @@ -0,0 +1,7 @@ + + + + + + + diff --git a/collections/manual/documentation/system_administrators/pulumi/pulumi_deployment_details.md b/collections/manual/documentation/system_administrators/pulumi/pulumi_deployment_details.md new file mode 100644 index 0000000..5dc808c --- /dev/null +++ b/collections/manual/documentation/system_administrators/pulumi/pulumi_deployment_details.md @@ -0,0 +1,449 @@ +

<h1>Deployment Details</h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Installation](#installation)
+- [Essential Workflow](#essential-workflow)
+  - [State](#state)
+  - [Creating an Empty Stack](#creating-an-empty-stack)
+  - [Bringing up the Infrastructure](#bringing-up-the-infrastructure)
+  - [Destroy the Infrastructure](#destroy-the-infrastructure)
+  - [Pulumi Makefile](#pulumi-makefile)
+- [Creating a Network](#creating-a-network)
+  - [Pulumi File](#pulumi-file)
+- [Creating a Virtual Machine](#creating-a-virtual-machine)
+- [Kubernetes](#kubernetes)
+- [Creating a Domain](#creating-a-domain)
+  - [Example of a Simple Domain Prefix](#example-of-a-simple-domain-prefix)
+  - [Example of a Fully Controlled Domain](#example-of-a-fully-controlled-domain)
+- [Conclusion](#conclusion)
+
+***
+
+## Introduction
+
+We present here noteworthy details concerning the different types of deployments that are possible with the ThreeFold Pulumi plugin.
+
+Please note that the Pulumi plugin for ThreeFold Grid is not yet officially published. We look forward to your feedback on this project.
+
+## Installation
+
+If this isn't already done, [install Pulumi](./pulumi_install.md) on your machine.
+
+## Essential Workflow
+
+### State
+
+We will create a state directory and tell Pulumi to use that local directory to manage the state; for the sake of testing, there is no need to use a cloud backend managed by Pulumi or another provider.
+
+```sh
+ mkdir ${current_dir}/state
+ pulumi login --cloud-url file://${current_dir}/state
+```
+
+### Creating an Empty Stack
+
+```sh
+ pulumi stack init test
+```
+
+### Bringing up the Infrastructure
+
+```sh
+ pulumi up --yes
+```
+
+Here, we create an empty stack using `stack init` and give it the name `test`.
+Then, to bring up the infrastructure, we execute `pulumi up --yes`.
+ +> The `pulumi up` command shows the plan before agreeing to execute it + +### Destroy the Infrastructure + +```sh + pulumi destroy --yes + pulumi stack rm --yes + pulumi logout +``` + +### Pulumi Makefile + +In every example directory, you will find a project file `Pulumi.yaml` and a `Makefile` to reduce the amount of typing: + +```Makefile +current_dir = $(shell pwd) + +run: + rm -rf ${current_dir}/state + mkdir ${current_dir}/state + pulumi login --cloud-url file://${current_dir}/state + pulumi stack init test + pulumi up --yes + +destroy: + pulumi destroy --yes + pulumi stack rm --yes + pulumi logout +``` + +This means that, to execute, you just need to type `make run` and to destroy, you need to type `make destroy`. + +## Creating a Network + +We address here how to create a [network](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/network). + +### Pulumi File + +You can find the original file [here](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/network/Pulumi.yaml). + +```yml +name: pulumi-provider-grid +runtime: yaml + +plugins: + providers: + - name: grid + path: ../.. + +resources: + provider: + type: pulumi:providers:grid + properties: + mnemonic: + + scheduler: + type: grid:internal:Scheduler + options: + provider: ${provider} + properties: + farm_ids: [1] + + network: + type: grid:internal:Network + options: + provider: ${provider} + dependsOn: + - ${scheduler} + properties: + name: testing + description: test network + nodes: + - ${scheduler.nodes[0]} + ip_range: 10.1.0.0/16 + +outputs: + node_deployment_id: ${network.node_deployment_id} + nodes_ip_range: ${network.nodes_ip_range} +``` + +We will now go through this file section by section to properly understand what is happening. + +```yml +name: pulumi-provider-grid +runtime: yaml +``` + +- name is for the project name (can be anything) +- runtime: the runtime we are using can be code in yaml, python, go, etc. 
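+
+For instance, if the same project were written against the Go runtime instead of YAML, the project header might look like this (an illustrative sketch, not a file from the examples repository):
+
+```yaml
+name: pulumi-provider-grid
+runtime: go
+```
+
+The rest of this walkthrough sticks to the `yaml` runtime, where the resources are declared directly in the project file.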
+
+```yml
+plugins:
+  providers:
+    - name: grid
+      path: ../..
+
+```
+
+Here, we define the plugins we are using within our project and their locations. Note that we use `../..` due to the repository hierarchy.
+
+```yml
+resources:
+  provider:
+    type: pulumi:providers:grid
+    properties:
+      mnemonic:
+
+```
+
+We then start by initializing the resources. The provider which we loaded in the plugins section is also a resource that has properties (the main one for now is just the TFChain mnemonic).
+
+```yaml
+  scheduler:
+    type: grid:internal:Scheduler
+    options:
+      provider: ${provider}
+    properties:
+      farm_ids: [1]
+```
+
+Then, we create a scheduler `grid:internal:Scheduler` that does the planning for us. Instead of being too specific about node IDs, we just give it some generic information. For example, "I want to work against these data centers (farms)". As long as the necessary criteria are provided, the scheduler can be more specific in the planning and select the appropriate resources available on the TFGrid.
+
+```yaml
+  network:
+    type: grid:internal:Network
+    options:
+      provider: ${provider}
+      dependsOn:
+        - ${scheduler}
+    properties:
+      name: testing
+      description: test network
+      nodes:
+        - ${scheduler.nodes[0]}
+      ip_range: 10.1.0.0/16
+```
+
+Now that we have created the scheduler, we can go ahead and create the network resource `grid:internal:Network`. Please note that the network depends on the scheduler's existence; if we removed the `dependsOn` section, the scheduler and the network would be created in parallel. We then proceed to specify the network resource properties, e.g. the name, the description, which nodes to deploy our network on, and the IP range of the network. In our case, we only choose one node.
+
+To access information related to our deployment, we set the **outputs** section. This will display results that we can use, or reuse, while we develop our infrastructure further.
+ +```yaml +outputs: + node_deployment_id: ${network.node_deployment_id} + nodes_ip_range: ${network.nodes_ip_range} +``` + +## Creating a Virtual Machine + +Now, we will check an [example](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/virtual_machine) on how to create a virtual machine. + +Just like we've seen above, we will have two files `Makefile` and `Pulumi.yaml` where we describe the infrastructure. + +```yml +name: pulumi-provider-grid +runtime: yaml + +plugins: + providers: + - name: grid + path: ../.. + +resources: + provider: + type: pulumi:providers:grid + properties: + mnemonic: + + scheduler: + type: grid:internal:Scheduler + options: + provider: ${provider} + properties: + mru: 256 + sru: 2048 + farm_ids: [1] + + network: + type: grid:internal:Network + options: + provider: ${provider} + dependsOn: + - ${scheduler} + properties: + name: test + description: test network + nodes: + - ${scheduler.nodes[0]} + ip_range: 10.1.0.0/16 + + deployment: + type: grid:internal:Deployment + options: + provider: ${provider} + dependsOn: + - ${network} + properties: + node_id: ${scheduler.nodes[0]} + name: deployment + network_name: test + vms: + - name: vm + flist: https://hub.grid.tf/tf-official-apps/base:latest.flist + entrypoint: "/sbin/zinit init" + network_name: test + cpu: 2 + memory: 256 + planetary: true + mounts: + - disk_name: data + mount_point: /app + env_vars: + SSH_KEY: + + disks: + - name: data + size: 2 + +outputs: + node_deployment_id: ${deployment.node_deployment_id} + ygg_ip: ${deployment.vms_computed[0].ygg_ip} +``` + +We have a scheduler, and a network just like before. But now, we also have a deployment `grid:internal:Deployment` object that can have one or more disks and virtual machines. 
+ +```yaml +deployment: + type: grid:internal:Deployment + options: + provider: ${provider} + dependsOn: + - ${network} + properties: + node_id: ${scheduler.nodes[0]} + name: deployment + network_name: test + vms: + - name: vm + flist: https://hub.grid.tf/tf-official-apps/base:latest.flist + entrypoint: "/sbin/zinit init" + network_name: test + cpu: 2 + memory: 256 + planetary: true + mounts: + - disk_name: data + mount_point: /app + env_vars: + SSH_KEY: + + disks: + - name: data + size: 2 +``` + +The deployment can be linked to a network using `network_name` and can have virtual machines in the `vms` section, and disks in the `disks` section. The disk can be linked and mounted in the VM if `disk_name` is used in the `mounts` section of the VM. + +We also specify a couple of essential properties, like how many virtual cores, how much memory, what FList to use, and the environment variables in the `env_vars` section. + +That's it! You can now execute `make run` to bring the infrastructure up. + +## Kubernetes + +We now see how to deploy a [Kubernetes cluster using Pulumi](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/kubernetes/Pulumi.yaml). + +```yaml + content was removed for brevity + kubernetes: + type: grid:internal:Kubernetes + options: + provider: ${provider} + dependsOn: + - ${network} + properties: + master: + name: kubernetes + node: ${scheduler.nodes[0]} + disk_size: 2 + planetary: true + cpu: 2 + memory: 2048 + + workers: + - name: worker1 + node: ${scheduler.nodes[0]} + disk_size: 2 + cpu: 2 + memory: 2048 + - name: worker2 + node: ${scheduler.nodes[0]} + disk_size: 2 + cpu: 2 + memory: 2048 + + token: t123456789 + network_name: test + ssh_key: + +outputs: + node_deployment_id: ${kubernetes.node_deployment_id} + ygg_ip: ${kubernetes.master_computed.ygg_ip} +``` + +Now, we define the Kubernetes resource `grid:internal:Kubernetes` that has master and workers blocks. 
You define almost everything like a normal VM, except for the flist. Also note that the token is the `cluster token`: it ensures that the workers and the master communicate properly.
+
+## Creating a Domain
+
+The ThreeFold Pulumi repository also covers examples on [how to work with TFGrid gateways](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/gateway_name/Pulumi.yaml).
+
+The basic idea is that you have a virtual machine workload on a specific IP, e.g. public IPv4, IPv6, or Planetary Network, and you want to access it using domains.
+
+There are two ways to achieve this: a simple version and a fully controlled version.
+
+- Simple domain version:
+  - subdomain.gent01.dev.grid.tf
+  - This is a generous service from ThreeFold to reserve a subdomain on a set of defined gateway domains like **gent01.dev.grid.tf**.
+- Fully controlled domain version:
+  - e.g. `mydomain.com`, where you manage the domain with the domain name provider.
+
+### Example of a Simple Domain Prefix
+
+We present here the file for a simple domain prefix.
+
+```yml
+  content was removed for brevity
+  scheduler:
+    type: grid:internal:Scheduler
+    options:
+      provider: ${provider}
+    properties:
+      mru: 256
+      farm_ids: [1]
+      ipv4: true
+      free_ips: 1
+
+  gatewayName:
+    type: grid:internal:GatewayName
+    options:
+      provider: ${provider}
+      dependsOn:
+        - ${scheduler}
+    properties:
+      name: pulumi
+      node_id: ${scheduler.nodes[0]}
+      backends:
+        - "http://69.164.223.208"
+
+outputs:
+  node_deployment_id: ${gatewayName.node_deployment_id}
+  fqdn: ${gatewayName.fqdn}
+
+```
+
+In this example, we create a gateway name resource `grid:internal:GatewayName` for the name `pulumi.gent01.dev.grid.tf`.
+
+Some things to note:
+
+- **pulumi** is the prefix we want to reserve.
+- This assumes that the gateway domain we received from the scheduler is the one managed by Freefarm, `gent01.dev.grid.tf`.
+- **backends:** defines a list of IPs to load balance against when a request for `pulumi.gent01.dev.grid.tf` is received on the gateway. + +### Example of a Fully Controlled Domain + +Here's an [example](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/gateway_fqdn/Pulumi.yaml) of a more complicated, but fully controlled domain. + +```yml + code removed for brevity + gatewayFQDN: + type: grid:internal:GatewayFQDN + options: + provider: ${provider} + dependsOn: + - ${deployment} + properties: + name: testing + node_id: 14 + fqdn: mydomain.com + backends: + - http://[${deployment.vms_computed[0].ygg_ip}]:9000 +``` + +Here, we informed the gateway that any request coming for the domain `mydomain.com` needs to be balanced through the backends. + +> Note: You need to create an A record for your domain (here `mydomain.com`) pointing to the gateway IP. + +## Conclusion + +We covered in this guide some basic details concerning the use of the ThreeFold Pulumi plugin. + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/pulumi/pulumi_examples.md b/collections/manual/documentation/system_administrators/pulumi/pulumi_examples.md new file mode 100644 index 0000000..73363c3 --- /dev/null +++ b/collections/manual/documentation/system_administrators/pulumi/pulumi_examples.md @@ -0,0 +1,89 @@ +

<h1>Deployment Examples</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Set the Environment Variables](#set-the-environment-variables) +- [Test the Plugin](#test-the-plugin) +- [Destroy the Deployment](#destroy-the-deployment) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +[Pulumi](https://www.pulumi.com/) is an infrastructure as code platform that allows you to use familiar programming languages and tools to build, deploy, and manage cloud infrastructure. + +We present here the basic steps to test the examples within the [ThreeFold Pulumi](https://github.com/threefoldtech/pulumi-threefold) plugin repository. Once you've set the plugin and exported the necessary variables, the deployment process from one example to another is very similar. + +Please note that the Pulumi plugin for ThreeFold Grid is not yet officially published. We look forward to your feedback on this project. + +## Prerequisites + +There are a few things to set up before exploring Pulumi. Since we will be using the examples in the ThreeFold Pulumi repository, we must clone the repository before going further. + +* [Install Pulumi](./pulumi_install.md) on your machine +* Clone the **Pulumi-ThreeFold** repository + * ``` + git clone https://github.com/threefoldtech/pulumi-threefold + ``` +* Change directory + * ``` + cd ./pulumi-threefold + ``` + +## Set the Environment Variables + +You can export the environment variables before deploying workloads. + +* Export the network (**dev**, **qa**, **test**, **main**). Note that we are using the **dev** network by default. + * ``` + export NETWORK="Enter the network" + ``` +* Export your mnemonics. + * ``` + export MNEMONIC="Enter the mnemonics" + ``` +* Export the SSH_KEY (public key). + * ``` + export SSH_KEY="Enter the public Key" + ``` + +## Test the Plugin + +Once you've properly set the prerequisites, you can test many of the examples by simply going into the proper repository and running **make run**. 
+ +The different examples that work simply by running **make run** are the following: + +* virtual_machine +* kubernetes +* network +* zdb +* gateway_name + +We give an example with **virtual_machine**. + +* Go to the directory **virtual_machine** + * ``` + cd examples/virtual_machine + ``` +* Deploy the Pulumi workload with **make** + * ``` + make run + ``` + +Note: To test **gateway_fqdn**, you will have to update the fqdn in **Pulumi.yaml** and create an A record for your domain pointing to the gateway IP. + + +## Destroy the Deployment + +You can destroy your Pulumi deployment at any time with the following make command: + +``` +make destroy +``` + +## Questions and Feedback + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/pulumi/pulumi_install.md b/collections/manual/documentation/system_administrators/pulumi/pulumi_install.md new file mode 100644 index 0000000..93262a8 --- /dev/null +++ b/collections/manual/documentation/system_administrators/pulumi/pulumi_install.md @@ -0,0 +1,44 @@ +

<h1>Installing Pulumi</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Installation](#installation) +- [Verification](#verification) + +*** + +## Introduction + +You can install [Pulumi](https://www.pulumi.com/) on Linux, MAC and Windows. + +To install Pulumi, simply follow the steps provided in the [Pulumi documentation](https://www.pulumi.com/docs/install/). We cover the basic steps here for convenience. + +## Installation + +* Install on Linux + * ``` + curl -fsSL https://get.pulumi.com | sh + ``` +* Install on MAC + * ``` + brew install pulumi/tap/pulumi + ``` +* Install on Windows + * ``` + choco install pulumi + ``` + +For Linux, if you prefer checking the shell script before executing, please do so. + +For Windows, note that there are other installation methods. Read the [Pulumi documentation](https://www.pulumi.com/docs/install/) for more information. + +## Verification + +To verify that Pulumi is properly installed on your machine, use the following command: + +``` +pulumi version +``` + +If you need more in-depth information, e.g. installing a specific version or migrating from an older version, please check the [installation documentation](https://www.pulumi.com/docs/install/). \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/pulumi/pulumi_intro.md b/collections/manual/documentation/system_administrators/pulumi/pulumi_intro.md new file mode 100644 index 0000000..4595724 --- /dev/null +++ b/collections/manual/documentation/system_administrators/pulumi/pulumi_intro.md @@ -0,0 +1,130 @@ +

<h1>Introduction to Pulumi</h1>

+ +With Pulumi, you can express your infrastructure requirements using the languages you know and love, creating a seamless bridge between development and operations. Let's go! + +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Benefits of Using Pulumi](#benefits-of-using-pulumi) +- [Declarative vs. Imperative Programming](#declarative-vs-imperative-programming) + - [Declaration Programming Example](#declaration-programming-example) + - [Benefits of declarative programming in IaC](#benefits-of-declarative-programming-in-iac) +- [Concepts](#concepts) + - [Pulumi Project](#pulumi-project) + - [Project File](#project-file) + - [Stacks](#stacks) + - [Resources](#resources) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +[ThreeFold Grid](https://threefold.io) is a decentralized cloud infrastructure platform that provides developers with a secure and scalable way to deploy and manage their applications. It is based on a peer-to-peer network of nodes that are distributed around the world. + +[Pulumi](https://www.pulumi.com/) is a cloud-native infrastructure as code (IaC) platform that allows developers to manage their infrastructure using code. It supports a wide range of cloud providers, including ThreeFold Grid. + +The [Pulumi plugin for ThreeFold Grid](https://github.com/threefoldtech/pulumi-provider-grid) provides developers with a way to deploy and manage their ThreeFold Grid resources using Pulumi. This means that developers can benefit from all of the features and benefits that Pulumi offers, such as cross-cloud support, type safety, preview and diff, and parallel execution -still in the works-. + +Please note that the Pulumi plugin for ThreeFold Grid is not yet officially published. We look forward to your feedback on this project. + +## Benefits of Using Pulumi + +Here are some additional benefits of using the Pulumi plugin for ThreeFold Grid: + +- Increased productivity: Pulumi allows developers to manage their infrastructure using code, which can significantly increase their productivity. 
+- Reduced errors: Pulumi's type safety and preview and diff features can help developers catch errors early, which can reduce the number of errors that occur in production.
+- Improved collaboration: Pulumi programs can be shared with other developers, which can make it easier to collaborate on infrastructure projects.
+
+The Pulumi plugin for ThreeFold Grid is a powerful tool that can be used to deploy and manage a wide range of ThreeFold Grid applications. It is a good choice for developers who want to manage their ThreeFold Grid infrastructure using code and benefit from all of the features and benefits that Pulumi offers.
+
+## Declarative vs. Imperative Programming
+
+Declarative programming and imperative programming are two different ways to write code. Declarative programming focuses on describing the desired outcome, while imperative programming focuses on describing the steps needed to achieve that outcome.
+
+In the context of infrastructure as code (IaC), declarative programming allows you to describe your desired infrastructure state, and the IaC tool will figure out how to achieve it. Imperative programming, on the other hand, requires you to describe the steps needed to create and configure your infrastructure.
+
+### Declaration Programming Example
+
+Say I want an infrastructure of two virtual machines with X disks. With a declarative tool, I simply describe that desired state; behind the scenes, the tool then does the imperative work:
+
+1. Connect to the backend services.
+2. Send the requests to create the virtual machines.
+3. Sign the requests.
+4. Execute the requests in a careful order.
+
+As you can see, the declarative code is much simpler and easier to read. It also makes it easier to make changes to your infrastructure, as you only need to change the desired state, and the IaC tool will figure out how to achieve it.
+
+### Benefits of declarative programming in IaC
+
+There are several benefits to using declarative programming in IaC:
+
+- Simpler code: Declarative code is simpler and easier to read than imperative code.
This is because declarative code focuses on describing the desired outcome, rather than the steps needed to achieve it.
+- More concise code: Declarative code is also more concise than imperative code. This is because declarative code does not need to specify the steps needed to achieve the desired outcome.
+- Easier to make changes: Declarative code makes it easier to make changes to your infrastructure. This is because you only need to change the desired state, and the IaC tool will figure out how to achieve it.
+- More reliable code: Declarative code is more reliable than imperative code. This is because declarative code does not need to worry about the order in which the steps are executed. The IaC tool will take care of that.
+
+We will be taking a look at a couple of examples: for each one, we will link the source directory and go through it. But first, let's go through some concepts.
+
+## Concepts
+
+### Pulumi Project
+
+A Pulumi project is any folder that contains a **Pulumi.yaml** file. When in a subfolder, the closest enclosing folder with a **Pulumi.yaml** file determines the current project. A new project can be created with `pulumi new`. A project specifies which runtime to use and determines where to look for the program that should be executed during deployments. Supported runtimes are nodejs, python, dotnet, go, java, and yaml.
+
+### Project File
+
+The **Pulumi.yaml** project file specifies metadata about your project. The project file must begin with a capitalized P, although either a **.yml** or **.yaml** extension will work.
+
+A typical Pulumi.yaml file looks like the following:
+
+```yaml
+name: my-project
+runtime:
+  name: go
+  options:
+    binary: mybinary
+description: A minimal Go Pulumi program
+```
+
+or
+
+```yaml
+name: my-project
+runtime: yaml
+resources:
+  bucket:
+    type: aws:s3:Bucket
+
+```
+
+For more on projects or project files, please check the [Pulumi documentation](https://www.pulumi.com/docs/concepts/projects/).
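+
+Besides **Pulumi.yaml**, each stack can also have its own configuration file named **Pulumi.<stackname>.yaml**, stored next to the project file and usually managed through `pulumi config set`. A minimal sketch for a stack named `test` (the key and value here are illustrative):
+
+```yaml
+# Pulumi.test.yaml
+config:
+  my-project:someSetting: somevalue
+```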
+ +### Stacks + +Every Pulumi program is deployed to a [stack](https://www.pulumi.com/docs/concepts/stack/). A stack is an isolated, independently configurable instance of a Pulumi program. Stacks are commonly used to denote different phases of development (such as development, staging, and production) or feature branches (such as feature-x-dev). + +A project can have as many stacks as you need. By default, Pulumi creates a stack for you when you start a new project using the **pulumi new** command. + +### Resources + +Resources represent the fundamental units that make up your cloud infrastructure, such as a compute instance, a storage bucket, or a Kubernetes cluster. + +All infrastructure resources are described by one of two subclasses of the Resource class. These two subclasses are: + +- CustomResource: A custom resource is a cloud resource managed by a resource provider such as AWS, Microsoft Azure, Google Cloud, or Kubernetes. +- ComponentResource: A component resource is a logical grouping of other resources that creates a larger, higher-level abstraction that encapsulates its implementation details. + +Here's an example: + +```yaml +resources: + res: + type: the:resource:Type + properties: ...args + options: ...options +``` + +## Questions and Feedback + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/pulumi/pulumi_readme.md b/collections/manual/documentation/system_administrators/pulumi/pulumi_readme.md new file mode 100644 index 0000000..0cc318c --- /dev/null +++ b/collections/manual/documentation/system_administrators/pulumi/pulumi_readme.md @@ -0,0 +1,12 @@ +

<h1>Pulumi Plugin</h1>

+ +Welcome to the *Pulumi Plugin* section of the ThreeFold Manual! + +In this section, we will explore the dynamic world of infrastructure as code (IaC) through the lens of Pulumi, a versatile tool that empowers you to define, deploy, and manage infrastructure using familiar programming languages. + +

<h2>Table of Contents</h2>

+ +- [Introduction to Pulumi](./pulumi_intro.md) +- [Installing Pulumi](./pulumi_install.md) +- [Deployment Examples](./pulumi_examples.md) +- [Deployment Details](./pulumi_deployment_details.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/system_administrators.md b/collections/manual/documentation/system_administrators/system_administrators.md new file mode 100644 index 0000000..fe09e6a --- /dev/null +++ b/collections/manual/documentation/system_administrators/system_administrators.md @@ -0,0 +1,83 @@ +# ThreeFold System Administrators + +This section covers all practical tutorials for system administrators working on the ThreeFold Grid. + +For complementary information on ThreeFold grid and its cloud component, refer to the [Cloud](../../knowledge_base/cloud/cloud_toc.md) section. + +

<h2>Table of Contents</h2>

+ +- [Getting Started](./getstarted/tfgrid3_getstarted.md) + - [SSH Remote Connection](./getstarted/ssh_guide/ssh_guide.md) + - [SSH with OpenSSH](./getstarted/ssh_guide/ssh_openssh.md) + - [SSH with PuTTY](./getstarted/ssh_guide/ssh_putty.md) + - [SSH with WSL](./getstarted/ssh_guide/ssh_wsl.md) + - [WireGuard Access](./getstarted/ssh_guide/ssh_wireguard.md) + - [Remote Desktop and GUI](./getstarted/remote-desktop_gui/remote-desktop_gui.md) + - [Cockpit: a Web-based Interface for Servers](./getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md) + - [XRDP: an Open-Source Remote Desktop Protocol](./getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md) + - [Apache Guacamole: a Clientless Remote Desktop Gateway](./getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md) +- [Planetary Network](./getstarted/planetarynetwork.md) +- [TFGrid Services](./getstarted/tfgrid_services/tf_grid_services_readme.md) +- [GPU](./gpu/gpu_toc.md) + - [GPU Support](./gpu/gpu.md) +- [Terraform](./terraform/terraform_toc.md) + - [Overview](./terraform/terraform_readme.md) + - [Installing Terraform](./terraform/terraform_install.md) + - [Terraform Basics](./terraform/terraform_basics.md) + - [Full VM Deployment](./terraform/terraform_full_vm.md) + - [GPU Support](./terraform/terraform_gpu_support.md) + - [Resources](./terraform/resources/terraform_resources_readme.md) + - [Using Scheduler](./terraform/resources/terraform_scheduler.md) + - [Virtual Machine](./terraform/resources/terraform_vm.md) + - [Web Gateway](./terraform/resources/terraform_vm_gateway.md) + - [Kubernetes Cluster](./terraform/resources/terraform_k8s.md) + - [ZDB](./terraform/resources/terraform_zdb.md) + - [Quantum Safe Filesystem](./terraform/resources/terraform_qsfs.md) + - [QSFS on Micro VM](./terraform/resources/terraform_qsfs_on_microvm.md) + - [QSFS on Full VM](./terraform/resources/terraform_qsfs_on_full_vm.md) + - [CapRover](./terraform/resources/terraform_caprover.md) + - 
[Advanced](./terraform/advanced/terraform_advanced_readme.md) + - [Terraform Provider](./terraform/advanced/terraform_provider.md) + - [Terraform Provisioners](./terraform/advanced/terraform_provisioners.md) + - [Mounts](./terraform/advanced/terraform_mounts.md) + - [Capacity Planning](./terraform/advanced/terraform_capacity_planning.md) + - [Updates](./terraform/advanced/terraform_updates.md) + - [SSH Connection with Wireguard](./terraform/advanced/terraform_wireguard_ssh.md) + - [Set a Wireguard VPN](./terraform/advanced/terraform_wireguard_vpn.md) + - [Synced MariaDB Databases](./terraform/advanced/terraform_mariadb_synced_databases.md) + - [Nomad](./terraform/advanced/terraform_nomad.md) + - [Nextcloud Deployments](./terraform/advanced/terraform_nextcloud_toc.md) + - [Nextcloud All-in-One Deployment](./terraform/advanced/terraform_nextcloud_aio.md) + - [Nextcloud Single Deployment](./terraform/advanced/terraform_nextcloud_single.md) + - [Nextcloud Redundant Deployment](./terraform/advanced/terraform_nextcloud_redundant.md) + - [Nextcloud 2-Node VPN Deployment](./terraform/advanced/terraform_nextcloud_vpn.md) +- [Pulumi](./pulumi/pulumi_readme.md) + - [Introduction to Pulumi](./pulumi/pulumi_intro.md) + - [Installing Pulumi](./pulumi/pulumi_install.md) + - [Deployment Examples](./pulumi/pulumi_examples.md) + - [Deployment Details](./pulumi/pulumi_deployment_details.md) +- [Mycelium](./mycelium/mycelium_toc.md) + - [Overview](./mycelium/overview.md) + - [Installation](./mycelium/installation.md) + - [Additional Information](./mycelium/information.md) + - [Message](./mycelium/message.md) + - [Packet](./mycelium/packet.md) + - [Data Packet](./mycelium/data_packet.md) + - [API YAML](./mycelium/api_yaml.md) +- [Computer and IT Basics](./computer_it_basics/computer_it_basics.md) + - [CLI and Scripts Basics](./computer_it_basics/cli_scripts_basics.md) + - [Docker Basics](./computer_it_basics/docker_basics.md) + - [Git and GitHub 
Basics](./computer_it_basics/git_github_basics.md) + - [Firewall Basics](./computer_it_basics/firewall_basics/firewall_basics.md) + - [UFW Basics](./computer_it_basics/firewall_basics/ufw_basics.md) + - [Firewalld Basics](./computer_it_basics/firewall_basics/firewalld_basics.md) + - [File Transfer](./computer_it_basics/file_transfer.md) +- [Advanced](./advanced/advanced.md) + - [Token Transfer Keygenerator](./advanced/token_transfer_keygenerator.md) + - [Cancel Contracts](./advanced/cancel_contracts.md) + - [Contract Bills Reports](./advanced/contract_bill_report.md) + - [Listing Free Public IPs](./advanced/list_public_ips.md) + - [Redis](./advanced/grid3_redis.md) + - [IPFS](./advanced/ipfs/ipfs_toc.md) + - [IPFS on a Full VM](./advanced/ipfs/ipfs_fullvm.md) + - [IPFS on a Micro VM](./advanced/ipfs/ipfs_microvm.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/img/terraform_.png b/collections/manual/documentation/system_administrators/terraform/advanced/img/terraform_.png new file mode 100644 index 0000000..6caa9fb Binary files /dev/null and b/collections/manual/documentation/system_administrators/terraform/advanced/img/terraform_.png differ diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_advanced_readme.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_advanced_readme.md new file mode 100644 index 0000000..67168e0 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_advanced_readme.md @@ -0,0 +1,18 @@ +

<h1> Terraform Advanced </h1>

+ +

<h2>Table of Contents</h2>

+
+- [Terraform Provider](./terraform_provider.md)
+- [Terraform Provisioners](./terraform_provisioners.md)
+- [Mounts](./terraform_mounts.md)
+- [Capacity Planning](./terraform_capacity_planning.md)
+- [Updates](./terraform_updates.md)
+- [SSH Connection with Wireguard](./terraform_wireguard_ssh.md)
+- [Set a Wireguard VPN](./terraform_wireguard_vpn.md)
+- [Synced MariaDB Databases](./terraform_mariadb_synced_databases.md)
+- [Nomad](./terraform_nomad.md)
+- [Nextcloud Deployments](./terraform_nextcloud_toc.md)
+  - [Nextcloud All-in-One Deployment](./terraform_nextcloud_aio.md)
+  - [Nextcloud Single Deployment](./terraform_nextcloud_single.md)
+  - [Nextcloud Redundant Deployment](./terraform_nextcloud_redundant.md)
+  - [Nextcloud 2-Node VPN Deployment](./terraform_nextcloud_vpn.md)
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_capacity_planning.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_capacity_planning.md
new file mode 100644
index 0000000..63c96ad
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_capacity_planning.md
@@ -0,0 +1,159 @@
+

<h1> Capacity Planning </h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Example](#example)
+- [Preparing the Requests](#preparing-the-requests)
+
+***
+
+## Introduction
+
+In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/simple-dynamic/main.tf), we will discuss capacity planning on top of the TFGrid.
+
+## Example
+
+```terraform
+terraform {
+  required_providers {
+    grid = {
+      source = "threefoldtech/grid"
+    }
+  }
+}
+provider "grid" {
+}
+
+locals {
+  name = "testvm"
+}
+
+resource "grid_scheduler" "sched" {
+  requests {
+    name = "node1"
+    cru = 3
+    sru = 1024
+    mru = 2048
+    node_exclude = [33] # exclude node 33 from your search
+    public_ips_count = 0 # this deployment needs 0 public ips
+    public_config = false # this node does not need to have public config
+  }
+}
+
+resource "grid_network" "net1" {
+  name = local.name
+  nodes = [grid_scheduler.sched.nodes["node1"]]
+  ip_range = "10.1.0.0/16"
+  description = "newer network"
+}
+resource "grid_deployment" "d1" {
+  name = local.name
+  node = grid_scheduler.sched.nodes["node1"]
+  network_name = grid_network.net1.name
+  vms {
+    name = "vm1"
+    flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
+    cpu = 2
+    memory = 1024
+    entrypoint = "/sbin/zinit init"
+    env_vars = {
+      SSH_KEY = "PUT YOUR SSH KEY HERE"
+    }
+    planetary = true
+  }
+  vms {
+    name = "anothervm"
+    flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
+    cpu = 1
+    memory = 1024
+    entrypoint = "/sbin/zinit init"
+    env_vars = {
+      SSH_KEY = "PUT YOUR SSH KEY HERE"
+    }
+    planetary = true
+  }
+}
+output "vm1_ip" {
+  value = grid_deployment.d1.vms[0].ip
+}
+output "vm1_ygg_ip" {
+  value = grid_deployment.d1.vms[0].ygg_ip
+}
+
+output "vm2_ip" {
+  value = grid_deployment.d1.vms[1].ip
+}
+output "vm2_ygg_ip" {
+  value = grid_deployment.d1.vms[1].ygg_ip
+}
+```
+
+## Preparing the Requests
+
+```terraform
+resource "grid_scheduler" "sched" {
+  # a machine for the first server instance
+  requests {
+    name = 
"server1"
+    cru = 1
+    sru = 256
+    mru = 256
+  }
+  # a machine for the second server instance
+  requests {
+    name = "server2"
+    cru = 1
+    sru = 256
+    mru = 256
+  }
+  # a name workload
+  requests {
+    name = "gateway"
+    public_config = true
+  }
+}
+```
+
+Here we define a `list` of requests. Each request has a name and filter options, e.g. `cru`, `sru`, `mru`, `hru`, having `public_config` or not, `public_ips_count` for this deployment, whether or not this node should be `dedicated`, whether or not this node should be `distinct` from other nodes in this planner, `farm_id` to search in, nodes to exclude from the search in `node_exclude`, and whether or not this node should be `certified`.
+
+The full docs for the capacity planner `scheduler` are found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/scheduler.md).
+
+After that, we can reference the `grid_scheduler` object in our code with the request name, to be used instead of `node_id`.
+
+For example:
+
+```terraform
+resource "grid_deployment" "server1" {
+  node = grid_scheduler.sched.nodes["server1"]
+  network_name = grid_network.net1.name
+  ip_range = lookup(grid_network.net1.nodes_ip_range, grid_scheduler.sched.nodes["server1"], "")
+  vms {
+    name = "firstserver"
+    flist = "https://hub.grid.tf/omar0.3bot/omarelawady-simple-http-server-latest.flist"
+    cpu = 1
+    memory = 256
+    rootfs_size = 256
+    entrypoint = "/main.sh"
+    env_vars = {
+      SSH_KEY = "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52"
+      PATH = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+    }
+
+    planetary = true
+  }
+}
+```
+
+> Note: you need to call `distinct` while specifying the nodes in the network, because the scheduler may assign server1 and server2 to the same node. Example:
+
+```terraform
+  resource "grid_network" "net1" {
+    name = local.name
+    nodes = distinct(values(grid_scheduler.sched.nodes))
+    ip_range = "10.1.0.0/16"
+    description = "newer network"
+  }
+```
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md
new file mode 100644
index 0000000..2fe3394
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md
@@ -0,0 +1,585 @@
+

<h1> MariaDB Synced Databases Between Two VMs </h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Main Steps](#main-steps)
+- [Prerequisites](#prerequisites)
+- [Find Nodes with the ThreeFold Explorer](#find-nodes-with-the-threefold-explorer)
+- [Set the VMs](#set-the-vms)
+  - [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform)
+  - [Create the Terraform Files](#create-the-terraform-files)
+  - [Deploy the 3Nodes with Terraform](#deploy-the-3nodes-with-terraform)
+  - [SSH into the 3Nodes](#ssh-into-the-3nodes)
+  - [Preparing the VMs for the Deployment](#preparing-the-vms-for-the-deployment)
+  - [Test the Wireguard Connection](#test-the-wireguard-connection)
+- [Configure the MariaDB Database](#configure-the-mariadb-database)
+  - [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database)
+  - [Create User with Replication Grant](#create-user-with-replication-grant)
+  - [Verify the Access of the User](#verify-the-access-of-the-user)
+  - [Set the VMs to accept the MariaDB Connection](#set-the-vms-to-accept-the-mariadb-connection)
+    - [TF Template Worker Server Data](#tf-template-worker-server-data)
+    - [TF Template Master Server Data](#tf-template-master-server-data)
+  - [Set the MariaDB Databases on Both 3Nodes](#set-the-mariadb-databases-on-both-3nodes)
+- [Install and Set GlusterFS](#install-and-set-glusterfs)
+- [Conclusion](#conclusion)
+
+***
+
+# Introduction
+
+In this ThreeFold Guide, we show how to deploy a VPN with Wireguard and create a synced MariaDB database between the two servers using GlusterFS, a scalable network filesystem. Any change in one VM's database will be echoed in the other VM's database. This kind of deployment can lead to useful server architectures.
+
+
+
+# Main Steps
+
+This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out!
+
+To get an overview of the whole process, we present the main steps:
+
+* Download the dependencies
+* Find two 3Nodes on the TFGrid
+* Deploy and set the VMs with Terraform
+* Create a MariaDB database
+* Set GlusterFS
+
+
+
+# Prerequisites
+
+* [Install Terraform](https://developer.hashicorp.com/terraform/downloads)
+* [Install Wireguard](https://www.wireguard.com/install/)
+
+You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the documentation depending on your operating system (Linux, macOS and Windows).
+
+
+
+# Find Nodes with the ThreeFold Explorer
+
+We first need to decide on which 3Nodes we will be deploying our workload.
+
+We thus start by finding two 3Nodes with sufficient resources. For this current MariaDB guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for a 3Node with a public IPv4 address.
+
+* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
+* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
+* For proper understanding, we give further information on some relevant columns:
+  * `ID` refers to the node ID
+  * `Free Public IPs` refers to available IPv4 public IP addresses
+  * `HRU` refers to HDD storage
+  * `SRU` refers to SSD storage
+  * `MRU` refers to RAM (memory)
+  * `CRU` refers to virtual cores (vcores)
+* To quicken the process of finding proper 3Nodes, you can narrow down the search by adding filters:
+  * At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
+  * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
+    * `Free SRU (GB)`: 50
+    * `Free MRU (GB)`: 2
+    * `Total CRU (Cores)`: 1
+    * `Free Public IP`: 2
+  * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives.
This ensures that the shown 3Nodes have viable IP addresses.
+
+Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
+
+
+
+# Set the VMs
+## Create a Two Servers Wireguard VPN with Terraform
+
+For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
+
+To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file.
+Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy with the `main.tf` as is.
+
+On your local computer, create a new folder named `terraform` and a subfolder called `deployment-synced-db`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
+
+Modify the variable files to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3Nodes you will be deploying on.
+
+
+
+### Create the Terraform Files
+
+Open the terminal.
+
+* Go to the home folder
+  * ```
+    cd ~
+    ```
+
+* Create the folder `terraform` and the subfolder `deployment-synced-db`:
+  * ```
+    mkdir -p terraform/deployment-synced-db
+    ```
+  * ```
+    cd terraform/deployment-synced-db
+    ```
+* Create the `main.tf` file:
+  * ```
+    nano main.tf
+    ```
+
+* Copy the `main.tf` content and save the file.
+ +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "tfnodeid2" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1, var.tfnodeid2] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +resource "grid_deployment" "d2" { + disks { + name = "disk2" + size = var.size + } + name = local.name + node = var.tfnodeid2 + network_name = grid_network.net1.name + + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_zmachine2_ip" { + value = grid_deployment.d2.vms[0].ip +} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} +output "ygg_ip2" { + value = grid_deployment.d2.vms[0].ygg_ip +} + 
+output "ipv4_vm1" {
+  value = grid_deployment.d1.vms[0].computedip
+}
+
+output "ipv4_vm2" {
+  value = grid_deployment.d2.vms[0].computedip
+}
+
+```
+
+In this file, we name the first VM `vm1` and the second VM `vm2`. For ease of communication, in this guide we call `vm1` the master VM and `vm2` the worker VM.
+
+In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. If so, adjust the commands in this guide accordingly.
+
+* Create the `credentials.auto.tfvars` file:
+  * ```
+    nano credentials.auto.tfvars
+    ```
+
+* Copy the `credentials.auto.tfvars` content and save the file.
+  * ```
+    mnemonics = "..."
+    SSH_KEY = "..."
+
+    tfnodeid1 = "..."
+    tfnodeid2 = "..."
+
+    size = "50"
+    cpu = "1"
+    memory = "2048"
+    ```
+
+Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots by the content. Obviously, you can decide to increase or modify the quantity in the variables `size`, `cpu` and `memory`.
+
+
+
+### Deploy the 3Nodes with Terraform
+
+We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-synced-db` with the main and variables files.
+
+* Initialize Terraform:
+  * ```
+    terraform init
+    ```
+
+* Apply Terraform to deploy the VPN:
+  * ```
+    terraform apply
+    ```
+
+After deployment, take note of the 3Nodes' IPv4 addresses. You will need those addresses to SSH into the 3Nodes.
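As a reference point, a successful `terraform apply` ends by printing the outputs defined in `main.tf`. The values below are placeholders, not real addresses; your own deployment will show different values:

```
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

ipv4_vm1 = "203.0.113.10/24"
ipv4_vm2 = "203.0.113.20/24"
node1_zmachine1_ip = "10.1.3.2"
node1_zmachine2_ip = "10.1.4.2"
wg_config = <<EOT
[Interface]
...
EOT
```

If you do not see similar outputs, verify the `output` blocks in `main.tf` before moving on.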
+ +Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: + * ``` + terraform show + ``` + + + +### SSH into the 3Nodes + +* To [SSH into the 3Nodes](../../getstarted/ssh_guide/ssh_guide.md), write the following while making sure to set the proper IP address for each VM: + * ``` + ssh root@3node_IPv4_Address + ``` + + + +### Preparing the VMs for the Deployment + +* Update and upgrade the system + * ``` + apt update && sudo apt upgrade -y && sudo apt-get install apache2 -y + ``` +* After download, you might need to reboot the system for changes to be fully taken into account + * ``` + reboot + ``` +* Reconnect to the VMs + + + +### Test the Wireguard Connection + +We now want to ping the VMs using Wireguard. This will ensure the connection is properly established. + +First, we set Wireguard with the Terraform output. + +* On your local computer, take the Terraform's `wg_config` output and create a `wg.conf` file in the directory `/usr/local/etc/wireguard/wg.conf`. + * ``` + nano /usr/local/etc/wireguard/wg.conf + ``` + +* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The WireGuard output stands in between `EOT`. + +* Start the WireGuard on your local computer: + * ``` + wg-quick up wg + ``` + +* To stop the wireguard service: + * ``` + wg-quick down wg + ``` + +> Note: If it doesn't work and you already did a WireGuard connection with the same file from Terraform (from a previous deployment perhaps), do `wg-quick down wg`, then `wg-quick up wg`. +This should set everything properly. 
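For reference, the `wg_config` output pasted into `wg.conf` generally has the following shape. All values here are placeholders; use the exact content from your own Terraform output:

```
[Interface]
Address = 100.64.1.2/32
PrivateKey = <your wireguard private key>

[Peer]
PublicKey = <node wireguard public key>
AllowedIPs = 10.1.0.0/16, 100.64.0.0/16
PersistentKeepalive = 25
Endpoint = <node public IPv4>:<port>
```

If `wg-quick up wg` complains about a malformed file, make sure no extra characters (such as the `EOT` markers) were copied into `wg.conf`.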
+ +* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct: + * ``` + ping 10.1.3.2 + ``` + * ``` + ping 10.1.4.2 + ``` + +If you correctly receive the packets for the two VMs, you know that the VPN is properly set. + +For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md). + + + +# Configure the MariaDB Database + +## Download MariaDB and Configure the Database + +* Download the MariaDB server and client on both the master VM and the worker VM + * ``` + apt install mariadb-server mariadb-client -y + ``` +* Configure the MariaDB database + * ``` + nano /etc/mysql/mariadb.conf.d/50-server.cnf + ``` + * Do the following changes + * Add `#` in front of + * `bind-address = 127.0.0.1` + * Remove `#` in front of the following lines and replace `X` by `1` for the master VM and by `2` for the worker VM + ``` + #server-id = X + #log_bin = /var/log/mysql/mysql-bin.log + ``` + * Below the lines shown above add the following line: + ``` + binlog_do_db = tfdatabase + ``` + +* Restart MariaDB + * ``` + systemctl restart mysql + ``` + +* Launch Mariadb + * ``` + mysql + ``` + + + +## Create User with Replication Grant + +* Do the following on both the master and the worker + * ``` + CREATE USER 'repuser'@'%' IDENTIFIED BY 'password'; + GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ; + FLUSH PRIVILEGES; + show master status\G; + ``` + + + +## Verify the Access of the User +* Verify the access of repuser user + ``` + SELECT host FROM mysql.user WHERE User = 'repuser'; + ``` + * You want to see `%` in Host + + + +## Set the VMs to accept the MariaDB Connection + +### TF Template Worker Server Data + +* Write the following in the Worker VM + * ``` + CHANGE MASTER TO MASTER_HOST='10.1.3.2', + MASTER_USER='repuser', + 
MASTER_PASSWORD='password', + MASTER_LOG_FILE='mysql-bin.000001', + MASTER_LOG_POS=328; + ``` + * ``` + start slave; + ``` + * ``` + show slave status\G; + ``` + + + +### TF Template Master Server Data + +* Write the following in the Master VM + * ``` + CHANGE MASTER TO MASTER_HOST='10.1.4.2', + MASTER_USER='repuser', + MASTER_PASSWORD='password', + MASTER_LOG_FILE='mysql-bin.000001', + MASTER_LOG_POS=328; + ``` + * ``` + start slave; + ``` + * ``` + show slave status\G; + ``` + + + +## Set the MariaDB Databases on Both 3Nodes + +We now set the MariaDB database. You should choose your own username and password. The password should be the same for the master and worker VMs. + +* On the master VM, write: + ``` + CREATE DATABASE tfdatabase; + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON tfdatabase.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* On the worker VM, write: + ``` + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON tfdatabase.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* To see a database, write the following: + ``` + show databases; + ``` +* To see users on MariaDB: + ``` + select user from mysql.user; + ``` +* To exit MariaDB: + ``` + exit; + ``` + + + +# Install and Set GlusterFS + +We will now install and set [GlusterFS](https://www.gluster.org/), a free and open-source software scalable network filesystem. 
+
+* Install GlusterFS on both the master and worker VMs
+  * ```
+    add-apt-repository ppa:gluster/glusterfs-7 -y && apt install glusterfs-server -y
+    ```
+* Start the GlusterFS service on both VMs
+  * ```
+    systemctl start glusterd.service && systemctl enable glusterd.service
+    ```
+* Set the master to worker probe IP on the master VM:
+  * ```
+    gluster peer probe 10.1.4.2
+    ```
+
+* See the peer status on the worker VM:
+  * ```
+    gluster peer status
+    ```
+
+* Set the master and worker IP address on the master VM:
+  * ```
+    gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force
+    ```
+
+* Start Gluster:
+  * ```
+    gluster volume start vol1
+    ```
+
+* Check the status on the worker VM:
+  * ```
+    gluster volume status
+    ```
+
+* Mount the server with the master IP on the master VM:
+  * ```
+    mount -t glusterfs 10.1.3.2:/vol1 /var/www
+    ```
+
+* See if the mount is there on the master VM:
+  * ```
+    df -h
+    ```
+
+* Mount the server with the worker IP on the worker VM:
+  * ```
+    mount -t glusterfs 10.1.4.2:/vol1 /var/www
+    ```
+
+* See if the mount is there on the worker VM:
+  * ```
+    df -h
+    ```
+
+We now update the mount with the file `fstab` on both master and worker.
+
+* To prevent the mount from being aborted if the server reboots, write the following on both servers:
+  * ```
+    nano /etc/fstab
+    ```
+  * Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2):
+    * ```
+      10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
+      ```
+
+  * Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2):
+    * ```
+      10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0
+      ```
+
+The databases of both VMs are accessible in `/var/www`. This means that any change in either folder `/var/www` of each VM will be reflected in the same folder of the other VM. In other words, the databases are now synced in real-time.
+ + + +# Conclusion + +You now have two VMs syncing their MariaDB databases. This can be very useful for a plethora of projects requiring redundancy in storage. + +You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Wireguard, Terraform, MariaDB and GlusterFS. + +As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. + diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_mounts.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_mounts.md new file mode 100644 index 0000000..510e543 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_mounts.md @@ -0,0 +1,86 @@ +

<h1> Deploying a VM with Mounts Using Terraform </h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Example](#example)
+- [More Info](#more-info)
+
+***
+
+## Introduction
+
+In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/mounts/main.tf), we will see how to deploy a VM on the TFGrid and mount disks on it.
+
+## Example
+
+```terraform
+terraform {
+  required_providers {
+    grid = {
+      source = "threefoldtech/grid"
+    }
+  }
+}
+
+provider "grid" {
+}
+
+resource "grid_network" "net1" {
+  nodes = [2, 4]
+  ip_range = "10.1.0.0/16"
+  name = "network"
+  description = "newer network"
+}
+resource "grid_deployment" "d1" {
+  node = 2
+  network_name = grid_network.net1.name
+  ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "")
+  disks {
+    name = "data"
+    size = 10
+    description = "volume holding app data"
+  }
+  vms {
+    name = "vm1"
+    flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
+    cpu = 1
+    publicip = true
+    memory = 1024
+    entrypoint = "/sbin/zinit init"
+    mounts {
+      disk_name = "data"
+      mount_point = "/app"
+    }
+    env_vars = {
+      SSH_KEY = "PUT YOUR SSH KEY HERE"
+    }
+  }
+  vms {
+    name = "anothervm"
+    flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
+    cpu = 1
+    memory = 1024
+    entrypoint = "/sbin/zinit init"
+    env_vars = {
+      SSH_KEY = "PUT YOUR SSH KEY HERE"
+    }
+  }
+}
+output "wg_config" {
+  value = grid_network.net1.access_wg_config
+}
+output "node1_zmachine1_ip" {
+  value = grid_deployment.d1.vms[0].ip
+}
+output "node1_zmachine2_ip" {
+  value = grid_deployment.d1.vms[1].ip
+}
+output "public_ip" {
+  value = grid_deployment.d1.vms[0].computedip
+}
+```
+
+## More Info
+
+A complete list of Mount workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vmsmounts).
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_aio.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_aio.md new file mode 100644 index 0000000..16f3390 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_aio.md @@ -0,0 +1,140 @@ +

<h1> Nextcloud All-in-One Deployment </h1>

+ +

<h2>Table of Contents</h2>

+
+- [Introduction](#introduction)
+- [Deploy a Full VM](#deploy-a-full-vm)
+- [Set a Firewall](#set-a-firewall)
+- [Set the DNS Record for Your Domain](#set-the-dns-record-for-your-domain)
+- [Install Nextcloud All-in-One](#install-nextcloud-all-in-one)
+- [Set BorgBackup](#set-borgbackup)
+- [Conclusion](#conclusion)
+
+***
+
+## Introduction
+
+We present a quick way to install Nextcloud All-in-One on the TFGrid. This guide is based heavily on the Nextcloud documentation available [here](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/). It's mostly a simple adaptation to the TFGrid, with some additional information on how to correctly set the firewall and the DNS record for your domain.
+
+
+
+## Deploy a Full VM
+
+* Deploy a Full VM with the [TF Dashboard](../../getstarted/ssh_guide/ssh_openssh.md) or [Terraform](../terraform_full_vm.md)
+  * Minimum specs:
+    * IPv4 Address
+    * 2 vcores
+    * 4096 MB of RAM
+    * 50 GB of Storage
+* Take note of the VM IP address
+* SSH into the Full VM
+
+
+
+## Set a Firewall
+
+We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).
+
+It should already be installed on your system. If it is not, install it with the following command:
+
+```
+apt install ufw
+```
+
+For our security rules, we want to allow SSH, HTTP and HTTPS (443 and 8443).
+
+We thus add the following rules:
+
+* Allow SSH (port 22)
+  * ```
+    ufw allow ssh
+    ```
+* Allow HTTP (port 80)
+  * ```
+    ufw allow http
+    ```
+* Allow HTTPS (port 443)
+  * ```
+    ufw allow https
+    ```
+* Allow port 8443
+  * ```
+    ufw allow 8443
+    ```
+* Allow port 3478 for Nextcloud Talk
+  * ```
+    ufw allow 3478
+    ```
+
+* To enable the firewall, write the following:
+  * ```
+    ufw enable
+    ```
+
+* To see the current security rules, write the following:
+  * ```
+    ufw status verbose
+    ```
+
+You now have enabled the firewall with proper security rules for your Nextcloud deployment.
+
+
+
+## Set the DNS Record for Your Domain
+
+* Go to your domain name registrar (e.g. Namecheap)
+  * In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on:
+    * Type: A Record
+    * Host: @
+    * Value:
+    * TTL: Automatic
+  * It might take up to 30 minutes to set the DNS properly.
+  * To check if the A record has been registered, you can use a common DNS checker:
+    * ```
+      https://dnschecker.org/#A/
+      ```
+
+
+
+## Install Nextcloud All-in-One
+
+For the rest of the guide, we follow the steps available on the Nextcloud website's tutorial [How to Install the Nextcloud All-in-One on Linux](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/).
+
+* Install Docker
+  * ```
+    curl -fsSL get.docker.com | sudo sh
+    ```
+* Install Nextcloud AIO
+  * ```
+    sudo docker run \
+    --sig-proxy=false \
+    --name nextcloud-aio-mastercontainer \
+    --restart always \
+    --publish 80:80 \
+    --publish 8080:8080 \
+    --publish 8443:8443 \
+    --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
+    --volume /var/run/docker.sock:/var/run/docker.sock:ro \
+    nextcloud/all-in-one:latest
+    ```
+* Reach the AIO interface on your browser:
+  * ```
+    https://:8443
+    ```
+  * Example: `https://nextcloudwebsite.com:8443`
+* Take note of the Nextcloud password
+* Log in with the given password
+* Add your domain name and click `Submit`
+* Click `Start containers`
+* Click `Open your Nextcloud`
+
+You can now easily access Nextcloud AIO with your domain URL!
+
+
+## Set BorgBackup
+
+On the AIO interface, you can easily set BorgBackup. Since we are using Linux, we use the mounting directory `/mnt/backup`. Make sure to take note of the backup password.
+
+## Conclusion
+
+Most of the information in this guide can be found on the Nextcloud official website. We presented this guide to show another way to deploy Nextcloud on the TFGrid.
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md
new file mode 100644
index 0000000..940a270
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md
@@ -0,0 +1,908 @@
+
+ +We thus add the following rules: + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow HTTP (port 80) + * ``` + ufw allow http + ``` +* Allow HTTPS (port 443) + * ``` + ufw allow https + ``` +* Allow port 8443 + * ``` + ufw allow 8443 + ``` +* Allow port 3478 for Nextcloud Talk + * ``` + ufw allow 3478 + ``` + +* To enable the firewall, write the following: + * ``` + ufw enable + ``` + +* To see the current security rules, write the following: + * ``` + ufw status verbose + ``` + +You have now enabled the firewall with proper security rules for your Nextcloud deployment. + + + +## Set the DNS Record for Your Domain + +* Go to your domain name registrar (e.g. Namecheap) + * In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on: + * Type: A Record + * Host: @ + * Value: the IPv4 address of your VM + * TTL: Automatic + * It might take up to 30 minutes to set the DNS properly. + * To check if the A record has been registered, you can use a common DNS checker: + * ``` + https://dnschecker.org/#A/ + ``` + + + +## Install Nextcloud All-in-One + +For the rest of the guide, we follow the steps available in the Nextcloud website's tutorial [How to Install the Nextcloud All-in-One on Linux](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/). 
+ +* Install Docker + * ``` + curl -fsSL get.docker.com | sudo sh + ``` +* Install Nextcloud AIO + * ``` + sudo docker run \ + --sig-proxy=false \ + --name nextcloud-aio-mastercontainer \ + --restart always \ + --publish 80:80 \ + --publish 8080:8080 \ + --publish 8443:8443 \ + --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \ + --volume /var/run/docker.sock:/var/run/docker.sock:ro \ + nextcloud/all-in-one:latest + ``` +* Reach the AIO interface on your browser: + * ``` + https://:8443 + ``` + * Example: `https://nextcloudwebsite.com:8443` +* Take note of the Nextcloud password +* Log in with the given password +* Add your domain name and click `Submit` +* Click `Start containers` +* Click `Open your Nextcloud` + +You can now easily access Nextcloud AIO with your domain URL! + + +## Set BorgBackup + +On the AIO interface, you can easily set BorgBackup. Since we are using Linux, we use the mounting directory `/mnt/backup`. Make sure to take note of the backup password. + +## Conclusion + +Most of the information in this guide can be found on the Nextcloud official website. We presented this guide to show another way to deploy Nextcloud on the TFGrid. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md new file mode 100644 index 0000000..940a270 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md @@ -0,0 +1,908 @@ +

Nextcloud Redundant Deployment

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps](#main-steps) +- [Prerequisites](#prerequisites) +- [Find Nodes with the ThreeFold Explorer](#find-nodes-with-the-threefold-explorer) +- [Set the VMs](#set-the-vms) + - [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform) + - [Create the Terraform Files](#create-the-terraform-files) + - [Deploy the 3nodes with Terraform](#deploy-the-3nodes-with-terraform) + - [SSH into the 3nodes](#ssh-into-the-3nodes) + - [Preparing the VMs for the Deployment](#preparing-the-vms-for-the-deployment) + - [Test the Wireguard Connection](#test-the-wireguard-connection) +- [Create the MariaDB Database](#create-the-mariadb-database) + - [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database) + - [Create User with Replication Grant](#create-user-with-replication-grant) + - [Verify the Access of the User](#verify-the-access-of-the-user) + - [Set the VMs to Accept the MariaDB Connection](#set-the-vms-to-accept-the-mariadb-connection) + - [TF Template Worker Server Data](#tf-template-worker-server-data) + - [TF Template Master Server Data](#tf-template-master-server-data) + - [Set the Nextcloud User and Database](#set-the-nextcloud-user-and-database) +- [Install and Set GlusterFS](#install-and-set-glusterfs) +- [Install PHP and Nextcloud](#install-php-and-nextcloud) +- [Create a Subdomain with DuckDNS](#create-a-subdomain-with-duckdns) + - [Worker File for DuckDNS](#worker-file-for-duckdns) +- [Set Apache](#set-apache) +- [Access Nextcloud on a Web Browser with the Subdomain](#access-nextcloud-on-a-web-browser-with-the-subdomain) +- [Enable HTTPS](#enable-https) + - [Install Certbot](#install-certbot) + - [Set the Certbot with the DNS Domain](#set-the-certbot-with-the-dns-domain) + - [Verify HTTPS Automatic Renewal](#verify-https-automatic-renewal) +- [Set a Firewall](#set-a-firewall) +- [Conclusion](#conclusion) +- [Acknowledgements and 
References](#acknowledgements-and-references) + +*** + +# Introduction + +In this Threefold Guide, we deploy a redundant [Nextcloud](https://nextcloud.com/) instance that is continually synced on two different 3node servers running on the [Threefold Grid](https://threefold.io/). + +We will learn how to deploy two full virtual machines (Ubuntu 22.04) with [Terraform](https://www.terraform.io/). The Terraform deployment will be composed of a virtual private network (VPN) using [Wireguard](https://www.wireguard.com/). The two VMs will thus be connected in a private and secure network. Once this is done, we will link the two VMs together by setting up a [MariaDB](https://mariadb.org/) database and using [GlusterFS](https://www.gluster.org/). Then, we will install and deploy Nextcloud. We will add a DDNS (dynamic DNS) domain to the Nextcloud deployment. It will then be possible to connect to the Nextcloud instance over public internet. Nextcloud will be available over your computer and even your smart phone! We will also set HTTPS for the DDNS domain in order to make the Nextcloud instance as secure as possible. You are free to explore different DDNS options. In this guide, we will be using [DuckDNS](https://www.duckdns.org/) for simplicity. + +The advantage of this redundant Nextcloud deployment is obvious: if one of the two VMs goes down, the Nextcloud instance will still be accessible, as the other VM will take the lead. Also, the two VMs will be continually synced in real-time. If the master node goes down, the data will be synced to the worker node, and the worker node will become the master node. Once the master VM goes back online, the data will be synced to the master node and the master node will retake the lead as the master node. + +This kind of real-time backup of the database is not only limited to Nextcloud. You can use the same architecture to deploy different workloads while having the redundancy over two 3node servers. 
This architecture could be deployed over more than two 3nodes. Feel free to explore and let us know in the [Threefold Forum](http://forum.threefold.io/) if you come up with exciting and different variations of this kind of deployment. + +As always, if you have questions concerning this guide, you can write a post on the [Threefold Forum](http://forum.threefold.io/). + +Let's go! + + + +# Main Steps + +This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out! + +To get an overview of the whole process, we present the main steps: + +* Download the dependencies +* Find two 3nodes on the TF Grid +* Deploy and set the VMs with Terraform +* Create a MariaDB database +* Download and set GlusterFS +* Install PHP and Nextcloud +* Create a subdomain with DuckDNS +* Set Apache +* Access Nextcloud +* Add HTTPS protection +* Set a firewall + + + +# Prerequisites + +* [Install Terraform](../terraform_install.md) +* [Install Wireguard](https://www.wireguard.com/install/) + +You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the documentation for your operating system (Linux, macOS or Windows). + + + +# Find Nodes with the ThreeFold Explorer + +We first need to decide on which 3Nodes we will be deploying our workload. + +We thus start by finding two 3Nodes with sufficient resources. For this Nextcloud guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for 3Nodes that each have a public IPv4 address. 
+ +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +* Find two 3Nodes with suitable resources for the deployment and take note of their node IDs on the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding proper 3Nodes, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. + * `Free SRU (GB)`: 50 + * `Free MRU (GB)`: 2 + * `Total CRU (Cores)`: 1 + * `Free Public IP`: 2 + * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found two 3Nodes, take note of their node IDs. You will need to use those IDs when creating the Terraform files. + + + +# Set the VMs +## Create a Two Servers Wireguard VPN with Terraform + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is. 
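For reference, the two files described above sit together in a dedicated folder, laid out as follows:

```
terraform/
└── deployment-nextcloud/
    ├── main.tf                  # network and VM resources (left unchanged)
    └── credentials.auto.tfvars  # your mnemonics, SSH key, node IDs and sizing
```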
On your local computer, create a new folder named `terraform` and a subfolder called `deployment-nextcloud`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variables file to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3nodes you will be deploying on. + +### Create the Terraform Files + +Open the terminal. + +* Go to the home folder + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-nextcloud`: + * ``` + mkdir -p terraform/deployment-nextcloud + ``` + * ``` + cd terraform/deployment-nextcloud + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file. + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "tfnodeid2" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1, var.tfnodeid2] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +resource "grid_deployment" "d2" { + disks { + name = "disk2" + size = var.size + } + 
name = local.name + node = var.tfnodeid2 + network_name = grid_network.net1.name + + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_zmachine2_ip" { + value = grid_deployment.d2.vms[0].ip +} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} +output "ygg_ip2" { + value = grid_deployment.d2.vms[0].ygg_ip +} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +output "ipv4_vm2" { + value = grid_deployment.d2.vms[0].computedip +} + +``` + +In this file, we name the first VM as `vm1` and the second VM as `vm2`. In the guide, we call `vm1` as the master VM and `vm2` as the worker VM. + +In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. Change the codes in this guide accordingly. + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file. + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid1 = "..." + tfnodeid2 = "..." + + size = "50" + cpu = "1" + memory = "2048" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots by the content. Obviously, you can decide to set more storage (size). The memory and CPU should be sufficient for the Nextcloud deployment with the above numbers. + +### Deploy the 3nodes with Terraform + +We now deploy the VPN with Terraform. 
Make sure that you are in the correct folder `terraform/deployment-nextcloud` with the main and variables files. + +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy the VPN: + * ``` + terraform apply + ``` + +After deployments, take note of the 3nodes' IPv4 address. You will need those addresses to SSH into the 3nodes. + +### SSH into the 3nodes + +* To [SSH into the 3nodes](../../getstarted/ssh_guide/ssh_guide.md), write the following: + * ``` + ssh root@VM_IPv4_Address + ``` + +### Preparing the VMs for the Deployment + +* Update and upgrade the system + * ``` + apt update && apt upgrade -y && apt-get install apache2 -y + ``` +* After download, reboot the system + * ``` + reboot + ``` +* Reconnect to the VMs + + + +### Test the Wireguard Connection + +We now want to ping the VMs using Wireguard. This will ensure the connection is properly established. + +For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md). + +First, we set Wireguard with the Terraform output. + +* On your local computer, take the Terraform's `wg_config` output and create a `wg.conf` file in the directory `/etc/wireguard/wg.conf`. + * ``` + nano /etc/wireguard/wg.conf + ``` + +* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The Wireguard output stands in between `EOT`. + +* Start Wireguard on your local computer: + * ``` + wg-quick up wg + ``` + +* To stop the wireguard service: + * ``` + wg-quick down wg + ``` + +If it doesn't work and you already did a wireguard connection with the same file from Terraform (from a previous deployment perhaps), do `wg-quick down wg`, then `wg-quick up wg`. +This should set everything properly. 
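If you prefer not to copy the configuration out of `terraform show` by hand, the block between the `EOT` markers can be extracted automatically. A hedged sketch, assuming the default heredoc rendering of the `wg_config` output:

```bash
# Hypothetical helper: print only the lines between the EOT markers of
# `terraform show`, i.e. the Wireguard configuration itself.
extract_wg() {
  awk '/<<-?EOT/ { inside = 1; next } /^[[:space:]]*EOT/ { inside = 0 } inside'
}

# Illustration with a shortened sample of what `terraform show` prints:
sample='wg_config = <<-EOT
[Interface]
Address = 100.64.1.2/32
EOT'
wg_conf=$(printf '%s\n' "$sample" | extract_wg)
printf '%s\n' "$wg_conf"
# in practice: terraform show | extract_wg > /etc/wireguard/wg.conf
```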
+ +* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct: + * ``` + ping 10.1.3.2 + ``` + * ``` + ping 10.1.4.2 + ``` + +If you correctly receive the packets from the two VMs, you know that the VPN is properly set. + + + +# Create the MariaDB Database + +## Download MariaDB and Configure the Database + +* Download MariaDB's server and client on both VMs + * ``` + apt install mariadb-server mariadb-client -y + ``` +* Configure the MariaDB database + * ``` + nano /etc/mysql/mariadb.conf.d/50-server.cnf + ``` + * Do the following changes + * Add `#` in front of + * `bind-address = 127.0.0.1` + * Remove `#` in front of the following lines and replace `X` by `1` on the master VM and by `2` on the worker VM + ``` + #server-id = X + #log_bin = /var/log/mysql/mysql-bin.log + ``` + * Below the lines shown above add the following line: + ``` + binlog_do_db = nextcloud + ``` + +* Restart MariaDB + * ``` + systemctl restart mysql + ``` + +* Launch MariaDB + * ``` + mysql + ``` + +## Create User with Replication Grant + +* Do the following on both VMs + * ``` + CREATE USER 'repuser'@'%' IDENTIFIED BY 'password'; + GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ; + FLUSH PRIVILEGES; + show master status\G; + ``` + +## Verify the Access of the User +* Verify the access of the user + ``` + SELECT host FROM mysql.user WHERE User = 'repuser'; + ``` + * You want to see `%` in Host + +## Set the VMs to Accept the MariaDB Connection + +### TF Template Worker Server Data + +* Write the following in the worker VM + * ``` + CHANGE MASTER TO MASTER_HOST='10.1.3.2', + MASTER_USER='repuser', + MASTER_PASSWORD='password', + MASTER_LOG_FILE='mysql-bin.000001', + MASTER_LOG_POS=328; + ``` + * ``` + start slave; + ``` + * ``` + show slave status\G; + ``` +### TF Template Master Server Data + +* Write the following in the 
master VM + * ``` + CHANGE MASTER TO MASTER_HOST='10.1.4.2', + MASTER_USER='repuser', + MASTER_PASSWORD='password', + MASTER_LOG_FILE='mysql-bin.000001', + MASTER_LOG_POS=328; + ``` + * ``` + start slave; + ``` + * ``` + show slave status\G; + ``` + +## Set the Nextcloud User and Database + +We now set the Nextcloud database. You should choose your own username and password. The password should be the same for the master and worker VMs. + +* On the master VM, write: + ``` + CREATE DATABASE nextcloud; + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* On the worker VM, write: + ``` + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* To see the databases, write: + ``` + show databases; + ``` +* To see users, write: + ``` + select user from mysql.user; + ``` +* To exit MariaDB, write: + ``` + exit; + ``` + + + +# Install and Set GlusterFS + +We will now install and set [GlusterFS](https://www.gluster.org/), a free and open source software scalable network filesystem. 
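Before setting up GlusterFS, note that the `50-server.cnf` edits from the MariaDB section above can also be scripted. A hedged sketch with `sed`, assuming the stock Debian/Ubuntu file layout; set `SERVER_ID=1` on the master and `2` on the worker:

```bash
# Demo against a temporary copy so nothing is touched by accident;
# point CNF at /etc/mysql/mariadb.conf.d/50-server.cnf for the real edit.
SERVER_ID=1
CNF=$(mktemp)
cat > "$CNF" <<'EOF'
bind-address            = 127.0.0.1
#server-id              = 1
#log_bin                = /var/log/mysql/mysql-bin.log
EOF

sed -i \
  -e 's/^bind-address/#bind-address/' \
  -e "s/^#server-id.*/server-id = ${SERVER_ID}/" \
  -e 's|^#log_bin.*|log_bin = /var/log/mysql/mysql-bin.log|' \
  "$CNF"
echo "binlog_do_db = nextcloud" >> "$CNF"

cat "$CNF"
```

Restart MariaDB afterwards (`systemctl restart mysql`) for the change to take effect.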
+ +* Install GlusterFS on both the master and worker VMs + * ``` + echo | add-apt-repository ppa:gluster/glusterfs-7 && apt install glusterfs-server -y + ``` +* Start the GlusterFS service on both VMs + * ``` + systemctl start glusterd.service && systemctl enable glusterd.service + ``` +* Set the master to worker probe IP on the master VM: + * ``` + gluster peer probe 10.1.4.2 + ``` + +* See the peer status on the worker VM: + * ``` + gluster peer status + ``` + +* Set the master and worker IP address on the master VM: + * ``` + gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force + ``` + +* Start GlusterFS on the master VM: + * ``` + gluster volume start vol1 + ``` + +* Check the status on the worker VM: + * ``` + gluster volume status + ``` + +* Mount the server with the master IP on the master VM: + * ``` + mount -t glusterfs 10.1.3.2:/vol1 /var/www + ``` + +* See if the mount is there on the master VM: + * ``` + df -h + ``` + +* Mount the server with the worker IP on the worker VM: + * ``` + mount -t glusterfs 10.1.4.2:/vol1 /var/www + ``` + +* See if the mount is there on the worker VM: + * ``` + df -h + ``` + +We now make the mount persistent with the file `/etc/fstab` on both VMs. 
+ +* To prevent the mount from being aborted if the server reboots, write the following on both servers: + * ``` + nano /etc/fstab + ``` + +* Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2): + * ``` + 10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 + ``` + +* Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2): + * ``` + 10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 + ``` + + + +# Install PHP and Nextcloud + +* Install PHP and the PHP modules for Nextcloud on both the master and the worker: + * ``` + apt install php libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp zip -y + ``` + +We will now install Nextcloud. This is done only on the master VM. + +* On both the master and worker VMs, go to the folder `/var/www`: + * ``` + cd /var/www + ``` + +* To install the latest Nextcloud version, go to the Nextcloud homepage: + * See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/). + +* We now download Nextcloud on the master VM. + * ``` + wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip + ``` + +You only need to download it on the master VM: since the two VMs share the replicated GlusterFS volume mounted at `/var/www`, the files will also be accessible on the worker VM. + +* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress: + * ``` + apt install p7zip-full -y + ``` + * ``` + 7z x nextcloud-27.0.1.zip -o/var/www/ + ``` + +* After the extraction, check that the Nextcloud folder is there on the worker VM: + * ``` + ls + ``` + +* Then, we grant permissions to the folder. Do this on both the master VM and the worker VM. 
+ * ``` + chown www-data:www-data /var/www/nextcloud/ -R + ``` + + + +# Create a Subdomain with DuckDNS + +We want to create a subdomain to access Nextcloud over the public internet. + +For this guide, we use DuckDNS to create a subdomain for our Nextcloud deployment. Note that this can be done with other services. We use DuckDNS for simplicity. We invite users to explore other methods as they see fit. + +We create a public subdomain with DuckDNS. To set DuckDNS, you simply need to follow the steps on their website. Make sure to do this for both VMs. + +* First, sign in on the website: [https://www.duckdns.org/](https://www.duckdns.org/). +* Then go to [https://www.duckdns.org/install.jsp](https://www.duckdns.org/install.jsp) and follow the steps. For this guide, we use `linux cron` as the operating system. + +Hint: make sure to save the DuckDNS folder in the home directory. Write `cd ~` before creating the folder to be sure. + +## Worker File for DuckDNS + +In our current scenario, we want to make sure the master VM's IP address stays set for the DuckDNS subdomain as long as the master VM is online. To do so, we add an `if` statement in the worker VM's `duck.sh` file. The process is as follows: the worker VM will ping the master VM and, if it sees that the master VM is offline, it will run the command to update the DuckDNS subdomain with the worker VM's IP address. When the master VM goes back online, it will run the `duck.sh` file within 5 minutes and the DuckDNS subdomain will be updated with the master VM's IP address. + +The content of the `duck.sh` file for the worker VM is the following. Make sure to replace the line `echo ...` with the line provided by DuckDNS and to replace `mastervm_IPv4_address` with the master VM's IP address. + +``` +ping -c 2 mastervm_IPv4_address + +if [ $?
!= 0 ] +then + + echo url="https://www.duckdns.org/update?domains=exampledomain&token=a7c4d0ad-114e-40ef-ba1d-d217904a50f2&ip=" | curl -k -o ~/duckdns/duck.log -K - + +fi + +``` + +Note: When the master VM goes offline, after 5 minutes maximum DuckDNS will change the IP address from the master's to the worker's. Without clearing the DNS cache, your browser might have some difficulties connecting to the updated IP address when reaching the URL `subdomain.duckdns.org`. Thus you might need to [clear your DNS cache](https://blog.hubspot.com/website/flush-dns). You can also use the [Tor browser](https://www.torproject.org/) to connect to Nextcloud. If the IP address changes, you can simply leave the browser and reopen another session as the browser will automatically clear the DNS cache. + + + +# Set Apache + +We now want to tell Apache where the Nextcloud data is stored. To do this, we will create a file called `nextcloud.conf`. + +* On both the master and worker VMs, write the following: + * ``` + nano /etc/apache2/sites-available/nextcloud.conf + ``` + +The file should look like this, with your own subdomain instead of `subdomain`: + +``` +<VirtualHost *:80> + DocumentRoot "/var/www/nextcloud" + ServerName subdomain.duckdns.org + ServerAlias www.subdomain.duckdns.org + + ErrorLog ${APACHE_LOG_DIR}/nextcloud.error + CustomLog ${APACHE_LOG_DIR}/nextcloud.access combined + + <Directory "/var/www/nextcloud/"> + Require all granted + Options FollowSymlinks MultiViews + AllowOverride All + + <IfModule mod_dav.c> + Dav off + </IfModule> + + SetEnv HOME /var/www/nextcloud + SetEnv HTTP_HOME /var/www/nextcloud + Satisfy Any + + </Directory> +</VirtualHost> +``` + +* On both the master VM and the worker VM, write the following to enable the new virtual host file and the required Apache modules: + * ``` + a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl + ``` + +* Then, reload and restart Apache: + * ``` + systemctl reload apache2 && systemctl restart apache2 + ``` + + + +# Access Nextcloud on a Web Browser with the Subdomain + +We now access 
Nextcloud over the public Internet. + +* Go to a web browser and enter the subdomain created with DuckDNS (adjust with your own subdomain): + * ``` + subdomain.duckdns.org + ``` + +Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser. + +* Choose a name and a password. For this guide, we use the following: + * ``` + ncadmin + password1234 + ``` + +* Enter the Nextcloud Database information created with MariaDB and click install: + * ``` + Database user: ncuser + Database password: password1234 + Database name: nextcloud + Database location: localhost + ``` + +Nextcloud will then proceed to complete the installation. + +We use `localhost` as the database location. You do not need to specify MariaDB's port (`3306`), as it is already configured within the database. + +After the installation, you can now access Nextcloud. To provide further security, we want to enable HTTPS for the subdomain. + + + +# Enable HTTPS + +## Install Certbot + +We will now enable HTTPS. This needs to be done on the master VM as well as the worker VM. This section can be done simultaneously on the two VMs. But make sure to do the next section on setting the Certbot with only one VM at a time. + +To enable HTTPS, first install `letsencrypt` with `certbot`: + +Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/) + +* See if you have the latest version of snap: + * ``` + snap install core; snap refresh core + ``` + +* Remove certbot-auto: + * ``` + apt-get remove certbot + ``` + +* Install certbot: + * ``` + snap install --classic certbot + ``` + +* Ensure that certbot can be run: + * ``` + ln -s /snap/bin/certbot /usr/bin/certbot + ``` + +* Then, install certbot-apache: + * ``` + apt install python3-certbot-apache -y + ``` + +## Set the Certbot with the DNS Domain + +To avoid errors, set HTTPS with the master VM and power off the worker VM. 
+ +* To do so, you can simply comment out the `vms` section of the worker VM in the Terraform `main.tf` file and do `terraform apply` on the terminal. + * Put `/*` one line above the section, and `*/` one line below the section `vms`: +``` +/* + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +*/ +``` +* Put `#` in front of the appropriate lines, as shown below: +``` +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +#output "node1_zmachine2_ip" { +# value = grid_deployment.d2.vms[0].ip +#} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} +#output "ygg_ip2" { +# value = grid_deployment.d2.vms[0].ygg_ip +#} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +#output "ipv4_vm2" { +# value = grid_deployment.d2.vms[0].computedip +#} +``` + +* To add the HTTPS protection, write the following line on the master VM with your own subdomain: + * ``` + certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org + ``` + +* Once the HTTPS is set, you can reset the worker VM: + * To reset the worker VM, simply remove `/*`, `*/` and `#` on the main file and redo `terraform apply` on the terminal. + +Note: You then need to redo the same process with the worker VM. This time, make sure to set the master VM offline to avoid errors. This means that you should comment out the section `vms` of `vm1` instead of `vm2`. + +## Verify HTTPS Automatic Renewal + +* Make a dry run of the certbot renewal to verify that it is correctly set up. + * ``` + certbot renew --dry-run + ``` + +You now have HTTPS security on your Nextcloud instance. + +# Set a Firewall + +Finally, we want to set a firewall to monitor and control incoming and outgoing network traffic. 
To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). + +It should already be installed on your system. If it is not, install it with the following command: + +``` +apt install ufw +``` + +For our security rules, we want to allow SSH, HTTP and HTTPS. + +We thus add the following rules: + + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow HTTP (port 80) + * ``` + ufw allow http + ``` +* Allow HTTPS (port 443) + * ``` + ufw allow https + ``` + +* To enable the firewall, write the following: + * ``` + ufw enable + ``` + +* To see the current security rules, write the following: + * ``` + ufw status verbose + ``` + +You have now enabled the firewall with proper security rules for your Nextcloud deployment. + + + +# Conclusion + +If everything went smoothly, you should now be able to access Nextcloud over the Internet with HTTPS security from any computer or smartphone! + +The Nextcloud database is synced in real-time on two different 3nodes. When one 3node goes offline, the database is still synchronized on the other 3node. Once the powered-off 3node goes back online, the database is synced automatically with the node that was powered off. + +You can now [install Nextcloud](https://nextcloud.com/install/) on your local computer. You will then be able to "use the desktop clients to keep your files synchronized between your Nextcloud server and your desktop". You can also do regular backups with Nextcloud to ensure maximum resilience of your data. Check Nextcloud's [documentation](https://docs.nextcloud.com/server/latest/admin_manual/maintenance/backup.html) for more information on this. + +You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Wireguard, Terraform, MariaDB, GlusterFS, PHP and Nextcloud. 
Now you know how to deploy workloads on the Threefold Grid with an efficient architecture that ensures redundancy. This is just the beginning. The Threefold Grid has nearly limitless potential when it comes to deployments, workloads, architectures and server projects. Let's see where it goes from here! + +This Nextcloud deployment could be improved in many ways and other guides might be published in the future with enhanced functionalities. Stay tuned for more Threefold Guides. If you have ideas on how to improve this guide, please let us know. We learn best when sharing knowledge. + + + +# Acknowledgements and References + +A big thank you to [Scott Yeager](https://github.com/scottyeager) for his help on brainstorming, troubleshooting and creating this tutorial. This guide wouldn't have been properly done without his time and dedication. This really is a team effort! + +The main reference for this guide is this [amazing video](https://youtu.be/ARsqxUw1ONc) by NETVN82. Many steps were modified or added to make this suitable for Wireguard and the Threefold Grid. Other configurations are possible. We invite you to explore the possibilities offered by the Threefold Grid! + +This guide has been inspired by Weynand Kuijpers' [great tutorial](https://youtu.be/DIhfSRKAKHw) on how to deploy Nextcloud with Terraform. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_single.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_single.md new file mode 100644 index 0000000..5ad8116 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_single.md @@ -0,0 +1,594 @@ +

Nextcloud Single Deployment

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps](#main-steps) +- [Prerequisites](#prerequisites) +- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer) +- [Set the Full VM](#set-the-full-vm) + - [Overview](#overview) + - [Create the Terraform Files](#create-the-terraform-files) + - [Deploy the Full VM with Terraform](#deploy-the-full-vm-with-terraform) + - [SSH into the 3Node](#ssh-into-the-3node) + - [Prepare the Full VM](#prepare-the-full-vm) +- [Create the MariaDB Database](#create-the-mariadb-database) + - [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database) + - [Set the Nextcloud User and Database](#set-the-nextcloud-user-and-database) +- [Install PHP and Nextcloud](#install-php-and-nextcloud) +- [Create a Subdomain with DuckDNS](#create-a-subdomain-with-duckdns) +- [Set Apache](#set-apache) +- [Access Nextcloud on a Web Browser](#access-nextcloud-on-a-web-browser) +- [Enable HTTPS](#enable-https) + - [Install Certbot](#install-certbot) + - [Set the Certbot with the DNS Domain](#set-the-certbot-with-the-dns-domain) + - [Verify HTTPS Automatic Renewal](#verify-https-automatic-renewal) +- [Set a Firewall](#set-a-firewall) +- [Conclusion](#conclusion) +- [Acknowledgements and References](#acknowledgements-and-references) + +*** + +# Introduction + +In this Threefold Guide, we deploy a [Nextcloud](https://nextcloud.com/) instance on a full VM running on the [Threefold Grid](https://threefold.io/). + +We will learn how to deploy a full virtual machine (Ubuntu 22.04) with [Terraform](https://www.terraform.io/). We will install and deploy Nextcloud. We will add a DDNS (dynamic DNS) domain to the Nextcloud deployment. It will then be possible to connect to the Nextcloud instance over public internet. Nextcloud will be available over your computer and even your smart phone! We will also set HTTPS for the DDNS domain in order to make the Nextcloud instance as secure as possible. 
You are free to explore different DDNS options. In this guide, we will be using [DuckDNS](https://www.duckdns.org/) for simplicity. + +As always, if you have questions concerning this guide, you can write a post on the [Threefold Forum](http://forum.threefold.io/). + +Let's go! + + + +# Main Steps + +This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out! + +To get an overview of the whole process, we present the main steps: + +* Download the dependencies +* Find a 3Node on the TF Grid +* Deploy and set the VM with Terraform +* Install PHP and Nextcloud +* Create a subdomain with DuckDNS +* Set Apache +* Access Nextcloud +* Add HTTPS protection +* Set a firewall + + + +# Prerequisites + +- [Install Terraform](../terraform_install.md) + +You need to properly download and install Terraform on your local computer. Simply follow the documentation depending on your operating system (Linux, macOS or Windows). + + + +# Find a 3Node with the ThreeFold Explorer + +We first need to decide which 3Node we will be deploying our workload on. + +We thus start by finding a 3Node with sufficient resources. For this Nextcloud guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for a 3Node with a public IPv4 address.
+ +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Node. + * `Free SRU (GB)`: 50 + * `Free MRU (GB)`: 2 + * `Total CRU (Cores)`: 1 + * `Free Public IP`: 2 + * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found a 3Node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +# Set the Full VM + +## Overview + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workload. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is. + +On your local computer, create a new folder named `terraform` and a subfolder called `deployment-single-nextcloud`. 
In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variable files to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on. + +## Create the Terraform Files + +Open the terminal and follow these steps. + +* Go to the home folder + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-single-nextcloud`: + * ``` + mkdir -p terraform/deployment-single-nextcloud + ``` + * ``` + cd terraform/deployment-single-nextcloud + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file. + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +output "ygg_ip1" { + value = 
grid_deployment.d1.vms[0].ygg_ip +} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +``` + +In this file, we name the full VM as `vm1`. + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file. + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid1 = "..." + + size = "50" + cpu = "1" + memory = "2048" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node. Simply replace the three dots by the appropriate content. Obviously, you can decide to set more storage (size). The memory and CPU should be sufficient for the Nextcloud deployment with the above numbers. + +## Deploy the Full VM with Terraform + +We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-single-nextcloud` with the main and variables files. + +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy the full VM: + * ``` + terraform apply + ``` + +After deployments, take note of the 3Node's IPv4 address. You will need this address to SSH into the 3Node. 
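Note that the `ipv4_vm1` output comes from the provider's `computedip` value, which typically includes a CIDR suffix (e.g. `185.206.122.31/24`). A minimal sketch of stripping that suffix before using the address with `ssh` (the address below is a made-up example; use your own `terraform output ipv4_vm1`):

```shell
# Strip the CIDR suffix from the computedip output before using it with ssh.
# "185.206.122.31/24" is a made-up example address.
ipv4="185.206.122.31/24"
vm_ip="${ipv4%%/*}"   # keep everything before the first "/"
echo "$vm_ip"
```

You can then run `ssh root@$vm_ip` with your own address.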
+ +## SSH into the 3Node + +* To [SSH into the 3Node](../../getstarted/ssh_guide/ssh_guide.md), write the following: + * ``` + ssh root@VM_IPv4_Address + ``` + +## Prepare the Full VM + +* Update and upgrade the system, and install Apache + * ``` + apt update && apt upgrade && apt install apache2 + ``` +* Once the upgrade is done, reboot the system + * ``` + reboot + ``` +* Reconnect to the VM + + + +# Create the MariaDB Database + +## Download MariaDB and Configure the Database + +* Download MariaDB's server and client + * ``` + apt install mariadb-server mariadb-client + ``` +* Configure the MariaDB database + * ``` + nano /etc/mysql/mariadb.conf.d/50-server.cnf + ``` + * Make the following changes + * Add `#` in front of + * `bind-address = 127.0.0.1` + * Remove `#` in front of the following lines and make sure the variable `server-id` is set to `1` + ``` + #server-id = 1 + #log_bin = /var/log/mysql/mysql-bin.log + ``` + * Below the lines shown above add the following line: + ``` + binlog_do_db = nextcloud + ``` + +* Restart MariaDB + * ``` + systemctl restart mysql + ``` + +* Launch MariaDB + * ``` + mysql + ``` + +## Set the Nextcloud User and Database + +We now set the Nextcloud database. You should choose your own username and password. + +* On the full VM, write: + ``` + CREATE DATABASE nextcloud; + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* To see the databases, write: + ``` + show databases; + ``` +* To see users, write: + ``` + select user from mysql.user; + ``` +* To exit MariaDB, write: + ``` + exit; + ``` + + +# Install PHP and Nextcloud + +* Install PHP and the PHP modules for Nextcloud on the full VM: + * ``` + apt install php zip libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp + ``` + +We will now install Nextcloud. 
+ +* On the full VM, go to the folder `/var/www`: + * ``` + cd /var/www + ``` + +* To install the latest Nextcloud version, go to the Nextcloud homepage: + * See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/). + +* We now download Nextcloud on the full VM. + * ``` + wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip + ``` + +* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress: + * ``` + apt install p7zip-full + ``` + * ``` + 7z x nextcloud-27.0.1.zip -o/var/www/ + ``` +* Then, we grant permissions to the folder. + * ``` + chown www-data:www-data /var/www/nextcloud/ -R + ``` + + + +# Create a Subdomain with DuckDNS + +We want to create a subdomain to access Nextcloud over the public internet. + +For this guide, we use DuckDNS to create a subdomain for our Nextcloud deployment. Note that this can be done with other services. We use DuckDNS for simplicity. We invite users to explore other methods as they see fit. + +We create a public subdomain with DuckDNS. To set up DuckDNS, you simply need to follow the steps on their website. + +* First, sign in on the website: [https://www.duckdns.org/](https://www.duckdns.org/). +* Then go to [https://www.duckdns.org/install.jsp](https://www.duckdns.org/install.jsp) and follow the steps. For this guide, we use `linux cron` as the operating system. + +Hint: make sure to save the DuckDNS folder in the home directory. Write `cd ~` before creating the folder to be sure. + +# Set Apache + +We now want to tell Apache where to find the Nextcloud data. To do this, we will create a file called `nextcloud.conf`. 
+ +* On the full VM, write the following: + * ``` + nano /etc/apache2/sites-available/nextcloud.conf + ``` + +The file should look like this, with your own subdomain instead of `subdomain`: + +``` +<VirtualHost *:80> + DocumentRoot "/var/www/nextcloud" + ServerName subdomain.duckdns.org + ServerAlias www.subdomain.duckdns.org + + ErrorLog ${APACHE_LOG_DIR}/nextcloud.error + CustomLog ${APACHE_LOG_DIR}/nextcloud.access combined + + <Directory /var/www/nextcloud/> + Require all granted + Options FollowSymlinks MultiViews + AllowOverride All + + <IfModule mod_dav.c> + Dav off + </IfModule> + + SetEnv HOME /var/www/nextcloud + SetEnv HTTP_HOME /var/www/nextcloud + Satisfy Any + + </Directory> + +</VirtualHost> +``` + +* On the full VM, write the following to set the Nextcloud database with Apache and to enable the new virtual host file: + * ``` + a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl + ``` + +* Then, reload and restart Apache: + * ``` + systemctl reload apache2 && systemctl restart apache2 + ``` + + + +# Access Nextcloud on a Web Browser + +We now access Nextcloud over the public Internet. + +* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain): + * ``` + subdomain.duckdns.org + ``` + +Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser. + +* Choose a name and a password. For this guide, we use the following: + * ``` + ncadmin + password1234 + ``` + +* Enter the Nextcloud Database information created with MariaDB and click install: + * ``` + Database user: ncuser + Database password: password1234 + Database name: nextcloud + Database location: localhost + ``` + +Nextcloud will then proceed to complete the installation. + +We use `localhost` as the database location. You do not need to specify MariaDB's port (`3306`), as it is already configured within the database. + +After the installation, you can now access Nextcloud. To provide further security, we want to enable HTTPS for the subdomain. 
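A quick note before moving on: `password1234` above is only a placeholder. One simple way (among many) to generate a stronger admin and database password is with `openssl`, which ships with Ubuntu 22.04:

```shell
# Generate a random 24-character password for the Nextcloud admin
# and database user. Assumes openssl is installed (default on Ubuntu 22.04).
admin_pass=$(openssl rand -base64 18)   # 18 random bytes -> 24 base64 characters
echo "$admin_pass"
```

Store the generated password in a password manager, as you will need it to log in to Nextcloud.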
+ + + +# Enable HTTPS + +## Install Certbot + +We will now enable HTTPS on the full VM. + +To enable HTTPS, first install `letsencrypt` with `certbot`: + +Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/) + +* See if you have the latest version of snap: + * ``` + snap install core; snap refresh core + ``` + +* Remove certbot-auto: + * ``` + apt-get remove certbot + ``` + +* Install certbot: + * ``` + snap install --classic certbot + ``` + +* Ensure that certbot can be run: + * ``` + ln -s /snap/bin/certbot /usr/bin/certbot + ``` + +* Then, install certbot-apache: + * ``` + apt install python3-certbot-apache + ``` + +## Set the Certbot with the DNS Domain + +We now set the certbot with the DNS domain. + +* To add the HTTPS protection, write the following line on the full VM with your own subdomain: + * ``` + certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org + ``` + +## Verify HTTPS Automatic Renewal + +* Make a dry run of the certbot renewal to verify that it is correctly set up. + * ``` + certbot renew --dry-run + ``` + +You now have HTTPS security on your Nextcloud instance. + +# Set a Firewall + +Finally, we want to set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). + +It should already be installed on your system. If it is not, install it with the following command: + +``` +apt install ufw +``` + +For our security rules, we want to allow SSH, HTTP and HTTPS. 
+ +We thus add the following rules: + + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow HTTP (port 80) + * ``` + ufw allow http + ``` +* Allow HTTPS (port 443) + * ``` + ufw allow https + ``` + +* To enable the firewall, write the following: + * ``` + ufw enable + ``` + +* To see the current security rules, write the following: + * ``` + ufw status verbose + ``` + +You have now enabled the firewall with proper security rules for your Nextcloud deployment. + + + +# Conclusion + +If everything went smoothly, you should now be able to access Nextcloud over the Internet with HTTPS security from any computer or smartphone! + +You can now [install Nextcloud](https://nextcloud.com/install/) on your local computer. You will then be able to "use the desktop clients to keep your files synchronized between your Nextcloud server and your desktop". You can also do regular backups with Nextcloud to ensure maximum resilience of your data. Check Nextcloud's [documentation](https://docs.nextcloud.com/server/latest/admin_manual/maintenance/backup.html) for more information on this. + +You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Terraform, MariaDB, PHP and Nextcloud. + +This Nextcloud deployment could be improved in many ways and other guides might be published in the future with enhanced functionalities. Stay tuned for more Threefold Guides. If you have ideas on how to improve this guide, please let us know. We learn best when sharing knowledge. + + + +# Acknowledgements and References + +A big thank you to [Scott Yeager](https://github.com/scottyeager) for his help on brainstorming, troubleshooting and creating this tutorial. This guide wouldn't have been properly done without his time and dedication. This really is a team effort! + +This guide has been inspired by Weynand Kuijpers' [great tutorial](https://youtu.be/DIhfSRKAKHw) on how to deploy Nextcloud with Terraform. 
+ +This single Nextcloud instance guide is an adaptation from the [Nextcloud Redundant Deployment guide](terraform_nextcloud_redundant.md). The inspiration to make a single instance deployment guide comes from [RobertL](https://forum.threefold.io/t/threefold-guide-nextcloud-redundant-deployment-on-two-3node-servers/3915/3) on the ThreeFold Forum. + +Thanks to everyone who helped shape this guide. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_toc.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_toc.md new file mode 100644 index 0000000..4152838 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_toc.md @@ -0,0 +1,10 @@ +

Nextcloud Deployments

+ +We present here different Nextcloud deployments. While this section is focused on Nextcloud, these deployment architectures can be used as templates for other kinds of deployments on the TFGrid. + +

Table of Contents

+ +- [Nextcloud All-in-One Deployment](./terraform_nextcloud_aio.md) +- [Nextcloud Single Deployment](./terraform_nextcloud_single.md) +- [Nextcloud Redundant Deployment](./terraform_nextcloud_redundant.md) +- [Nextcloud 2-Node VPN Deployment](./terraform_nextcloud_vpn.md) \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md new file mode 100644 index 0000000..4045078 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md @@ -0,0 +1,343 @@ +

Nextcloud 2-Node VPN Deployment

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [2-Node Terraform Deployment](#2-node-terraform-deployment) + - [Create the Terraform Files](#create-the-terraform-files) + - [Variables File](#variables-file) + - [Main File](#main-file) + - [Deploy the 2-Node VPN](#deploy-the-2-node-vpn) +- [Nextcloud Setup](#nextcloud-setup) +- [Nextcloud VM Prerequisites](#nextcloud-vm-prerequisites) +- [Prepare the VMs for the Rsync Daily Backup](#prepare-the-vms-for-the-rsync-daily-backup) +- [Create a Cron Job for the Rsync Daily Backup](#create-a-cron-job-for-the-rsync-daily-backup) +- [Future Projects](#future-projects) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +This guide is a proof-of-concept showing that, with two VMs connected over a WireGuard VPN, we can run a Nextcloud AIO instance on the TFGrid on the first VM, set a daily backup and update on it with Borgbackup, and keep a second daily backup of that backup on the second VM. This means that we have 2 virtual machines: one VM with the Nextcloud instance and the Nextcloud backup, and another VM with a backup of the Nextcloud backup. + +This architecture leads to a higher redundancy level, since we can afford to lose one of the two VMs and still be able to retrieve the Nextcloud database. Note that to achieve this, we are creating a virtual private network (VPN) with WireGuard. This will connect the two VMs and allow for file transfers. While there are many ways to proceed, for this guide we will be using [ssh-keygen](https://linux.die.net/man/1/ssh-keygen), [Rsync](https://linux.die.net/man/1/rsync) and [Cron](https://linux.die.net/man/1/crontab). + +Note that, in order to reduce the deployment cost, we set the minimum CPU and memory requirements for the Backup VM. We do not need high CPU and memory for this VM since it is only used for storage. + +Note that this guide also makes use of the ThreeFold gateway. 
For this reason, this deployment can be set on any two 3Nodes on the TFGrid, i.e. there is no need for IPv4 on the 2 nodes we are deploying on, as long as we set a gateway on a gateway node. + +For now, let's see how to achieve this redundant deployment with Rsync! + +# 2-Node Terraform Deployment + +For this guide, we are deploying a Nextcloud AIO instance alongside a Backup VM, enabling daily backups of both VMs. The two VMs are connected by a WireGuard VPN. The deployment will be using the [Nextcloud FList](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud) available in the **tf-images** ThreeFold Tech repository. + +## Create the Terraform Files + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file contains the environment variables (e.g. **var.size** for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the main.tf as is. + +For this example, we will be deploying the Nextcloud instance with a ThreeFold gateway and a gateway domain. Other configurations are possible. + +### Variables File + +* Copy the following content and save the file under the name `credentials.auto.tfvars`: + +``` +mnemonics = "..." +SSH_KEY = "..." +network = "main" + +size_vm1 = "50" +cpu_vm1 = "2" +memory_vm1 = "4096" + +size_vm2 = "50" +cpu_vm2 = "1" +memory_vm2 = "512" + +gateway_id = "50" +vm1_id = "5453" +vm2_id = "12" + +deployment_name = "nextcloudgatewayvpn" +nextcloud_flist = "https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist" +``` + +Make sure to add your own seed phrase and SSH public key. Simply replace the three dots with the appropriate content. 
Note that you can deploy on a different node than node 5453 for the **vm1** node. If you want to deploy on another node than node 5453 for the **gateway** node, make sure that you choose a gateway node. To find a gateway node, go on the [ThreeFold Dashboard](https://dashboard.grid.tf/) Nodes section of the Explorer and select **Gateways (Only)**. + +Obviously, you can decide to increase or modify the quantity for the CPU, memory and size variables. Note that we set the minimum CPU and memory parameters for the Backup VM (**vm2**). This will reduce the cost of the deployment. Since the Backup VM is only used for storage, we don't need to set the CPU and memory higher. + +### Main File + +* Copy the following content and save the file under the name `main.tf`: + +``` +variable "mnemonics" { + type = string + default = "your mnemonics" +} + +variable "network" { + type = string + default = "main" +} + +variable "SSH_KEY" { + type = string + default = "your SSH pub key" +} + +variable "deployment_name" { + type = string +} + +variable "size_vm1" { + type = string +} + +variable "cpu_vm1" { + type = string +} + +variable "memory_vm1" { + type = string +} + +variable "size_vm2" { + type = string +} + +variable "cpu_vm2" { + type = string +} + +variable "memory_vm2" { + type = string +} + +variable "nextcloud_flist" { + type = string +} + +variable "gateway_id" { + type = string +} + +variable "vm1_id" { + type = string +} + +variable "vm2_id" { + type = string +} + + +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { + mnemonics = var.mnemonics + network = var.network +} + +data "grid_gateway_domain" "domain" { + node = var.gateway_id + name = var.deployment_name +} + +resource "grid_network" "net" { + nodes = [var.gateway_id, var.vm1_id, var.vm2_id] + ip_range = "10.1.0.0/16" + name = "network" + description = "My network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + node = var.vm1_id + 
network_name = grid_network.net.name + + disks { + name = "data" + size = var.size_vm1 + } + + vms { + name = "vm1" + flist = var.nextcloud_flist + cpu = var.cpu_vm1 + memory = var.memory_vm1 + rootfs_size = 15000 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + GATEWAY = "true" + IPV4 = "false" + NEXTCLOUD_DOMAIN = data.grid_gateway_domain.domain.fqdn + } + mounts { + disk_name = "data" + mount_point = "/mnt/data" + } + } +} + +resource "grid_deployment" "d2" { + disks { + name = "disk2" + size = var.size_vm2 + } + node = var.vm2_id + network_name = grid_network.net.name + + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu_vm2 + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory_vm2 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + planetary = true + } +} + +resource "grid_name_proxy" "p1" { + node = var.gateway_id + name = data.grid_gateway_domain.domain.name + backends = [format("http://%s:80", grid_deployment.d1.vms[0].ip)] + network = grid_network.net.name + tls_passthrough = false +} + +output "wg_config" { + value = grid_network.net.access_wg_config +} + +output "vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +output "vm2_ip" { + value = grid_deployment.d2.vms[0].ip +} + + +output "fqdn" { + value = data.grid_gateway_domain.domain.fqdn +} +``` + +## Deploy the 2-Node VPN + +We now deploy the 2-node VPN with Terraform. Make sure that you are in the correct folder containing the main and variables files. 
+ +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy Nextcloud: + * ``` + terraform apply + ``` + +Note that, at any moment, if you want to see the information on your Terraform deployment, write the following: + * ``` + terraform show + ``` + +# Nextcloud Setup + +* Access Nextcloud Setup + * Once you've deployed Nextcloud, you can access the Nextcloud Setup page by pasting on a browser the URL displayed on the line `fqdn = "..."` of the `terraform show` output. For more information on this, [read this documentation](../../../dashboard/solutions/nextcloud.md#nextcloud-setup). +* Create a backup and set a daily backup and update + * Make sure to create a backup with `/mnt/backup` as the mount point, and set a daily update and backup for your Nextcloud VM. For more information, [read this documentation](../../../dashboard/solutions/nextcloud.md#backups-and-updates). + +> Note: By default, the daily Borgbackup is set at 4:00 UTC. If you change this parameter, make sure to adjust the moment the [Rsync backup](#create-a-cron-job-for-the-rsync-daily-backup) is done. + +# Nextcloud VM Prerequisites + +We need to install a few things on the Nextcloud VM before going further. 
+ +* Update the Nextcloud VM + * ``` + apt update + ``` +* Install ping on the Nextcloud VM if you want to test the VPN connection (Optional) + * ``` + apt install iputils-ping -y + ``` +* Install Rsync on the Nextcloud VM + * ``` + apt install rsync + ``` +* Install nano on the Nextcloud VM + * ``` + apt install nano + ``` +* Install Cron on the Nextcloud VM + * ``` + apt install cron + ``` + +# Prepare the VMs for the Rsync Daily Backup + +* Test the VPN (Optional) with [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) + * ``` + ping + ``` +* Generate an SSH key pair on the Backup VM + * ``` + ssh-keygen + ``` +* Take note of the public key in the Backup VM + * ``` + cat ~/.ssh/id_rsa.pub + ``` +* Add the public key of the Backup VM in the Nextcloud VM + * ``` + nano ~/.ssh/authorized_keys + ``` + +> Make sure to put the Backup VM SSH public key before the public key already present in the file **authorized_keys** of the Nextcloud VM. + +# Create a Cron Job for the Rsync Daily Backup + +We now set a daily cron job that will make a backup between the Nextcloud VM and the Backup VM using Rsync. + +* Open the crontab on the Backup VM + * ``` + crontab -e + ``` +* Add the cron job at the end of the file + * ``` + 0 8 * * * rsync -avz --no-perms -O --progress --delete --log-file=/root/rsync_storage.log root@10.1.3.2:/mnt/backup/ /mnt/backup/ + ``` + +> Note: By default, the Nextcloud automatic backup is set at 4:00 UTC. For this reason, we set the Rsync daily backup at 8:00 UTC. + +> Note: To set Rsync with a script, [read this documentation](../../computer_it_basics/file_transfer.md#automate-backup-with-rsync). + +# Future Projects + +This concept can be expanded in many directions. We can write a script to facilitate the process, set a script directly in an FList for minimal user configuration, or explore MariaDB and GlusterFS instead of Rsync. 
+
+As a generic deployment, we can develop a weblet that makes a daily backup of any other ThreeFold Playground weblet.
+
+# Questions and Feedback
+
+We invite others to propose ideas and code if they feel inspired!
+
+If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nomad.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nomad.md
new file mode 100644
index 0000000..debc309
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_nomad.md
@@ -0,0 +1,359 @@
+

# Deploy a Nomad Cluster

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [What is Nomad?](#what-is-nomad)
+- [Prerequisites](#prerequisites)
+- [Create the Terraform Files](#create-the-terraform-files)
+  - [Main File](#main-file)
+  - [Credentials File](#credentials-file)
+- [Deploy the Nomad Cluster](#deploy-the-nomad-cluster)
+- [SSH into the Client and Server Nodes](#ssh-into-the-client-and-server-nodes)
+  - [SSH with the Planetary Network](#ssh-with-the-planetary-network)
+  - [SSH with WireGuard](#ssh-with-wireguard)
+- [Destroy the Nomad Deployment](#destroy-the-nomad-deployment)
+- [Conclusion](#conclusion)
+
+***
+
+## Introduction
+
+In this ThreeFold Guide, we will learn how to deploy a Nomad cluster on the TFGrid with Terraform. We cover a basic Nomad cluster with three server nodes and two client nodes. After completing this guide, you will have sufficient knowledge to build your own personalized Nomad cluster.
+
+
+
+## What is Nomad?
+
+[Nomad](https://www.nomadproject.io/) is a simple and flexible scheduler and orchestrator to deploy and manage containers and non-containerized applications across on-premises and cloud environments at scale.
+
+In the dynamic world of cloud computing, managing and orchestrating workloads across diverse environments can be a daunting task. Nomad emerges as a powerful solution, simplifying and streamlining the deployment, scheduling, and management of applications.
+
+Nomad's elegance lies in its lightweight architecture and ease of use. It operates as a single binary, minimizing resource consumption and complexity. Its intuitive user interface and straightforward configuration make it accessible to a wide range of users, from novices to experienced DevOps engineers.
+
+Nomad's versatility extends beyond its user-friendliness. It seamlessly handles a wide array of workloads, including legacy applications, microservices, and batch jobs. Its adaptability extends to diverse environments, effortlessly orchestrating workloads across on-premises infrastructure and public clouds.
You could say it's Kubernetes for humans!
+
+
+
+## Prerequisites
+
+* [Install Terraform](https://developer.hashicorp.com/terraform/downloads)
+* [Install WireGuard](https://www.wireguard.com/install/)
+
+You need to properly download and install Terraform and WireGuard on your local computer. Simply follow the documentation depending on your operating system (Linux, macOS and Windows).
+
+If you are new to Terraform, feel free to read this basic [Terraform Full VM guide](../terraform_full_vm.md) to get you started.
+
+
+
+## Create the Terraform Files
+
+For this guide, we use two files to deploy with Terraform: a main file and a variables file. The variables file contains the environment variables and the main file contains the necessary information to deploy your workload.
+
+To facilitate the deployment, only the environment variables file needs to be adjusted. The file `main.tf` uses the environment variables from the variables file (e.g. `var.cpu` for the CPU parameter) and thus you do not need to change this file.
+
+Of course, you can adjust the two files based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the main file as is.
+
+Also note that this deployment uses both the Planetary network and WireGuard.
+
+### Main File
+
+We start by creating the main file for our Nomad cluster.
+ +* Create a directory for your Terraform Nomad cluster + * ``` + mkdir nomad + ``` + * ``` + cd nomad + ``` +* Create the `main.tf` file + * ``` + nano main.tf + ``` + +* Copy the following `main.tf` template and save the file + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "nomadcluster" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid] + ip_range = "10.1.0.0/16" + description = "nomad network" + add_wg_access = true +} +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid + network_name = grid_network.net1.name + vms { + name = "server1" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + ip = "10.1.3.2" + env_vars = { + SSH_KEY = var.SSH_KEY + } + planetary = true + } + vms { + name = "server2" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + FIRST_SERVER_IP = "10.1.3.2" + } + planetary = true + } + vms { + name = "server3" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + 
FIRST_SERVER_IP = "10.1.3.2" + } + planetary = true + } + vms { + name = "client1" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-client-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + FIRST_SERVER_IP = "10.1.3.2" + } + planetary = true + } + vms { + name = "client2" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-client-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + FIRST_SERVER_IP = "10.1.3.2" + } + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} + +output "server1_wg_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "server2_wg_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "server3_wg_ip" { + value = grid_deployment.d1.vms[2].ip +} +output "client1_wg_ip" { + value = grid_deployment.d1.vms[3].ip +} +output "client2_wg_ip" { + value = grid_deployment.d1.vms[4].ip +} + +output "server1_planetary_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} +output "server2_planetary_ip" { + value = grid_deployment.d1.vms[1].ygg_ip +} +output "server3_planetary_ip" { + value = grid_deployment.d1.vms[2].ygg_ip +} +output "client1_planetary_ip" { + value = grid_deployment.d1.vms[3].ygg_ip +} +output "client2_planetary_ip" { + value = grid_deployment.d1.vms[4].ygg_ip +} +``` + +### Credentials File + +We create a credentials file that will contain the environment variables. This file should be in the same directory as the main file. + +* Create the `credentials.auto.tfvars` file + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid = "..." 
+
+    size = "50"
+    cpu = "2"
+    memory = "1024"
+    ```
+
+Make sure to replace the three dots with your own information for `mnemonics` and `SSH_KEY`. You will also need to find a suitable node for your deployment and set its node ID (`tfnodeid`). Feel free to adjust the parameters `size`, `cpu` and `memory` if needed.
+
+
+
+## Deploy the Nomad Cluster
+
+We now deploy the Nomad cluster with Terraform. Make sure that you are in the directory containing the `main.tf` file.
+
+* Initialize Terraform
+  * ```
+    terraform init
+    ```
+
+* Apply Terraform to deploy the Nomad cluster
+  * ```
+    terraform apply
+    ```
+
+
+
+## SSH into the Client and Server Nodes
+
+You can now SSH into the client and server nodes using both the Planetary network and WireGuard.
+
+Note that the IP addresses will be shown under `Outputs` after running the command `terraform apply`, with `planetary_ip` for the Planetary network and `wg_ip` for WireGuard.
+
+### SSH with the Planetary Network
+
+* To [SSH with the Planetary network](../../getstarted/ssh_guide/ssh_openssh.md), write the following with the proper IP address
+  * ```
+    ssh root@planetary_ip
+    ```
+
+You now have SSH access over the Planetary network to the client and server nodes of your Nomad cluster.
+
+### SSH with WireGuard
+
+To SSH with WireGuard, we first need to set the proper WireGuard configurations.
+
+* Create a file named `wg.conf` in the directory `/etc/wireguard`
+  * ```
+    nano /etc/wireguard/wg.conf
+    ```
+
+* Paste the content provided by the Terraform deployment in the file `wg.conf` and save it.
+  * Note that you can use `terraform show` to see the Terraform output. The WireGuard configuration (`wg_config`) stands between the two `EOT` markers.
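If you prefer not to copy the configuration by hand, the text between the `EOT` markers can be extracted programmatically. This is a sketch only: it assumes the `wg_config` output name from the main file above and the heredoc formatting of `terraform show`, and the sample variable below stands in for the real command output (the key and addresses are placeholders).

```bash
# Sketch: pull the text between the EOT markers out of captured `terraform show`
# output. In practice, replace the sample variable with: sample=$(terraform show)
# All values below are placeholders, not real keys or addresses.
sample='wg_config = <<EOT
[Interface]
Address = 100.64.0.2/32
PrivateKey = PLACEHOLDER_KEY

[Peer]
AllowedIPs = 10.1.0.0/16
EOT'
printf '%s\n' "$sample" | awk '/<<EOT/{f=1; next} /^EOT/{f=0} f' > wg.conf
cat wg.conf   # review the result before moving it to /etc/wireguard/wg.conf
```

On recent Terraform versions, `terraform output -raw wg_config` can produce the same content directly, without parsing the `terraform show` output.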
+
+* Start WireGuard on your local computer
+  * ```
+    wg-quick up wg
+    ```
+* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the WireGuard IP of a node to make sure the connection is correct
+  * ```
+    ping wg_ip
+    ```
+
+We are now ready to SSH into the client and server nodes with WireGuard.
+
+* To SSH with WireGuard, write the following with the proper IP address:
+  * ```
+    ssh root@wg_ip
+    ```
+
+You now have SSH access over WireGuard to the client and server nodes of your Nomad cluster. For more information on connecting with WireGuard, read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
+
+
+
+## Destroy the Nomad Deployment
+
+If you want to destroy the Nomad deployment, write the following in the terminal:
+
+* ```
+  terraform destroy
+  ```
+  * Then write `yes` to confirm.
+
+Make sure that you are in the corresponding Terraform folder when writing this command.
+
+
+## Conclusion
+
+You now have the basic knowledge to deploy a Nomad cluster on the TFGrid. Feel free to explore the many possibilities that come with Nomad.
+
+You can now use a Nomad cluster to deploy your workloads. For more information on this, read this documentation on [how to deploy a Redis workload on the Nomad cluster](https://developer.hashicorp.com/nomad/tutorials/get-started/gs-deploy-job).
+
+If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
\ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_provider.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_provider.md new file mode 100644 index 0000000..eafe66c --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_provider.md @@ -0,0 +1,53 @@ +

# Terraform Provider

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [Example](#example)
+- [Environment Variables](#environment-variables)
+- [Remarks](#remarks)
+
+***
+
+## Introduction
+
+We present the basics of the Terraform Provider.
+
+## Example
+
+``` terraform
+terraform {
+  required_providers {
+    grid = {
+      source = "threefoldtech/grid"
+    }
+  }
+}
+provider "grid" {
+  mnemonics = "FROM THE CREATE TWIN STEP"
+  network = grid network, one of: dev test qa main
+  key_type = key type registered on substrate (ed25519 or sr25519)
+  relay_url = example: "wss://relay.dev.grid.tf"
+  rmb_timeout = timeout duration in seconds for rmb calls
+  substrate_url = substrate url, example: "wss://tfchain.dev.grid.tf/ws"
+}
+```
+
+## Environment Variables
+
+The provider inputs should also be recognizable as environment variables:
+
+- `MNEMONICS`
+- `NETWORK`
+- `SUBSTRATE_URL`
+- `KEY_TYPE`
+- `RELAY_URL`
+- `RMB_TIMEOUT`
+
+The `*_URL` variables can be used to override the default URLs associated with the specified network.
+
+## Remarks
+
+- The Grid Terraform provider is hosted on the Terraform Registry [here](https://registry.terraform.io/providers/threefoldtech/grid/latest/docs?pollNotifications=true)
+- All provider input variables and their descriptions can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/index.md)
+- Capitalized environment variables can be used instead of writing them in the provider (e.g. `MNEMONICS`)
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_provisioners.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_provisioners.md
new file mode 100644
index 0000000..3bae3ea
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_provisioners.md
@@ -0,0 +1,119 @@
+

# Terraform and Provisioner

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) +- [Params docs](#params-docs) + - [Requirements](#requirements) + - [Connection Block](#connection-block) + - [Provisioner Block](#provisioner-block) + - [More Info](#more-info) + +*** + +## Introduction + +In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/external_provisioner/remote-exec_hello-world/main.tf), we will see how to deploy a VM and apply provisioner commands on it on the TFGrid. + +## Example + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +locals { + name = "myvm" +} + +resource "grid_network" "net1" { + nodes = [1] + ip_range = "10.1.0.0/24" + name = local.name + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + name = local.name + node = 1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/grid3_ubuntu20.04-latest.flist" + entrypoint = "/init.sh" + cpu = 2 + memory = 1024 + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + connection { + type = "ssh" + user = "root" + agent = true + host = grid_deployment.d1.vms[0].ygg_ip + } + + provisioner "remote-exec" { + inline = [ + "echo 'Hello world!' 
> /root/readme.txt"
+    ]
+  }
+}
+```
+
+## Params docs
+
+### Requirements
+
+- the machine should have an SSH server running
+- the machine should have `scp` installed
+
+### Connection Block
+
+- defines how we will connect to the deployed machine
+
+``` terraform
+  connection {
+    type  = "ssh"
+    user  = "root"
+    agent = true
+    host  = grid_deployment.d1.vms[0].ygg_ip
+  }
+```
+
+- `type`: the service used to connect
+- `user`: the connecting user
+- `agent`: if set, the provisioner will use the default key to connect to the remote machine
+- `host`: the IP/host of the remote machine
+
+### Provisioner Block
+
+- defines the actual provisioner behaviour
+
+``` terraform
+  provisioner "remote-exec" {
+    inline = [
+      "echo 'Hello world!' > /root/readme.txt"
+    ]
+  }
+```
+
+- `remote-exec`: the provisioner type we are using; it can be remote, local or another type
+- `inline`: This is a list of command strings. They are executed in the order they are provided. This cannot be provided with script or scripts.
+- `script`: This is a path (relative or absolute) to a local script that will be copied to the remote resource and then executed. This cannot be provided with inline or scripts.
+- `scripts`: This is a list of paths (relative or absolute) to local scripts that will be copied to the remote resource and then executed. They are executed in the order they are provided. This cannot be provided with inline or script.
+
+### More Info
+
+A complete list of provisioner parameters can be found [here](https://www.terraform.io/language/resources/provisioners/remote-exec).
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_updates.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_updates.md
new file mode 100644
index 0000000..e3c2b66
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_updates.md
@@ -0,0 +1,55 @@
+

# Updating

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [Updating with Terraform](#updating-with-terraform)
+- [Adjustments](#adjustments)
+
+***
+
+## Introduction
+
+We present ways to update using Terraform. Note that this is not fully supported.
+
+Some of the updates are working, but the code is not finished, so use at your own risk.
+
+## Updating with Terraform
+
+Updates are triggered by changing the deployment's fields.
+So for example, if you have the following network resource:
+
+```terraform
+resource "grid_network" "net" {
+    nodes = [2]
+    ip_range = "10.1.0.0/16"
+    name = "network"
+    description = "newer network"
+}
+```
+
+Then you decide to add a node:
+
+```terraform
+resource "grid_network" "net" {
+    nodes = [2, 4]
+    ip_range = "10.1.0.0/16"
+    name = "network"
+    description = "newer network"
+}
+```
+
+After calling `terraform apply`, the provider does the following:
+
+- Add node 4 to the network.
+- Update the version of the workload.
+- Update the version of the deployment.
+- Update the hash in the contract (the contract ID will stay the same)
+
+## Adjustments
+
+There are workloads that don't support in-place updates (e.g. Zmachines). To change them, there are a couple of options (all perform destroy/create, so data can be lost):
+
+1. `terraform taint grid_deployment.d1` (the next apply will destroy ALL workloads within grid_deployment.d1 and create a new deployment)
+2. `terraform destroy --target grid_deployment.d1 && terraform apply --target grid_deployment.d1` (same as above)
+3. Remove the VM, then execute a `terraform apply`, then add the VM with the new config (this performs two updates but keeps neighboring workloads inside the same deployment intact).
(CAUTION: this can only be done if the VM is the last one in the list of VMs; otherwise, undesired behavior will occur)
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_wireguard_ssh.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_wireguard_ssh.md
new file mode 100644
index 0000000..174bc3a
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_wireguard_ssh.md
@@ -0,0 +1,280 @@
+

# SSH Into a 3Node with Wireguard

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
+- [Create the Terraform Files](#create-the-terraform-files)
+- [Deploy the Micro VM with Terraform](#deploy-the-micro-vm-with-terraform)
+- [Set the Wireguard Connection](#set-the-wireguard-connection)
+- [SSH into the 3Node with Wireguard](#ssh-into-the-3node-with-wireguard)
+- [Destroy the Terraform Deployment](#destroy-the-terraform-deployment)
+- [Conclusion](#conclusion)
+
+***
+
+## Introduction
+
+In this ThreeFold Guide, we show how simple it is to deploy a micro VM on the ThreeFold Grid with Terraform and to make an SSH connection with Wireguard.
+
+
+
+## Prerequisites
+
+* [Install Terraform](../terraform_install.md)
+* [Install Wireguard](https://www.wireguard.com/install/)
+
+You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the linked documentation depending on your operating system (Linux, macOS and Windows).
+
+
+
+## Find a 3Node with the ThreeFold Explorer
+
+We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid.
+
+We show here how to find a suitable 3Node using the ThreeFold Explorer.
+
+* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) to find a 3Node
+* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
+* For proper understanding, we give further information on some relevant columns:
+  * `ID` refers to the node ID
+  * `Free Public IPs` refers to available IPv4 public IP addresses
+  * `HRU` refers to HDD storage
+  * `SRU` refers to SSD storage
+  * `MRU` refers to RAM (memory)
+  * `CRU` refers to virtual cores (vcores)
+* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
+  * At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
+  * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. Here's what would work for our current situation.
+    * `Free SRU (GB)`: 15
+    * `Free MRU (GB)`: 1
+    * `Total CRU (Cores)`: 1
+
+Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
+
+
+
+## Create the Terraform Files
+
+For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
+
+To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file uses the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file.
+
+Of course, you can adjust the deployments based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.
+
+On your local computer, create a new folder named `terraform` and a subfolder called `deployment-wg-ssh`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
+
+Modify the variable file to take into account your own seed phrase and SSH keys.
You should also specify the node ID of the 3Node you will be deploying on.
+
+Now let's create the Terraform files.
+
+* Open the terminal and go to the home directory
+  * ```
+    cd ~
+    ```
+
+* Create the folder `terraform` and the subfolder `deployment-wg-ssh`:
+  * ```
+    mkdir -p terraform/deployment-wg-ssh
+    ```
+  * ```
+    cd terraform/deployment-wg-ssh
+    ```
+* Create the `main.tf` file:
+  * ```
+    nano main.tf
+    ```
+
+* Copy the `main.tf` content and save the file.
+
+```
+terraform {
+  required_providers {
+    grid = {
+      source = "threefoldtech/grid"
+    }
+  }
+}
+
+variable "mnemonics" {
+  type = string
+}
+
+variable "SSH_KEY" {
+  type = string
+}
+
+variable "tfnodeid1" {
+  type = string
+}
+
+variable "size" {
+  type = string
+}
+
+variable "cpu" {
+  type = string
+}
+
+variable "memory" {
+  type = string
+}
+
+provider "grid" {
+  mnemonics = var.mnemonics
+  network = "main"
+}
+
+locals {
+  name = "tfvm"
+}
+
+resource "grid_network" "net1" {
+  name        = local.name
+  nodes       = [var.tfnodeid1]
+  ip_range    = "10.1.0.0/16"
+  description = "newer network"
+  add_wg_access = true
+}
+
+resource "grid_deployment" "d1" {
+  disks {
+    name = "disk1"
+    size = var.size
+  }
+  name         = local.name
+  node         = var.tfnodeid1
+  network_name = grid_network.net1.name
+  vms {
+    name  = "vm1"
+    flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
+    cpu   = var.cpu
+    mounts {
+      disk_name = "disk1"
+      mount_point = "/disk1"
+    }
+    memory     = var.memory
+    entrypoint = "/sbin/zinit init"
+    env_vars = {
+      SSH_KEY = var.SSH_KEY
+    }
+  }
+}
+
+output "wg_config" {
+  value = grid_network.net1.access_wg_config
+}
+output "node1_zmachine1_ip" {
+  value = grid_deployment.d1.vms[0].ip
+}
+
+```
+
+* Create the `credentials.auto.tfvars` file:
+  * ```
+    nano credentials.auto.tfvars
+    ```
+
+* Copy the `credentials.auto.tfvars` content, set the node ID as well as your mnemonics and SSH public key, then save the file.
+  * ```
+    mnemonics = "..."
+    SSH_KEY = "..."
+
+    tfnodeid1 = "..."
+
+    size = "15"
+    cpu = "1"
+    memory = "512"
+    ```
+
+Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node server you wish to deploy on. Simply replace the three dots with the proper content.
+
+
+
+## Deploy the Micro VM with Terraform
+
+We now deploy the micro VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-ssh` containing the main and variables files.
+
+* Initialize Terraform:
+  * ```
+    terraform init
+    ```
+
+* Apply Terraform to deploy the micro VM:
+  * ```
+    terraform apply
+    ```
+  * Terraform will then present you with the actions it will perform. Write `yes` to confirm the deployment.
+
+
+Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
+  * ```
+    terraform show
+    ```
+
+
+
+## Set the Wireguard Connection
+
+To set the Wireguard connection, on your local computer, you will need to take the Terraform `wg_config` output and create a `wg.conf` file at `/usr/local/etc/wireguard/wg.conf`. Note that the Terraform output starts and ends with EOT.
+
+For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
+
+* Create a file named `wg.conf` at `/usr/local/etc/wireguard/wg.conf`.
+  * ```
+    nano /usr/local/etc/wireguard/wg.conf
+    ```
+  * Paste the content between the two `EOT` markers displayed after you run `terraform apply`.
+
+* Start WireGuard:
+  * ```
+    wg-quick up wg
+    ```
+
+If you want to stop the Wireguard service, write the following on your terminal:
+
+* ```
+  wg-quick down wg
+  ```
+
+> Note: If it doesn't work and you already made a Wireguard connection with the same file from Terraform (from a previous deployment), write on the terminal `wg-quick down wg`, then `wg-quick up wg`.
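Before bringing the interface up, it can also help to verify that the pasted file contains the two sections a complete WireGuard config needs. The following is only a sketch, demonstrated on a temporary placeholder file with dummy values; in practice, point `conf` at `/usr/local/etc/wireguard/wg.conf`.

```bash
# Hypothetical sanity check: a complete WireGuard config has an [Interface]
# section and at least one [Peer] section. The file below is a placeholder
# sample standing in for the real wg.conf; all values are dummies.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[Interface]
Address = 100.64.0.2/32
PrivateKey = PLACEHOLDER_KEY

[Peer]
PublicKey = PLACEHOLDER_KEY
AllowedIPs = 10.1.0.0/16
EOF
if grep -q '^\[Interface\]' "$conf" && grep -q '^\[Peer\]' "$conf"; then
  echo "wg.conf looks complete"
else
  echo "wg.conf is missing sections"
fi
```

If a section is missing, redo the copy-paste from the Terraform output before running `wg-quick up wg`.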
+
+As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VM to make sure the Wireguard connection is correct. Make sure to replace `vm_wg_ip` with the proper IP address:
+* ```
+  ping vm_wg_ip
+  ```
+  * Note that, with this Terraform deployment, the Wireguard IP address of the micro VM is named `node1_zmachine1_ip`
+
+
+## SSH into the 3Node with Wireguard
+
+To SSH into the 3Node with Wireguard, simply write the following in the terminal with the proper Wireguard IP address:
+
+```
+ssh root@vm_wg_ip
+```
+
+You now have SSH access to the VM over the Wireguard connection.
+
+
+
+## Destroy the Terraform Deployment
+
+If you want to destroy the Terraform deployment, write the following in the terminal:
+
+* ```
+  terraform destroy
+  ```
+  * Then write `yes` to confirm.
+
+Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-ssh`.
+
+
+
+## Conclusion
+
+In this simple ThreeFold Guide, you learned how to SSH into a 3Node with Wireguard and Terraform. Feel free to explore Terraform and Wireguard further.
+
+As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/terraform/advanced/terraform_wireguard_vpn.md b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_wireguard_vpn.md
new file mode 100644
index 0000000..d8d27ea
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/advanced/terraform_wireguard_vpn.md
@@ -0,0 +1,345 @@
+

# Deploy Micro VMs and Set a Wireguard VPN

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer)
+- [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform)
+- [Deploy the Micro VMs with Terraform](#deploy-the-micro-vms-with-terraform)
+- [Set the Wireguard Connection](#set-the-wireguard-connection)
+- [SSH into the 3Node](#ssh-into-the-3node)
+- [Destroy the Terraform Deployment](#destroy-the-terraform-deployment)
+- [Conclusion](#conclusion)
+
+***
+
+## Introduction
+
+In this ThreeFold Guide, we will learn how to deploy two micro virtual machines (Ubuntu 22.04) with Terraform. The Terraform deployment will be composed of a virtual private network (VPN) using Wireguard. The two VMs will thus be connected in a private and secure network.
+
+Note that this concept can be extended to more than two micro VMs. Once you understand this guide, you will be able to adjust and deploy your own personalized Wireguard VPN on the ThreeFold Grid.
+
+
+## Prerequisites
+
+* [Install Terraform](../terraform_install.md)
+* [Install Wireguard](https://www.wireguard.com/install/)
+
+You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the linked documentation depending on your operating system (Linux, macOS and Windows).
+
+
+
+## Find a 3Node with the ThreeFold Explorer
+
+We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address.
+
+We show here how to find a suitable 3Node using the ThreeFold Explorer.
+
+* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net)
+* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID`
+* For proper understanding, we give further information on some relevant columns:
+  * `ID` refers to the node ID
+  * `Free Public IPs` refers to available IPv4 public IP addresses
+  * `HRU` refers to HDD storage
+  * `SRU` refers to SSD storage
+  * `MRU` refers to RAM (memory)
+  * `CRU` refers to virtual cores (vcores)
+* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
+  * At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
+  * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
+    * `Free SRU (GB)`: 15
+    * `Free MRU (GB)`: 1
+    * `Total CRU (Cores)`: 1
+    * `Free Public IP`: 2
+    * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses.
+
+Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files.
+
+
+
+## Create a Two Servers Wireguard VPN with Terraform
+
+For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
+
+To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file uses the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file.
+
+Of course, you can adjust the deployments based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is.
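Before creating the files, it can be worth confirming that the prerequisite tools are actually available on your local machine. A minimal sketch (`wg-quick` ships with the WireGuard tools used later in this guide):

```bash
# Quick sanity check: report whether each prerequisite tool is on PATH.
for tool in terraform wg-quick; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

If either tool is reported missing, revisit the Prerequisites section above before continuing.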
+
+On your local computer, create a new folder named `terraform` and a subfolder called `deployment-wg-vpn`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`.
+
+Modify the variable file to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3Nodes you will be deploying on.
+
+Now let's create the Terraform files.
+
+
+* Open the terminal and go to the home directory
+  * ```
+    cd ~
+    ```
+
+* Create the folder `terraform` and the subfolder `deployment-wg-vpn`:
+  * ```
+    mkdir -p terraform && cd $_
+    ```
+  * ```
+    mkdir deployment-wg-vpn && cd $_
+    ```
+* Create the `main.tf` file:
+  * ```
+    nano main.tf
+    ```
+
+* Copy the `main.tf` content and save the file.
+
+```
+terraform {
+  required_providers {
+    grid = {
+      source = "threefoldtech/grid"
+    }
+  }
+}
+
+variable "mnemonics" {
+  type = string
+}
+
+variable "SSH_KEY" {
+  type = string
+}
+
+variable "tfnodeid1" {
+  type = string
+}
+
+variable "tfnodeid2" {
+  type = string
+}
+
+variable "size" {
+  type = string
+}
+
+variable "cpu" {
+  type = string
+}
+
+variable "memory" {
+  type = string
+}
+
+provider "grid" {
+  mnemonics = var.mnemonics
+  network = "main"
+}
+
+locals {
+  name = "tfvm"
+}
+
+resource "grid_network" "net1" {
+  name = local.name
+  nodes = [var.tfnodeid1, var.tfnodeid2]
+  ip_range = "10.1.0.0/16"
+  description = "newer network"
+  add_wg_access = true
+}
+
+resource "grid_deployment" "d1" {
+  disks {
+    name = "disk1"
+    size = var.size
+  }
+  name = local.name
+  node = var.tfnodeid1
+  network_name = grid_network.net1.name
+  vms {
+    name = "vm1"
+    flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
+    cpu = var.cpu
+    mounts {
+      disk_name = "disk1"
+      mount_point = "/disk1"
+    }
+    memory = var.memory
+    entrypoint = "/sbin/zinit init"
+    env_vars = {
+      SSH_KEY = var.SSH_KEY
+    }
+    publicip = true
+    planetary = true
+  }
+}
+
+resource "grid_deployment" "d2" {
+  disks {
+    name = "disk2"
+    size = var.size
+  }
+  name = local.name
+  node = var.tfnodeid2
+  network_name = grid_network.net1.name
+
+  vms {
+    name = "vm2"
+    flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
+    cpu = var.cpu
+    mounts {
+      disk_name = "disk2"
+      mount_point = "/disk2"
+    }
+    memory = var.memory
+    entrypoint = "/sbin/zinit init"
+    env_vars = {
+      SSH_KEY = var.SSH_KEY
+    }
+    publicip = true
+    planetary = true
+  }
+}
+
+output "wg_config" {
+  value = grid_network.net1.access_wg_config
+}
+output "node1_zmachine1_ip" {
+  value = grid_deployment.d1.vms[0].ip
+}
+output "node1_zmachine2_ip" {
+  value = grid_deployment.d2.vms[0].ip
+}
+
+output "ygg_ip1" {
+  value = grid_deployment.d1.vms[0].ygg_ip
+}
+output "ygg_ip2" {
+  value = grid_deployment.d2.vms[0].ygg_ip
+}
+
+output "ipv4_vm1" {
+  value = grid_deployment.d1.vms[0].computedip
+}
+
+output "ipv4_vm2" {
+  value = grid_deployment.d2.vms[0].computedip
+}
+
+```
+
+In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. Adjust the commands in this guide accordingly.
+
+* Create the `credentials.auto.tfvars` file:
+  * ```
+    nano credentials.auto.tfvars
+    ```
+
+* Copy the `credentials.auto.tfvars` content and save the file.
+  * ```
+    mnemonics = "..."
+    SSH_KEY = "..."
+
+    tfnodeid1 = "..."
+    tfnodeid2 = "..."
+
+    size = "15"
+    cpu = "1"
+    memory = "512"
+    ```
+
+Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots with your own content.
+
+Set the parameters for your VMs as you wish. The two servers will have the same parameters. For this example, we use the minimum parameters.
+
+
+## Deploy the Micro VMs with Terraform
+
+We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-vpn` containing the main and variables files.
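Before initializing, you can sanity-check that the two files are in place. A minimal sketch (the `mkdir` and `touch` lines only simulate the layout created in the steps above; with a real deployment, run just the loop from inside `terraform/deployment-wg-vpn`):

```shell
# Recreate the guide's folder layout for illustration, then check that both
# Terraform files exist before running `terraform init`.
mkdir -p "$HOME/terraform/deployment-wg-vpn"
cd "$HOME/terraform/deployment-wg-vpn"
touch main.tf credentials.auto.tfvars   # stand-ins for the real files
for f in main.tf credentials.auto.tfvars; do
  if [ -f "$f" ]; then
    echo "$f found"
  else
    echo "$f missing"
  fi
done
```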
+
+* Initialize Terraform by writing the following in the terminal:
+  * ```
+    terraform init
+    ```
+* Apply the Terraform deployment:
+  * ```
+    terraform apply
+    ```
+  * Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment.
+
+Note that, at any moment, if you want to see the information on your Terraform deployments, write the following:
+  * ```
+    terraform show
+    ```
+
+
+
+## Set the Wireguard Connection
+
+To set the Wireguard connection, on your local computer, you will need to take the Terraform `wg_config` output and create a `wg.conf` file in the directory: `/usr/local/etc/wireguard/wg.conf`. Note that the Terraform output starts and ends with EOT.
+
+For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md).
+
+* Create a file named `wg.conf` in the directory: `/usr/local/etc/wireguard/wg.conf`.
+  * ```
+    nano /usr/local/etc/wireguard/wg.conf
+    ```
+  * Paste the content between the two `EOT` markers displayed after you run `terraform apply`.
+
+* Start Wireguard:
+  * ```
+    wg-quick up wg
+    ```
+
+If you want to stop the Wireguard service, write the following on your terminal:
+
+* ```
+  wg-quick down wg
+  ```
+
+> Note: If it doesn't work and you already set up a Wireguard connection with the same file from a previous Terraform deployment, write on the terminal `wg-quick down wg`, then `wg-quick up wg`.
+
+As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VMs to make sure the Wireguard connection is correct. Make sure to replace `wg_vm_ip` with the proper IP address for each VM:
+
+* ```
+  ping wg_vm_ip
+  ```
+
+
+
+## SSH into the 3Node
+
+You can now SSH into the 3Nodes with either Wireguard or IPv4.
+
+To SSH with Wireguard, write the following with the proper IP address for each 3Node:
+
+```
+ssh root@vm_wg_ip
+```
+
+To SSH with IPv4, write the following for each 3Node:
+
+```
+ssh root@vm_IPv4
+```
+
+You now have SSH access to the VMs over both Wireguard and IPv4.
+
+
+
+## Destroy the Terraform Deployment
+
+If you want to destroy the Terraform deployment, write the following in the terminal:
+
+* ```
+  terraform destroy
+  ```
+  * Then write `yes` to confirm.
+
+Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-vpn`.
+
+
+
+## Conclusion
+
+In this ThreeFold Guide, we learned how easy it is to deploy a VPN with Wireguard and Terraform. You can adjust the parameters as you like and explore different possibilities.
+
+As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
\ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/grid3_terraform_home.md b/collections/manual/documentation/system_administrators/terraform/grid3_terraform_home.md new file mode 100644 index 0000000..82e6155 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/grid3_terraform_home.md @@ -0,0 +1 @@ +!!!include:terraform_readme diff --git a/collections/manual/documentation/system_administrators/terraform/grid_terraform.md b/collections/manual/documentation/system_administrators/terraform/grid_terraform.md new file mode 100644 index 0000000..60f4dbe --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/grid_terraform.md @@ -0,0 +1,270 @@ +# Grid provider for terraform + + - A resource, and a data source (`internal/provider/`), + - Examples (`examples/`) + +## Requirements + +- [Terraform](https://www.terraform.io/downloads.html) >= 0.13.x +- [Go](https://golang.org/doc/install) >= 1.15 + +## Building The Provider + +Note: please clone all of the following repos in the same directory +- clone github.com/threefoldtech/zos (switch to master-3 branch) +- Clone github.com/threefoldtech/tf_terraform_provider (deployment_resource branch) +- Enter the repository directory + +```bash +go get +mkdir -p ~/.terraform.d/plugins/threefoldtech.com/providers/grid/0.1/linux_amd64 +go build -o terraform-provider-grid +mv terraform-provider-grid ~/.terraform.d/plugins/threefoldtech.com/providers/grid/0.1/linux_amd64 +``` + + +## example deployment + + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" {} + + +resource "grid_deployment" "d1" { + node = 2 + disks { + name = "mydisk1" + size = 2 + description = "this is my disk description1" + + } + disks { + name = "mydisk2" + size=2 + description = "this is my disk2" + } + vms { + name = "vm1" + flist = 
"https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 2048 + entrypoint = "/sbin/zinit init" + mounts { + disk_name = "mydisk1" + mount_point = "/opt" + } + mounts { + disk_name = "mydisk2" + mount_point = "/test" + } + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP" + } + + } +} + +``` + +## Using the provider + +to create your twin please check [grid substrate getting started](grid_substrate_getting_started) + +```bash +./msgbusd --twin #run message bus with your twin id +cd examples/resources +export MNEMONICS="" +terraform init && terraform apply +``` +## Destroying deployment +```bash +terraform destroy +``` + +## More examples + +a two machine deployment with the first using a public ip + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [2] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + node = 2 + network_name = grid_network.net1.name + ip_range = grid_network.net1.nodes_ip_range["2"] + + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + publicip = true + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} + +output "node1_vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +output "node1_vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} + +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} +``` + +multinode deployments +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [4, 2] + ip_range = "172.20.0.0/16" + name = "net1" + description = "new network" +} + +resource "grid_deployment" "d1" { + node = 4 + network_name = 
grid_network.net1.name + ip_range = grid_network.net1.deployment_info[0].ip_range + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + + } + +} + +resource "grid_deployment" "d2" { + node = 2 + network_name = grid_network.net1.name + ip_range = grid_network.net1.nodes_ip_range["2"] + vms { + name = "vm3" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} + +output "node1_vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} + + +output "node2_vm1_ip" { + value = grid_deployment.d2.vms[0].ip +} + + +``` + +zds + 
+```
+terraform {
+  required_providers {
+    grid = {
+      source = "threefoldtech/grid"
+    }
+  }
+}
+
+provider "grid" {
+}
+
+resource "grid_deployment" "d1" {
+  node = 2
+
+  zdbs{
+    name = "zdb1"
+    size = 1
+    description = "zdb1 description"
+    password = "zdbpasswd1"
+    mode = "user"
+  }
+  zdbs{
+    name = "zdb2"
+    size = 2
+    description = "zdb2 description"
+    password = "zdbpasswd2"
+    mode = "seq"
+  }
+}
+
+output "deployment_id" {
+  value = grid_deployment.d1.id
+}
+```
\ No newline at end of file
diff --git a/collections/manual/documentation/system_administrators/terraform/img/terraform_install.png b/collections/manual/documentation/system_administrators/terraform/img/terraform_install.png
new file mode 100644
index 0000000..a6026ae
Binary files /dev/null and b/collections/manual/documentation/system_administrators/terraform/img/terraform_install.png differ
diff --git a/collections/manual/documentation/system_administrators/terraform/img/terraform_works.png b/collections/manual/documentation/system_administrators/terraform/img/terraform_works.png
new file mode 100644
index 0000000..8cd0757
Binary files /dev/null and b/collections/manual/documentation/system_administrators/terraform/img/terraform_works.png differ
diff --git a/collections/manual/documentation/system_administrators/terraform/img/weblets_contracts.png b/collections/manual/documentation/system_administrators/terraform/img/weblets_contracts.png
new file mode 100644
index 0000000..cc2cdbf
Binary files /dev/null and b/collections/manual/documentation/system_administrators/terraform/img/weblets_contracts.png differ
diff --git a/collections/manual/documentation/system_administrators/terraform/resources/img/graphql_publicconf.png b/collections/manual/documentation/system_administrators/terraform/resources/img/graphql_publicconf.png
new file mode 100644
index 0000000..80fe4f5
Binary files /dev/null and b/collections/manual/documentation/system_administrators/terraform/resources/img/graphql_publicconf.png differ
diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_caprover.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_caprover.md
new file mode 100644
index 0000000..a0302a6
--- /dev/null
+++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_caprover.md
@@ -0,0 +1,506 @@
+# Terraform Caprover
+
+## Table of Contents
+
+- [What is CapRover?](#what-is-caprover)
+- [Features of CapRover](#features-of-caprover)
+- [Prerequisites](#prerequisites)
+- [How to Run CapRover on ThreeFold Grid 3](#how-to-run-caprover-on-threefold-grid-3)
+  - [Clone the Project Repo](#clone-the-project-repo)
+  - [A) leader node deployment/setup:](#a-leader-node-deploymentsetup)
+    - [Step 1: Deploy a Leader Node](#step-1-deploy-a-leader-node)
+    - [Step 2: Connect Root Domain](#step-2-connect-root-domain)
+      - [Note](#note)
+    - [Step 3: CapRover Root Domain Configurations](#step-3-caprover-root-domain-configurations)
+    - [Step 4: Access the Captain Dashboard](#step-4-access-the-captain-dashboard)
+      - [To allow cluster mode](#to-allow-cluster-mode)
+  - [B) Worker Node Deployment/setup:](#b-worker-node-deploymentsetup)
+- [Implementations Details:](#implementations-details)
+
+***
+
+## What is CapRover?
+
+[CapRover](https://caprover.com/) is an easy-to-use app/database deployment and web server manager that works for a variety of applications such as Node.js, Ruby, PHP, Postgres, and MongoDB. It runs fast and is very robust, as it uses Docker, Nginx, LetsEncrypt, and NetData under the hood behind its user-friendly interface.
+Here’s a link to CapRover's open source repository on [GitHub](https://github.com/caprover/caprover).
+
+## Features of CapRover
+
+- CLI for automation and scripting
+- Web GUI for ease of access and convenience
+- No lock-in: remove CapRover and your apps keep working!
+- Docker Swarm under the hood for containerization and clustering.
+- Nginx (fully customizable template) under the hood for load-balancing.
+- Let’s Encrypt under the hood for free SSL (HTTPS).
+- **One-Click Apps**: Deploying one-click apps is a matter of seconds! MongoDB, Parse, MySQL, WordPress, Postgres and many more.
+- **Fully Customizable**: Optionally fully customizable nginx config allowing you to enable HTTP2, specific caching logic, custom SSL certs, etc.
+- **Cluster Ready**: Attach more nodes and create a cluster in seconds! CapRover automatically configures nginx to load balance.
+- **Increase Productivity**: Focus on your apps, not the bells and whistles needed to run them.
+- **Easy Deploy**: Many ways to deploy. You can upload your source from the dashboard, use the command line `caprover deploy`, use webhooks and build upon `git push`.
+
+## Prerequisites
+
+- Domain Name:
+  After installation, you will need to point a wildcard DNS entry to your CapRover IP Address.
+  Note that you can use CapRover without a domain too. But you won't be able to set up HTTPS or add a `Self hosted Docker Registry`.
+- Terraform installed to provision, adjust and tear down infrastructure using the tf configuration files provided here.
+- Yggdrasil installed and enabled for end-to-end encrypted IPv6 networking.
+- An account created on [Polkadot](https://polkadot.js.org/apps/?rpc=wss://tfchain.dev.threefold.io/ws#/accounts) with a twin ID, and your mnemonics saved.
+- TFTs in your account balance (in development, transfer some test TFTs from the ALICE account).
+
+## How to Run CapRover on ThreeFold Grid 3
+
+In this guide, we will use CapRover to set up your own private platform as a service (PaaS) on TFGrid 3 infrastructure.
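The prerequisites above can be partially checked from the command line; a hedged sketch that only reports whether a few of the required tools are on your `PATH` (the tool list is an assumption, and it does not verify versions or your Polkadot account):

```shell
# Report whether each prerequisite CLI tool is available on PATH.
for tool in terraform git ssh; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: missing"
  fi
done
```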
+
+### Clone the Project Repo
+
+```sh
+git clone https://github.com/freeflowuniverse/freeflow_caprover.git
+```
+
+### A) leader node deployment/setup:
+
+#### Step 1: Deploy a Leader Node
+
+Create a leader CapRover node using Terraform. Here's an example:
+
+```
+terraform {
+  required_providers {
+    grid = {
+      source = "threefoldtech/grid"
+    }
+  }
+}
+
+provider "grid" {
+  mnemonics = ""
+  network = "dev" # or test to use testnet
+}
+
+resource "grid_network" "net0" {
+  nodes = [4]
+  ip_range = "10.1.0.0/16"
+  name = "network"
+  description = "newer network"
+  add_wg_access = true
+}
+
+resource "grid_deployment" "d0" {
+  node = 4
+  network_name = grid_network.net0.name
+  ip_range = lookup(grid_network.net0.nodes_ip_range, 4, "")
+  disks {
+    name = "data0"
+    # will hold images, volumes etc. modify the size according to your needs
+    size = 20
+    description = "volume holding docker data"
+  }
+  disks {
+    name = "data1"
+    # will hold data related to caprover conf, nginx stuff, lets encrypt stuff.
+    size = 5
+    description = "volume holding captain data"
+  }
+
+  vms {
+    name = "caprover"
+    flist = "https://hub.grid.tf/samehabouelsaad.3bot/abouelsaad-caprover-tf_10.0.1_v1.0.flist"
+    # modify the cores according to your needs
+    cpu = 4
+    publicip = true
+    # modify the memory according to your needs
+    memory = 8192
+    entrypoint = "/sbin/zinit init"
+    mounts {
+      disk_name = "data0"
+      mount_point = "/var/lib/docker"
+    }
+    mounts {
+      disk_name = "data1"
+      mount_point = "/captain"
+    }
+    env_vars = {
+      "PUBLIC_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576"
+      # SWM_NODE_MODE env var is required, should be "leader" or "worker"
+      # leader: will run sshd, containerd, dockerd as zinit services plus the caprover service in leader mode, which starts the caprover, lets encrypt and nginx containers.
+      # worker: will run sshd, containerd, dockerd as zinit services plus the caprover service in worker mode, which only joins the swarm cluster. Check the worker terraform file example.
+      "SWM_NODE_MODE" = "leader"
+      # CAPROVER_ROOT_DOMAIN is an optional env var; by providing it you can access the captain dashboard after VM initialization by visiting http://captain.your-root-domain
+      # otherwise you will have to add the root domain manually from the captain dashboard by visiting http://{publicip}:3000 to access the dashboard
+      "CAPROVER_ROOT_DOMAIN" = "roverapps.grid.tf"
+    }
+  }
+}
+
+output "wg_config" {
+  value = grid_network.net0.access_wg_config
+}
+output "ygg_ip" {
+  value = grid_deployment.d0.vms[0].ygg_ip
+}
+output "vm_ip" {
+  value = grid_deployment.d0.vms[0].ip
+}
+output "vm_public_ip" {
+  value = grid_deployment.d0.vms[0].computedip
+}
+```
+
+```bash
+cd freeflow_caprover/terraform/leader/
+vim main.tf
+```
+
+- In the `provider` block, add your `mnemonics` and specify the grid network to deploy on.
+- In the `resource` block, update the disk sizes, memory size, and number of cores to fit your needs, or leave them as they are for testing.
+- In the `PUBLIC_KEY` env var value, put your SSH public key.
+- In the `CAPROVER_ROOT_DOMAIN` env var value, put your root domain. This is optional and you can add it later from the dashboard, but providing it here will save you the extra step and allow you to access your dashboard using your domain name directly after the deployment.
+
+- Save the file, and execute the following commands:
+
+  ```bash
+  terraform init
+  terraform apply
+  ```
+
+- Wait till you see `apply complete`, and note the VM public IP in the final output.
+ +- verify the status of the VM + + ```bash + ssh root@{public_ip_address} + zinit list + zinit log caprover + ``` + + You will see output like this: + + ```bash + root@caprover:~ # zinit list + sshd: Running + containerd: Running + dockerd: Running + sshd-init: Success + caprover: Running + root@caprover:~ # zinit log caprover + [+] caprover: CapRover Root Domain: newapps.grid.tf + [+] caprover: { + [+] caprover: "namespace": "captain", + [+] caprover: "customDomain": "newapps.grid.tf" + [+] caprover: } + [+] caprover: CapRover will be available at http://captain.newapps.grid. tf after installation + [-] caprover: docker: Cannot connect to the Docker daemon at unix:///var/ run/docker.sock. Is the docker daemon running?. + [-] caprover: See 'docker run --help'. + [-] caprover: Unable to find image 'caprover/caprover:latest' locally + [-] caprover: latest: Pulling from caprover/caprover + [-] caprover: af4c2580c6c3: Pulling fs layer + [-] caprover: 4ea40d27a2cf: Pulling fs layer + [-] caprover: 523d612e9cd2: Pulling fs layer + [-] caprover: 8fee6a1847b0: Pulling fs layer + [-] caprover: 60cce3519052: Pulling fs layer + [-] caprover: 4bae1011637c: Pulling fs layer + [-] caprover: ecf48b6c1f43: Pulling fs layer + [-] caprover: 856f69196742: Pulling fs layer + [-] caprover: e86a512b6f8c: Pulling fs layer + [-] caprover: cecbd06d956f: Pulling fs layer + [-] caprover: cdd679ff24b0: Pulling fs layer + [-] caprover: d60abbe06609: Pulling fs layer + [-] caprover: 0ac0240c1a59: Pulling fs layer + [-] caprover: 52d300ad83da: Pulling fs layer + [-] caprover: 8fee6a1847b0: Waiting + [-] caprover: e86a512b6f8c: Waiting + [-] caprover: 60cce3519052: Waiting + [-] caprover: cecbd06d956f: Waiting + [-] caprover: cdd679ff24b0: Waiting + [-] caprover: 4bae1011637c: Waiting + [-] caprover: d60abbe06609: Waiting + [-] caprover: 0ac0240c1a59: Waiting + [-] caprover: 52d300ad83da: Waiting + [-] caprover: 856f69196742: Waiting + [-] caprover: ecf48b6c1f43: Waiting + [-] caprover: 
523d612e9cd2: Verifying Checksum + [-] caprover: 523d612e9cd2: Download complete + [-] caprover: 4ea40d27a2cf: Verifying Checksum + [-] caprover: 4ea40d27a2cf: Download complete + [-] caprover: af4c2580c6c3: Verifying Checksum + [-] caprover: af4c2580c6c3: Download complete + [-] caprover: 4bae1011637c: Verifying Checksum + [-] caprover: 4bae1011637c: Download complete + [-] caprover: 8fee6a1847b0: Verifying Checksum + [-] caprover: 8fee6a1847b0: Download complete + [-] caprover: 856f69196742: Verifying Checksum + [-] caprover: 856f69196742: Download complete + [-] caprover: ecf48b6c1f43: Verifying Checksum + [-] caprover: ecf48b6c1f43: Download complete + [-] caprover: e86a512b6f8c: Verifying Checksum + [-] caprover: e86a512b6f8c: Download complete + [-] caprover: cdd679ff24b0: Verifying Checksum + [-] caprover: cdd679ff24b0: Download complete + [-] caprover: d60abbe06609: Verifying Checksum + [-] caprover: d60abbe06609: Download complete + [-] caprover: cecbd06d956f: Download complete + [-] caprover: 0ac0240c1a59: Verifying Checksum + [-] caprover: 0ac0240c1a59: Download complete + [-] caprover: 60cce3519052: Verifying Checksum + [-] caprover: 60cce3519052: Download complete + [-] caprover: af4c2580c6c3: Pull complete + [-] caprover: 52d300ad83da: Download complete + [-] caprover: 4ea40d27a2cf: Pull complete + [-] caprover: 523d612e9cd2: Pull complete + [-] caprover: 8fee6a1847b0: Pull complete + [-] caprover: 60cce3519052: Pull complete + [-] caprover: 4bae1011637c: Pull complete + [-] caprover: ecf48b6c1f43: Pull complete + [-] caprover: 856f69196742: Pull complete + [-] caprover: e86a512b6f8c: Pull complete + [-] caprover: cecbd06d956f: Pull complete + [-] caprover: cdd679ff24b0: Pull complete + [-] caprover: d60abbe06609: Pull complete + [-] caprover: 0ac0240c1a59: Pull complete + [-] caprover: 52d300ad83da: Pull complete + [-] caprover: Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f + [-] caprover: Status: Downloaded newer 
image for caprover/caprover:latest + [+] caprover: Captain Starting ... + [+] caprover: Overriding skipVerifyingDomains from /captain/data/ config-override.json + [+] caprover: Installing Captain Service ... + [+] caprover: + [+] caprover: Installation of CapRover is starting... + [+] caprover: For troubleshooting, please see: https://caprover.com/docs/ troubleshooting.html + [+] caprover: + [+] caprover: + [+] caprover: + [+] caprover: + [+] caprover: + [+] caprover: >>> Checking System Compatibility <<< + [+] caprover: Docker Version passed. + [+] caprover: Ubuntu detected. + [+] caprover: X86 CPU detected. + [+] caprover: Total RAM 8339 MB + [+] caprover: Pulling: nginx:1 + [+] caprover: Pulling: caprover/caprover-placeholder-app:latest + [+] caprover: Pulling: caprover/certbot-sleeping:v1.6.0 + [+] caprover: October 12th 2021, 12:49:26.301 pm Fresh installation! + [+] caprover: October 12th 2021, 12:49:26.309 pm Starting swarm at 185.206.122.32:2377 + [+] caprover: Swarm started: z06ymksbcoren9cl7g2xzw9so + [+] caprover: *** CapRover is initializing *** + [+] caprover: Please wait at least 60 seconds before trying to access CapRover. + [+] caprover: =================================== + [+] caprover: **** Installation is done! ***** + [+] caprover: CapRover is available at http://captain.newapps.grid.tf + [+] caprover: Default password is: captain42 + [+] caprover: =================================== + ``` + + Wait until you see \***\* Installation is done! \*\*\*** in the caprover service log. + +#### Step 2: Connect Root Domain + +After the container runs, you will now need to connect your CapRover instance to a Root Domain. + +Let’s say you own example.com. You can set \*.something.example.com as an A-record in your DNS settings to point to the IP address of the server where you installed CapRover. To do this, go to the DNS settings in your domain provider website, and set a wild card A record entry. 
+
+For example: Type: A, Name (or host): \*.something.example.com, IP (or Points to): `110.122.131.141` where this is the IP address of your CapRover machine.
+
+```yaml
+TYPE: A record
+HOST: \*.something.example.com
+POINTS TO: (IP Address of your server)
+TTL: (doesn’t really matter)
+```
+
+To confirm, go to https://mxtoolbox.com/DNSLookup.aspx and enter `somethingrandom.something.example.com` and check if the IP address resolves to the IP you set in your DNS.
+
+##### Note
+
+`somethingrandom` is needed because you set a wildcard entry in your DNS by setting `*.something.example.com` as your host, not `something.example.com`.
+
+#### Step 3: CapRover Root Domain Configurations
+
+Skip this step if you provided your root domain in the Terraform configuration file.
+
+Once CapRover is initialized, you can visit `http://[IP_OF_YOUR_SERVER]:3000` in your browser and login to CapRover using the default password `captain42`. You can change your password later.
+
+In the UI, enter your root domain and press the Update Domain button.
+
+#### Step 4: Access the Captain Dashboard
+
+Once you set your root domain as caprover.example.com, you will be redirected to captain.caprover.example.com.
+
+Now CapRover is ready and running in a single node.
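The DNS check from Step 2 can also be done on the command line instead of mxtoolbox; a sketch using `getent` (Linux), where the domain and IP are the placeholder values from this section and will not actually resolve:

```shell
# Resolve a random subdomain of the wildcard entry and compare it to the
# expected server IP. Replace both values with your own.
DOMAIN="somethingrandom.something.example.com"
EXPECTED_IP="110.122.131.141"
RESOLVED=$(getent hosts "$DOMAIN" | awk '{print $1}')
if [ "$RESOLVED" = "$EXPECTED_IP" ]; then
  echo "DNS OK: $DOMAIN resolves to $RESOLVED"
else
  echo "DNS not ready: $DOMAIN resolved to '${RESOLVED:-nothing}'"
fi
```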
+ +##### To allow cluster mode + +- Enable HTTPS + + - Go to CapRover `Dashboard` tab, then in `CapRover Root Domain Configurations` press on `Enable HTTPS` then you will asked to enter your email address + +- Docker Registry Configuration + + - Go to CapRover `Cluster` tab, then in `Docker Registry Configuration` section, press on `Self hosted Docker Registry` or add your `Remote Docker Registry` + +- Run the following command in the ssh session: + + ```bash + docker swarm join-token worker + ``` + + It will output something like this: + + ```bash + docker swarm join --token SWMTKN-1-0892ds1ney7pa0hymi3qwph7why1d9r3z6bvwtin51r14hcz3t-cjsephnu4f2ez fpdd6svnnbq7 185.206.122.33:2377 + ``` + +- To add a worker node to this swarm, you need: + + - Generated token `SWMTKN-1-0892ds1ney7pa0hymi3qwph7why1d9r3z6bvwtin51r14hcz3t-cjsephnu4f2ezfpdd6svnnbq7` + - Leader node public ip `185.206.122.33` + +This information is required in the next section to run CapRover in cluster mode. + +### B) Worker Node Deployment/setup: + +We show how to deploy a worker node by providing an example worker Terraform file. + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { + mnemonics = "" + network = "dev" # or test to use testnet +} + +resource "grid_network" "net2" { + nodes = [4] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} + +resource "grid_deployment" "d2" { + node = 4 + network_name = grid_network.net2.name + ip_range = lookup(grid_network.net2.nodes_ip_range, 4, "") + disks { + name = "data2" + # will hold images, volumes etc. 
modify the size according to your needs + size = 20 + description = "volume holding docker data" + } + + vms { + name = "caprover" + flist = "https://hub.grid.tf/samehabouelsaad.3bot/abouelsaad-caprover-tf_10.0.1_v1.0.flist" + # modify the cores according to your needs + cpu = 2 + publicip = true + # modify the memory according to your needs + memory = 2048 + entrypoint = "/sbin/zinit init" + mounts { + disk_name = "data2" + mount_point = "/var/lib/docker" + } + env_vars = { + "PUBLIC_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576" + } + # SWM_NODE_MODE env var is required, should be "leader" or "worker" + # leader: see the leader terraform file example. + # worker: runs sshd, containerd, and dockerd as zinit services, plus the caprover service in worker mode, which only joins the swarm cluster. 
+ + "SWM_NODE_MODE" = "worker" + # from the leader node (the one running caprover) run `docker swarm join-token worker` + # you must add the generated token to SWMTKN env var and the leader public ip to LEADER_PUBLIC_IP env var + + "SWMTKN"="SWMTKN-1-522cdsyhknmavpdok4wi86r1nihsnipioc9hzfw9dnsvaj5bed-8clrf4f2002f9wziabyxzz32d" + "LEADER_PUBLIC_IP" = "185.206.122.38" + + } +} + +output "wg_config" { + value = grid_network.net2.access_wg_config +} +output "ygg_ip" { + value = grid_deployment.d2.vms[0].ygg_ip +} +output "vm_ip" { + value = grid_deployment.d2.vms[0].ip +} +output "vm_public_ip" { + value = grid_deployment.d2.vms[0].computedip +} +``` + +```bash +cd freeflow_caprover/terraform/worker/ +vim main.tf +``` + +- In `provider` Block, add your `mnemonics` and specify the grid network to deploy on. +- In `resource` Block, update the disks size, memory size, and cores number to fit your needs or leave as it is for testing. +- In the `PUBLIC_KEY` env var value put your ssh public key. +- In the `SWMTKN` env var value put the previously generated token. +- In the `LEADER_PUBLIC_IP` env var value put the leader node public ip. + +- Save the file, and execute the following commands: + + ```bash + terraform init + terraform apply + ``` + +- Wait till you see `apply complete`, and note the VM public ip in the final output. + +- Verify the status of the VM. + + ```bash + ssh root@{public_ip_address} + zinit list + zinit log caprover + ``` + + You will see output like this: + + ```bash + root@caprover:~# zinit list + caprover: Success + dockerd: Running + containerd: Running + sshd: Running + sshd-init: Success + root@caprover:~# zinit log caprover + [-] caprover: Cannot connect to the Docker daemon at unix:///var/run/ docker.sock. Is the docker daemon running? + [+] caprover: This node joined a swarm as a worker. + ``` + +This means that your worker node is now ready and have joined the cluster successfully. 
+ +You can also verify this from the CapRover dashboard in the `Cluster` tab. In the `Nodes` section, you should be able to see the new worker node added there. + +Now CapRover is ready in cluster mode (more than one server). + +To run One-Click Apps, please follow this [tutorial](https://caprover.com/docs/one-click-apps.html). + +## Implementation Details + +- We use Ubuntu 18.04 to minimize production issues, as CapRover is tested on Ubuntu 18.04 and Docker 19.03. +- In a standard installation, CapRover has to be installed on a machine with a public IP address. +- Services are managed by the `Zinit` service manager to bring these processes up and running in case of any failure: + + - sshd-init: service used to add the user's public key to the VM's SSH authorized keys (runs once). + - containerd: service to maintain the container runtime needed by Docker. + - caprover: service to run the CapRover container (runs once). + - dockerd: service to run the Docker daemon. + - sshd: service to maintain the SSH server daemon. + +- We adjust the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system: + ```bash + echo -500 >/proc/self/oom_score_adj + ``` diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_k8s.md new file mode 100644 index 0000000..2b3f992 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_k8s.md @@ -0,0 +1,210 @@ +

Kubernetes Cluster

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) +- [Grid Kubernetes Resource](#grid-kubernetes-resource) + - [Kubernetes Outputs](#kubernetes-outputs) +- [More Info](#more-info) +- [Demo Video](#demo-video) + +*** + +## Introduction + +While Kubernetes deployments can be quite difficult and can require lots of experience, we provide here a very simple way to provision Kubernetes cluster on the TFGrid. + +## Example + +An example for deploying a kubernetes cluster could be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/k8s/main.tf) + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_scheduler" "sched" { + requests { + name = "master_node" + cru = 2 + sru = 512 + mru = 2048 + distinct = true + public_ips_count = 1 + } + requests { + name = "worker1_node" + cru = 2 + sru = 512 + mru = 2048 + distinct = true + } + requests { + name = "worker2_node" + cru = 2 + sru = 512 + mru = 2048 + distinct = true + } + requests { + name = "worker3_node" + cru = 2 + sru = 512 + mru = 2048 + distinct = true + } +} + +locals { + solution_type = "Kubernetes" + name = "myk8s" +} +resource "grid_network" "net1" { + solution_type = local.solution_type + name = local.name + nodes = distinct(values(grid_scheduler.sched.nodes)) + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_kubernetes" "k8s1" { + solution_type = local.solution_type + name = local.name + network_name = grid_network.net1.name + token = "12345678910122" + ssh_key = "PUT YOUR SSH KEY HERE" + + master { + disk_size = 2 + node = grid_scheduler.sched.nodes["master_node"] + name = "mr" + cpu = 2 + publicip = true + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker1_node"] + name = "w0" + cpu = 2 + memory = 2048 + } + workers { + disk_size = 2 + node = 
grid_scheduler.sched.nodes["worker2_node"] + name = "w2" + cpu = 2 + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker3_node"] + name = "w3" + cpu = 2 + memory = 2048 + } +} + +output "computed_master_public_ip" { + value = grid_kubernetes.k8s1.master[0].computedip +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +``` + +Everything looks similar to our first example, the global terraform section, the provider section and the network section. + +## Grid Kubernetes Resource + +```terraform +resource "grid_kubernetes" "k8s1" { + solution_type = local.solution_type + name = local.name + network_name = grid_network.net1.name + token = "12345678910122" + ssh_key = "PUT YOUR SSH KEY HERE" + + master { + disk_size = 2 + node = grid_scheduler.sched.nodes["master_node"] + name = "mr" + cpu = 2 + publicip = true + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker1_node"] + name = "w0" + cpu = 2 + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker2_node"] + name = "w2" + cpu = 2 + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker3_node"] + name = "w3" + cpu = 2 + memory = 2048 + } +} +``` + +It requires + +- Network name that would contain the cluster +- A cluster token to work as a key for other nodes to join the cluster +- SSH key to access the cluster VMs. 
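The cluster token is just a shared secret that worker nodes present when joining the cluster; the `12345678910122` value above is only a placeholder. One way to generate a random token of the same shape — a sketch, not a required format:

```shell
# Generate a random 14-digit token to use as the k8s cluster token
TOKEN="$(tr -dc 0-9 </dev/urandom | head -c 14)"
echo "token = \"$TOKEN\""
```

You can then paste the printed value into the `token` attribute of `grid_kubernetes`.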
+ +Then, we describe the master and worker nodes in terms of: + +- name within the deployment +- disk size +- node to deploy it on +- cpu +- memory +- whether or not this node needs a public IP + +### Kubernetes Outputs + +```terraform +output "master_public_ip" { + value = grid_kubernetes.k8s1.master[0].computedip +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} + +``` + +We are mainly interested in the master node's public IP (`computedip`) and the WireGuard configuration. + +## More Info + +A complete list of k8s resource parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/kubernetes.md). + +## Demo Video + +Here is a video showing how to deploy k8s with Terraform. + +
+ +
\ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_k8s_demo.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_k8s_demo.md new file mode 100644 index 0000000..2e49eb1 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_k8s_demo.md @@ -0,0 +1,7 @@ +

Demo Video Showing Deploying k8s with Terraform

+ +
+ +
+ + diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_qsfs.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_qsfs.md new file mode 100644 index 0000000..3dcffc7 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_qsfs.md @@ -0,0 +1,20 @@ +

Quantum Safe Filesystem (QSFS)

+ +

Table of Contents

+ +- [QSFS on Micro VM](./terraform_qsfs_on_microvm.md) +- [QSFS on Full VM](./terraform_qsfs_on_full_vm.md) + +*** + +## Introduction + +Quantum Storage is a FUSE filesystem that uses mechanisms of forward error correction (Reed Solomon codes) to make sure data (files and metadata) are stored in multiple remote places in a way that we can afford losing x number of locations without losing the data. + +The aim is to support unlimited local storage with remote backends for offload and backup which cannot be broken, even by a quantum computer. + +## QSFS Workload Parameters and Documentation + +A complete list of QSFS workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-qsfs). + +The [quantum-storage](https://github.com/threefoldtech/quantum-storage) repo contains a more thorough description of QSFS operation. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_qsfs_on_full_vm.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_qsfs_on_full_vm.md new file mode 100644 index 0000000..8fe3628 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_qsfs_on_full_vm.md @@ -0,0 +1,211 @@ +

QSFS on Full VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Create the Terraform Files](#create-the-terraform-files) +- [Full Example](#full-example) +- [Mounting the QSFS Disk](#mounting-the-qsfs-disk) +- [Debugging](#debugging) + +*** + +## Introduction + +This short ThreeFold Guide will teach you how to deploy a Full VM with a QSFS disk on the TFGrid using Terraform. For this guide, we will be deploying an Ubuntu 22.04 based cloud-init image. + +The steps are very simple. You first need to create the Terraform files, and then deploy the full VM and the QSFS workloads. After the deployment is done, you will need to SSH into the full VM and manually mount the QSFS disk. + +The main goal of this guide is to show you all the necessary steps to deploy a Full VM with a QSFS disk on the TFGrid using Terraform. + + + +## Prerequisites + +- [Install Terraform](../terraform_install.md) + +You need to download and properly install Terraform. Simply follow the documentation depending on your operating system (Linux, macOS or Windows). + + + +## Create the Terraform Files + +Deploying a Full VM is a bit different from deploying a Micro VM; let's first take a look at these differences: +- Full VMs use `cloud-init` images and, unlike Micro VMs, need at least one disk attached to the VM to copy the image to; this disk serves as the root filesystem for the VM. +- The QSFS disk is based on `virtiofs`, and you can't use a QSFS disk as the first mount in a Full VM; the first mount needs to be a regular disk. +- Any extra disks/mounts will be available on the VM, but unlike mounts on Micro VMs, extra disks won't be mounted automatically; you will need to mount them manually after the deployment. + +Let's modify the qsfs-on-microVM [example](./terraform_qsfs_on_microvm.md) to deploy a QSFS on a Full VM this time: + +- Inside the `grid_deployment` resource we will need to add a disk for the VM root filesystem. 
+ + ```terraform + disks { + name = "rootfs" + size = 10 + description = "root fs" + } + ``` + +- We also need to add an extra mount inside the `grid_deployment` resource, in the `vms` block. It must be the first `mounts` block in the VM: + + ```terraform + mounts { + disk_name = "rootfs" + mount_point = "/" + } + ``` + +- We also need to specify the flist for our Full VM: inside the `grid_deployment` resource, in the `vms` block, change the `flist` field to use this image: + - https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist + + + +## Full Example +The full example would be like this: + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +locals { + metas = ["meta1", "meta2", "meta3", "meta4"] + datas = ["data1", "data2", "data3", "data4"] +} + +resource "grid_network" "net1" { + nodes = [11] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} + +resource "grid_deployment" "d1" { + node = 11 + dynamic "zdbs" { + for_each = local.metas + content { + name = zdbs.value + description = "description" + password = "password" + size = 10 + mode = "user" + } + } + dynamic "zdbs" { + for_each = local.datas + content { + name = zdbs.value + description = "description" + password = "password" + size = 10 + mode = "seq" + } + } +} + +resource "grid_deployment" "qsfs" { + node = 11 + network_name = grid_network.net1.name + disks { + name = "rootfs" + size = 10 + description = "rootfs" + } + qsfs { + name = "qsfs" + description = "description6" + cache = 10240 # 10 GB + minimal_shards = 2 + expected_shards = 4 + redundant_groups = 0 + redundant_nodes = 0 + max_zdb_data_dir_size = 512 # 512 MB + encryption_algorithm = "AES" + encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af" + compression_algorithm = "snappy" + metadata { + type = "zdb" + prefix = "hamada" + encryption_algorithm = "AES" + encryption_key = 
"4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af" + dynamic "backends" { + for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode != "seq"] + content { + address = format("[%s]:%d", backends.value.ips[1], backends.value.port) + namespace = backends.value.namespace + password = backends.value.password + } + } + } + groups { + dynamic "backends" { + for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode == "seq"] + content { + address = format("[%s]:%d", backends.value.ips[1], backends.value.port) + namespace = backends.value.namespace + password = backends.value.password + } + } + } + } + vms { + name = "vm" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + planetary = true + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576" + } + mounts { + disk_name = "rootfs" + mount_point = "/" + } + mounts { + disk_name = "qsfs" + mount_point = "/qsfs" + } + } +} +output "metrics" { + value = grid_deployment.qsfs.qsfs[0].metrics_endpoint +} +output "ygg_ip" { + value = grid_deployment.qsfs.vms[0].ygg_ip +} +``` + +**note**: the `grid_deployment.qsfs.name` should be the same as the qsfs disk name in `grid_deployment.vms.mounts.disk_name`. + + + +## Mounting the QSFS Disk +After applying this terraform file, you will need to manually mount the disk. 
+SSH into the VM, create a mount point, and mount the QSFS disk with `mount -t virtiofs`: + +```bash +mkdir /qsfs +mount -t virtiofs qsfs /qsfs +``` + + + +## Debugging + +During deployment, you might encounter the following error when using the mount command: + +`mount: /qsfs: wrong fs type, bad option, bad superblock on qsfs3, missing codepage or helper program, or other error.` + +- **Explanation**: Most likely you used a wrong QSFS deployment/disk name that does not match the one from the QSFS deployment. +- **Solution**: Double check your Terraform file, and make sure the name you are using as the QSFS deployment/disk name matches the one you are trying to mount on your VM. diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md new file mode 100644 index 0000000..9b3a609 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md @@ -0,0 +1,348 @@ +

QSFS on Micro VM with Terraform

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Find a 3Node](#find-a-3node) +- [Create the Terraform Files](#create-the-terraform-files) + - [Create the Files with the Provider](#create-the-files-with-the-provider) + - [Create the Files Manually](#create-the-files-manually) +- [Deploy the Micro VM with Terraform](#deploy-the-micro-vm-with-terraform) +- [SSH into the 3Node](#ssh-into-the-3node) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this ThreeFold Guide, we will learn how to deploy a Quantum Safe File System (QSFS) deployment with Terraform. The main template for this example can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/qsfs/main.tf). + + +## Prerequisites + +In this guide, we will be using Terraform to deploy a QSFS workload on a micro VM that runs on the TFGrid. Make sure to have the latest Terraform version. + +- [Install Terraform](../terraform_install.md) + + + + +## Find a 3Node + +We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address. + +We show here how to find a suitable 3Node using the ThreeFold Explorer. 
+ +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +* Find a 3Node with suitable resources for the deployment and take note of its node ID in the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. + * `Free SRU (GB)`: 15 + * `Free MRU (GB)`: 1 + * `Total CRU (Cores)`: 1 + * `Free Public IP`: 2 + * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +## Create the Terraform Files + +We present two different methods to create the Terraform files. In the first method, we will create the Terraform files using the [TFGrid Terraform Provider](https://github.com/threefoldtech/terraform-provider-grid). In the second method, we will create the Terraform files manually. Feel free to choose the method that suits you best. + +### Create the Files with the Provider + +Creating the Terraform files is very straightforward. We want to clone the repository `terraform-provider-grid` locally and run some simple commands to properly set and start the deployment. 
+ +* Clone the repository `terraform-provider-grid` + * ``` + git clone https://github.com/threefoldtech/terraform-provider-grid + ``` +* Go to the subdirectory containing the examples + * ``` + cd terraform-provider-grid/examples/resources/qsfs + ``` +* Set your own mnemonics (replace `mnemonics words` with your own mnemonics) + * ``` + export MNEMONICS="mnemonics words" + ``` +* Set the network (replace `network` by the desired network, e.g. `dev`, `qa`, `test` or `main`) + * ``` + export NETWORK="network" + ``` +* Initialize the Terraform deployment + * ``` + terraform init + ``` +* Apply the Terraform deployment + * ``` + terraform apply + ``` +* At any moment, you can destroy the deployment with the following line + * ``` + terraform destroy + ``` + +When using this method, you might need to change some parameters within the `main.tf` depending on your specific deployment. + +### Create the Files Manually + +For this method, we use two files to deploy with Terraform. The first file contains the environment variables (**credentials.auto.tfvars**) and the second file contains the parameters to deploy our workloads (**main.tf**). To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file, but only the file **credentials.auto.tfvars**. + +* Open the terminal and go to the home directory (optional) + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-qsfs-microvm`: + * ``` + mkdir -p terraform && cd $_ + ``` + * ``` + mkdir deployment-qsfs-microvm && cd $_ + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file. 
+ + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +# Variables + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "network" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +variable "minimal_shards" { + type = string +} + +variable "expected_shards" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = var.network +} + +locals { + metas = ["meta1", "meta2", "meta3", "meta4"] + datas = ["data1", "data2", "data3", "data4"] +} + +resource "grid_network" "net1" { + nodes = [var.tfnodeid1] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} + +resource "grid_deployment" "d1" { + node = var.tfnodeid1 + dynamic "zdbs" { + for_each = local.metas + content { + name = zdbs.value + description = "description" + password = "password" + size = var.size + mode = "user" + } + } + dynamic "zdbs" { + for_each = local.datas + content { + name = zdbs.value + description = "description" + password = "password" + size = var.size + mode = "seq" + } + } +} + +resource "grid_deployment" "qsfs" { + node = var.tfnodeid1 + network_name = grid_network.net1.name + qsfs { + name = "qsfs" + description = "description6" + cache = 10240 # 10 GB + minimal_shards = var.minimal_shards + expected_shards = var.expected_shards + redundant_groups = 0 + redundant_nodes = 0 + max_zdb_data_dir_size = 512 # 512 MB + encryption_algorithm = "AES" + encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af" + compression_algorithm = "snappy" + metadata { + type = "zdb" + prefix = "hamada" + encryption_algorithm = "AES" + encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af" + dynamic "backends" { + for_each = [for zdb in grid_deployment.d1.zdbs : zdb if 
zdb.mode != "seq"] + content { + address = format("[%s]:%d", backends.value.ips[1], backends.value.port) + namespace = backends.value.namespace + password = backends.value.password + } + } + } + groups { + dynamic "backends" { + for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode == "seq"] + content { + address = format("[%s]:%d", backends.value.ips[1], backends.value.port) + namespace = backends.value.namespace + password = backends.value.password + } + } + } + } + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = var.cpu + memory = var.memory + entrypoint = "/sbin/zinit init" + planetary = true + env_vars = { + SSH_KEY = var.SSH_KEY + } + mounts { + disk_name = "qsfs" + mount_point = "/qsfs" + } + } +} +output "metrics" { + value = grid_deployment.qsfs.qsfs[0].metrics_endpoint +} +output "ygg_ip" { + value = grid_deployment.qsfs.vms[0].ygg_ip +} +``` + +Note that we named the VM as **vm1**. + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file. + * ```terraform + # Network + network = "main" + + # Credentials + mnemonics = "..." + SSH_KEY = "..." + + # Node Parameters + tfnodeid1 = "..." + size = "15" + cpu = "1" + memory = "512" + + # QSFS Parameters + minimal_shards = "2" + expected_shards = "4" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node you want to deploy on. Simply replace the three dots by the content. If you want to deploy on the Test net, you can replace **main** by **test**. + +Set the parameters for your VMs as you wish. For this example, we use the minimum parameters. + +For the section QSFS Parameters, you can decide on how many VMs your data will be sharded. You can also decide the minimum of VMs to recover the whole of your data. 
For example, a 16 minimum, 20 expected configuration will disperse your data on 20 3Nodes, and the deployment will only need at any time 16 VMs to recover the whole of your data. This gives resilience and redundancy to your storage. A 2 minimum, 4 expected configuration is given here for the main template. + + + +## Deploy the Micro VM with Terraform + +We now deploy the QSFS deployment with Terraform. Make sure that you are in the correct folder `terraform/deployment-qsfs-microvm` containing the main and variables files. + +* Initialize Terraform by writing the following in the terminal: + * ``` + terraform init + ``` +* Apply the Terraform deployment: + * ``` + terraform apply + ``` + * Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment. + +Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: + * ``` + terraform show + ``` + + + +## SSH into the 3Node + +You can now SSH into the 3Node with Planetary Network. + +To SSH with Planetary Network, write the following: + +``` +ssh root@planetary_IP +``` + +Note that the IP address should be the value of the parameter **ygg_ip** from the Terraform Outputs. + +You now have an SSH connection access to the VM over Planetary Network. + + + +## Questions and Feedback + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_resources_readme.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_resources_readme.md new file mode 100644 index 0000000..6f11322 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_resources_readme.md @@ -0,0 +1,13 @@ +

Terraform Resources

+ +

Table of Contents

+ +- [Using Scheduler](./terraform_scheduler.md) +- [Virtual Machine](./terraform_vm.md) +- [Web Gateway](./terraform_vm_gateway.md) +- [Kubernetes Cluster](./terraform_k8s.md) +- [ZDB](./terraform_zdb.md) +- [Quantum Safe Filesystem](./terraform_qsfs.md) + - [QSFS on Micro VM](./terraform_qsfs_on_microvm.md) + - [QSFS on Full VM](./terraform_qsfs_on_full_vm.md) +- [CapRover](./terraform_caprover.md) diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_scheduler.md new file mode 100644 index 0000000..dc51676 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_scheduler.md @@ -0,0 +1,153 @@ +

Scheduler Resource

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [How the Scheduler Works](#how-the-scheduler-works) +- [Quick Example](#quick-example) + +*** + + +## Introduction + +Using the TFGrid scheduler enables users to automatically get the nodes that match their criteria. We present here some basic information on this resource. + + + +## How the Scheduler Works + +To better understand the scheduler, we summarize the main process: + +- First, if `farm_id` is specified, the scheduler checks whether this farm has the Farmerbot enabled + - If so, it tries to find a suitable node using the Farmerbot. +- If the Farmerbot is not enabled, it uses the Grid Proxy to find a suitable node. + + + +## Quick Example + +Let's take a look at the following example: + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + version = "1.8.1-dev" + } + } +} +provider "grid" { +} + +locals { + name = "testvm" +} + +resource "grid_scheduler" "sched" { + requests { + farm_id = 53 + name = "node1" + cru = 3 + sru = 1024 + mru = 2048 + node_exclude = [33] # exclude node 33 from your search + public_ips_count = 0 # this deployment needs 0 public ips + public_config = false # this node does not need to have public config + } +} + +resource "grid_network" "net1" { + name = local.name + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + description = "newer network" +} +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + 
planetary = true + } +} +output "vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "vm1_ygg_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} + +output "vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "vm2_ygg_ip" { + value = grid_deployment.d1.vms[1].ygg_ip +} + +``` + +From the example above, we take a closer look at the following section: + +``` +resource "grid_scheduler" "sched" { + requests { + name = "node1" + cru = 3 + sru = 1024 + mru = 2048 + node_exclude = [33] # exclude node 33 from your search + public_ips_count = 0 # this deployment needs 0 public ips + public_config = false # this node does not need to have public config + } +} +``` + +In this case, the user specifies the requirements that the deployment must match. + +Later on, the user can use the result of the scheduler, which contains the matched `nodes`, in the deployments: + +``` +resource "grid_network" "net1" { + name = local.name + nodes = [grid_scheduler.sched.nodes["node1"]] + ... +} + +``` + +and + +``` +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + ... + } + ... +} +``` + diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_vm.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_vm.md new file mode 100644 index 0000000..b349c5e --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_vm.md @@ -0,0 +1,282 @@ +

VM Deployment

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Template](#template) +- [Using scheduler](#using-scheduler) +- [Using Grid Explorer](#using-grid-explorer) +- [Describing the overlay network for the project](#describing-the-overlay-network-for-the-project) +- [Describing the deployment](#describing-the-deployment) +- [Which flists to use](#which-flists-to-use) +- [Remark multiple VMs](#remark-multiple-vms) +- [Reference](#reference) + +*** + +## Introduction + +The following provides the basic information to deploy a VM with Terraform on the TFGrid. + +## Template + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + version = "1.8.1-dev" + } + } +} + +provider "grid" { + mnemonics = "FROM THE CREATE TWIN STEP" + network = "dev" # or test to use testnet +} + +locals { + name = "testvm" +} + +resource "grid_scheduler" "sched" { + requests { + name = "node1" + cru = 3 + sru = 1024 + mru = 2048 + node_exclude = [33] # exclude node 33 from your search + public_ips_count = 0 # this deployment needs 0 public ips + public_config = false # this node does not need to have public config + } +} + +resource "grid_network" "net1" { + name = local.name + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + description = "newer network" +} +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } +} +output "vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "vm1_ygg_ip" { + value 
= grid_deployment.d1.vms[0].ygg_ip +} + +output "vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "vm2_ygg_ip" { + value = grid_deployment.d1.vms[1].ygg_ip +} + +``` + +## Using scheduler + +- If the user decides to use the [scheduler](terraform_scheduler.md) to find a suitable node, they can use the node returned from the scheduler, as in the example above + +## Using Grid Explorer + +- If not, the user can still specify the node directly, using the grid explorer to find a node that matches their requirements + +## Describing the overlay network for the project + +```terraform +resource "grid_network" "net1" { + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + name = "network" + description = "some network" + add_wg_access = true +} +``` + +We tell Terraform that we will have a network on one node (the node ID returned from the scheduler), using the IP range `10.1.0.0/16`, with WireGuard access added for this network + +## Describing the deployment + +```terraform +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + +} +``` + +It's a bit long, so let's try to dissect it a bit: + +```terraform + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "") +``` + +- `node = grid_scheduler.sched.nodes["node1"]` means this deployment will happen on the node returned from the scheduler. 
Otherwise, the user can specify the node directly, e.g. `node = 2`; in this case, the choice of the node is completely up to the user, who needs to do the capacity planning. Check the [Node Finder](../../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria. +- `network_name` defines which network to deploy our project on; here we choose the `name` of network `net1` +- `ip_range`: here we [lookup](https://www.terraform.io/docs/language/functions/lookup.html) the IP range of node `2`, with `""` as the default value + +> Advanced note: Direct map access fails during planning if the key doesn't exist, which happens in cases like adding a node to the network together with a new deployment on this node. The `lookup` provides a default empty value to pass the planning validation; the value is validated anyway inside the plugin. + +## Which flists to use + +See the [list of flists](../../../developers/flist/grid3_supported_flists.md). + +## Remark multiple VMs + +In Terraform, you can define repeated blocks like the following: + +``` +listname { + +} +listname { + +} +``` + +So to add a VM: + +```terraform + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + publicip = true + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeq1MFCQOv3OCLO1HxdQl8V0CxAwt5AzdsNOL91wmHiG9ocgnq2yipv7qz+uCS0AdyOSzB9umyLcOZl2apnuyzSOd+2k6Cj9ipkgVx4nx4q5W1xt4MWIwKPfbfBA9gDMVpaGYpT6ZEv2ykFPnjG0obXzIjAaOsRthawuEF8bPZku1yi83SDtpU7I0pLOl3oifuwPpXTAVkK6GabSfbCJQWBDSYXXM20eRcAhIMmt79zo78FNItHmWpfPxPTWlYW02f7vVxTN/LUeRFoaNXXY+cuPxmcmXp912kW0vhK9IvWXqGAEuSycUOwync/yj+8f7dRU7upFGqd6bXUh67iMl7 ahmed@ahmedheaven" + } + + } +``` + +- We give it a name within our deployment `vm1` +- `flist` is used to define the flist to run within the VM. 
Check the [list of flists](../../../developers/flist/grid3_supported_flists.md) +- `cpu` and `memory` are used to define the CPU and memory +- `publicip` is used to define whether it requires a public IP or not +- `entrypoint` is used to define the entrypoint, which in most cases is `/sbin/zinit init`, but for VM-based flists it can be specific to each flist +- `env_vars` are used to define the environment variables; in this example we define `SSH_KEY` to authorize SSH access to the machine + Here we say we will have this deployment on the node with `node ID 2`, using the overlay network defined before (`grid_network.net1.name`) and the IP range allocated to that specific node `2` + +The file describes only the desired state, which is `a deployment of two VMs and their specifications in terms of cpu and memory, and some environment variables, e.g. an SSH key to ssh into the machine` + +## Reference + +A complete list of VM workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vms). 
+ +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [8] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" + add_wg_access = true +} +resource "grid_deployment" "d1" { + node = 8 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 8, "") + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + publicip = true + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + 
} +} +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_zmachine2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} + +output "ygg_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} +``` diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_vm_gateway.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_vm_gateway.md new file mode 100644 index 0000000..d7feb5d --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_vm_gateway.md @@ -0,0 +1,172 @@ +

Terraform Web Gateway With VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Expose with Prefix](#expose-with-prefix) +- [Expose with Full Domain](#expose-with-full-domain) +- [Using Gateway Name on Private Networks (WireGuard)](#using-gateway-name-on-private-networks-wireguard) + +*** + +## Introduction + +In this section, we provide the basic information for a VM web gateway using Terraform on the TFGrid. + +## Expose with Prefix + +A complete list of gateway name workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/name_proxy.md). + +``` + terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +# this data source is used to break circular dependency in cases similar to the following: +# vm: needs to know the domain in its init script +# gateway_name: needs the ip of the vm to use as backend. +# - the fqdn can be computed from grid_gateway_domain for the vm +# - the backend can reference the vm ip directly +data "grid_gateway_domain" "domain" { + node = 7 + name = "ashraf" +} +resource "grid_network" "net1" { + nodes = [8] + ip_range = "10.1.0.0/24" + name = "network" + description = "newer network" + add_wg_access = true +} +resource "grid_deployment" "d1" { + node = 8 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 8, "") + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/strm-helloworld-http-latest.flist" + cpu = 2 + publicip = true + memory = 1024 + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP ashraf@thinkpad" + } + planetary = true + } +} 
+resource "grid_name_proxy" "p1" { + node = 7 + name = "ashraf" + backends = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])] + tls_passthrough = false +} +output "fqdn" { + value = data.grid_gateway_domain.domain.fqdn +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "public_ip" { + value = split("/",grid_deployment.d1.vms[0].computedip)[0] +} + +output "ygg_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} + +``` + +Please note that to use `grid_name_proxy` you should choose a node that has a public config with a domain set, like node 7 in the following example: +![ ](./img/graphql_publicconf.png) + +Here + +- we created a grid domain resource `ashraf`, deployed on gateway node `7`, to end up with the domain `ashraf.ghent01.devnet.grid.tf` +- we created a proxy for the gateway to send the traffic coming to `ashraf.ghent01.devnet.grid.tf` to the VM as a backend; we set `tls_passthrough = false` to let the gateway terminate the traffic. If you replace it with `true`, your backend service needs to be able to do the TLS termination itself + +## Expose with Full Domain + +A complete list of gateway fqdn workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/fqdn_proxy.md). + +It is similar to the example above; the only difference is that you need to create an `A record` on your name provider pointing `remote.omar.grid.tf` to the IPv4 address of gateway node `7`. + +``` + +resource "grid_fqdn_proxy" "p1" { + node = 7 + name = "workloadname" + fqdn = "remote.omar.grid.tf" + backends = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])] + tls_passthrough = true +} + +output "fqdn" { + value = grid_fqdn_proxy.p1.fqdn +} +``` + +## Using Gateway Name on Private Networks (WireGuard) + +It is possible to create a VM with a private IP (WireGuard) and use it as a backend for a gateway contract. 
This is done as follows: + +- Create a gateway domain data source. This data source will construct the full domain so we can use it afterwards + +``` +data "grid_gateway_domain" "domain" { + node = grid_scheduler.sched.nodes["node1"] + name = "examp123456" +} +``` + +- Create a network resource + +``` +resource "grid_network" "net1" { + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + name = "mynet" + description = "newer network" +} +``` + +- Create a VM to host your service + +``` +resource "grid_deployment" "d1" { + name = "vm1" + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + ... + } +} +``` + +- Create a grid_name_proxy resource using the network created above and the WireGuard IP of the VM that hosts the service. Also make sure to change the port to the one your service actually listens on + +``` +resource "grid_name_proxy" "p1" { + node = grid_scheduler.sched.nodes["node1"] + name = "examp123456" + backends = [format("http://%s:9000", grid_deployment.d1.vms[0].ip)] + network = grid_network.net1.name + tls_passthrough = false +} +``` + +- To see the full domain created by the data source above, you can output it via + +``` +output "fqdn" { + value = data.grid_gateway_domain.domain.fqdn +} +``` + +- Now visit the domain; you should be able to reach your service hosted on the VM diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_zdb.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_zdb.md new file mode 100644 index 0000000..8efa916 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_zdb.md @@ -0,0 +1,64 @@ +

Deploying a ZDB with terraform

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +We provide a basic template for ZDB deployment with Terraform on the TFGrid. + +A brief description of zdb fields can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-zdbs). + +A more thorough description of zdb operation can be found in its parent [repo](https://github.com/threefoldtech/0-db). + +## Example + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_deployment" "d1" { + node = 4 + + zdbs { + name = "zdb1" + size = 10 + description = "zdb1 description" + password = "zdbpasswd1" + mode = "user" + } + zdbs { + name = "zdb2" + size = 2 + description = "zdb2 description" + password = "zdbpasswd2" + mode = "seq" + } +} + +output "deployment_id" { + value = grid_deployment.d1.id +} + +output "zdb1_endpoint" { + value = format("[%s]:%d", grid_deployment.d1.zdbs[0].ips[0], grid_deployment.d1.zdbs[0].port) +} + +output "zdb1_namespace" { + value = grid_deployment.d1.zdbs[0].namespace +} +``` + + diff --git a/collections/manual/documentation/system_administrators/terraform/resources/terraform_zlogs.md b/collections/manual/documentation/system_administrators/terraform/resources/terraform_zlogs.md new file mode 100644 index 0000000..1867ce4 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/resources/terraform_zlogs.md @@ -0,0 +1,108 @@ +# Zlogs + +Zlogs is a utility that allows you to stream VM logs to a remote location. You can find the full description [here](https://github.com/threefoldtech/zos/tree/main/docs/manual/zlogs). + +## Using Zlogs + +In Terraform, a VM has a `zlogs` field, which should contain a list of target URLs to stream logs to. + +Valid protocols are: `ws`, `wss`, and `redis`. 
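As a minimal sketch of the field's shape (the hostnames below are hypothetical placeholders, not real endpoints), the `zlogs` field is simply a list of URL strings on a `vms` block:

```terraform
  vms {
    name = "vm1"
    # ... other vm fields ...
    zlogs = [
      "ws://logs.example.com:5000",  # plain websocket target
      "wss://logs.example.com:443",  # websocket over TLS
    ]
  }
```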
+ +For example, to deploy two VMs named "vm1" and "vm2", with one vm1 streaming logs to vm2, this is what main.tf looks like: +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [2, 4] + ip_range = "10.1.0.0/16" + name = "network" + description = "some network description" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + node = 2 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "") + vms { + name = "vm1" #streaming logs + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + entrypoint = "/sbin/zinit init" + cpu = 2 + memory = 1024 + env_vars = { + SSH_KEY = "PUT YOUR SSH KEY HERE" + } + zlogs = tolist([ + format("ws://%s:5000", replace(grid_deployment.d2.vms[0].computedip, "//.*/", "")), + ]) + } +} + +resource "grid_deployment" "d2" { + node = 4 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 4, "") + vms { + name = "vm2" #receiving logs + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "PUT YOUR SSH KEY HERE" + } + publicip = true + } +} +``` + +At this point, two VMs are deployed, and vm1 is ready to stream logs to vm2. +But what is missing here is that vm1 is not actually producing any logs, and vm2 is not listening for incoming messages. + +### Creating a server + +- First, we will create a server on vm2. This should be a websocket server listening on port 5000 as per our zlogs definition in main.tf ```ws://%s:5000```. 
- A simple Python websocket server looks like this: +``` +import asyncio +import gzip + +import websockets + + +async def echo(websocket): + async for message in websocket: + # zlogs gzip-compresses every message, so decompress before decoding + data = gzip.decompress(message).decode('utf-8') + # append the decoded log data to output.txt + with open("output.txt", "a") as f: + f.write(data) + +async def main(): + async with websockets.serve(echo, "0.0.0.0", 5000, ping_interval=None): + await asyncio.Future() + +asyncio.run(main()) +``` +- Note that incoming messages are decompressed since zlogs compresses any messages using gzip. +- After a message is decompressed, it is appended to `output.txt`. + +### Streaming logs + +- Zlogs streams anything written to stdout of the zinit process on a vm. +- So, simply running ```echo "to be streamed" 1>/proc/1/fd/1``` on vm1 should successfully stream this message to vm2, and we should be able to see it in `output.txt`. +- Also, if we want to stream a service's logs, a service definition file should be created in ```/etc/zinit/example.yaml``` on vm1 and should look like this: +``` +exec: sh -c "echo 'to be streamed'" +log: stdout +``` + diff --git a/collections/manual/documentation/system_administrators/terraform/sidebar.md b/collections/manual/documentation/system_administrators/terraform/sidebar.md new file mode 100644 index 0000000..19759ad --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/sidebar.md @@ -0,0 +1,30 @@ +- [**Home**](@threefold:threefold_home) +- [**Manual 3 Home**](@manual3_home_new) + +--- + +**Terraform** + +- [Read Me First](@terraform_readme) +- [Install](@terraform_install) +- [Basics](@terraform_basics) +- [Tutorial](@terraform_get_started) +- [Delete](@terraform_delete) + +--- + +**Resources** + +- [Using Scheduler](@terraform_scheduler) +- [Virtual Machine](@terraform_vm) +- [Web Gateway](@terraform_vm_gateway) +- [Kubernetes Cluster](@terraform_k8s) +- [ZDB](@terraform_zdb) +- [Quantum Filesystem](@terraform_qsfs) +- [CapRover](@terraform_caprover) + +--- + +**Advanced** + +- 
[Terraform Provider](@terraform_provider) +- [Terraform Provisioners](@terraform_provisioners) +- [Mounts](@terraform_mounts) +- [Capacity Planning](@terraform_capacity_planning) +- [Updates](@terraform_updates) diff --git a/collections/manual/documentation/system_administrators/terraform/terraform_basic_example.md b/collections/manual/documentation/system_administrators/terraform/terraform_basic_example.md new file mode 100644 index 0000000..60c1fca --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/terraform_basic_example.md @@ -0,0 +1,44 @@ +```terraform +resource "grid_network" "net" { + nodes = [2] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} + +resource "grid_deployment" "d1" { + node = 2 + network_name = grid_network.net.name + ip_range = lookup(grid_network.net.nodes_ip_range, 2, "") + disks { + name = "mydisk1" + size = 2 + description = "this is my disk description1" + + } + disks { + name = "mydisk2" + size = 2 + description = "this is my disk2" + } + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + mounts { + disk_name = "mydisk1" + mount_point = "/opt" + } + mounts { + disk_name = "mydisk2" + mount_point = "/test" + } + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP" + } + + } +} +``` diff --git a/collections/manual/documentation/system_administrators/terraform/terraform_basics.md b/collections/manual/documentation/system_administrators/terraform/terraform_basics.md new file mode 100644 index 0000000..0d200d6 --- /dev/null +++ 
b/collections/manual/documentation/system_administrators/terraform/terraform_basics.md @@ -0,0 +1,187 @@ +

Terraform Basics

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Requirements](#requirements) +- [Basic Commands](#basic-commands) +- [Find A Node](#find-a-node) +- [Preparation](#preparation) +- [Main File Details](#main-file-details) + - [Initializing the Provider](#initializing-the-provider) +- [Export Environment Variables](#export-environment-variables) + - [Output Section](#output-section) +- [Start a Deployment](#start-a-deployment) +- [Delete a Deployment](#delete-a-deployment) +- [Available Flists](#available-flists) +- [Full and Micro Virtual Machines](#full-and-micro-virtual-machines) +- [Tips on Managing Resources](#tips-on-managing-resources) +- [Conclusion](#conclusion) + +*** + +## Introduction + +We cover some important aspects of Terraform deployments on the ThreeFold Grid. + +For a complete guide on deploying a full VM on the TFGrid, read [this documentation](./terraform_full_vm.md). + +## Requirements + +Here are the requirements to use Terraform on the TFGrid: + +- [Set your TFGrid account](../getstarted/tfgrid3_getstarted.md) +- [Install Terraform](../terraform/terraform_install.md) + +## Basic Commands + +Here are some very useful commands to use with Terraform: + +- Initialize the repo `terraform init` +- Execute a terraform file `terraform apply` +- See the output `terraform output` + - This is useful when you want to output variables such as public ip, planetary network ip, wireguard configurations, etc. +- See the state `terraform show` +- Destroy `terraform destroy` + +## Find A Node + +There are two options when it comes to finding a node to deploy on. You can use the scheduler or search for a node with the Nodes Explorer. + +- Use the [scheduler](resources/terraform_scheduler.md) + - The scheduler will help you find a node that matches your criteria +- Use the Nodes Explorer + - You can check the [Node Finder](../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria. 
- Make sure you choose a node which has enough capacity and is available (up and running). + +## Preparation + +We cover the basic preparations before explaining the main file. + +- Make a directory for your project + - ``` + mkdir myfirstproject + ``` +- Change directory + - ``` + cd myfirstproject + ``` +- Create a main file and insert content + - ``` + nano main.tf + ``` + + +## Main File Details + +Here is a concrete example of a Terraform main file. + +### Initializing the Provider + + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + version = "1.8.1" + } + } +} + +``` +- You can provide a version to choose a specific version of the provider, like `1.8.1-dev` to use version `1.8.1` on devnet +- If `version = "1.8.1"` is omitted, the provider will fetch the latest version, but for environments other than main you have to specify the version explicitly +- For devnet, qanet and testnet, use versions with the `-dev`, `-qa` and `-rcx` suffixes respectively + +Providers can take different arguments, e.g. which identity to use when deploying, which Substrate network to create contracts on, etc. This can be done in the provider section, as shown below: + +```terraform +provider "grid" { + mnemonics = "FROM THE CREATE TWIN STEP" + network = "dev" # or test to use testnet + +} +``` + +## Export Environment Variables + +When writing the main file, you can decide to leave a variable content empty. In this case you can export the variable content as environment variables. + +* Export your mnemonics + * ``` + export MNEMONICS="..." + ``` +* Export the network + * ``` + export NETWORK="..." + ``` + +For more info, consult the [Provider Manual](./advanced/terraform_provider.md). 
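As a sketch of another common pattern (the variable names here are illustrative, not from the manual), you can declare Terraform input variables and let Terraform populate them from `TF_VAR_*` environment variables; Terraform maps `TF_VAR_<name>` to the input variable `<name>`:

```terraform
# Illustrative sketch: read secrets from TF_VAR_* environment variables
# instead of hard-coding them in main.tf.
variable "mnemonics" {
  type      = string
  sensitive = true # keep the seed phrase out of plan output
}

variable "network" {
  type    = string
  default = "dev"
}

provider "grid" {
  mnemonics = var.mnemonics
  network   = var.network
}
```

With this in place, running `export TF_VAR_mnemonics="..."` before `terraform apply` supplies the seed phrase without ever storing it in the file.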
+ +### Output Section + +The output section is useful to find information such as: + +- the overlay wireguard network configurations +- the private IPs of the VMs +- the public IP of the VM (exposed under `computedip`) + + +The output section will look something like this: + +```terraform +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} + +``` + +## Start a Deployment + +To start a deployment, run the following command: `terraform init && terraform apply`. + +## Delete a Deployment + +To delete a deployment, run the following command: + +``` +terraform destroy +``` + +## Available Flists + +You can consult the [list of Flists](../../developers/flist/flist.md) to learn more about the available flists to use with a virtual machine. + +## Full and Micro Virtual Machines + +There are some key distinctions to take into account when it comes to deploying full or micro VMs on the TFGrid: + +* Only the flist determines if we get a full or a micro VM +* Full VMs ignore the **rootfs** field and use the first mount as their root filesystem (rootfs) +* We can upgrade a full VM by tearing it down, leaving the disk in a detached state, and then reattaching the disk to a new VM + * For more information on this, read [this documentation](https://forum.threefold.io/t/full-vm-recovery-tool/4152). + +## Tips on Managing Resources + +As general advice, you can use multiple accounts on TFChain and group your resources per account. 
+ +This gives you the following benefits: + +- More control over TFT spending +- Easier to delete all your contracts +- Less chance to make mistakes +- Can use an account to share access with multiple people + +## Conclusion + +This was a quick introduction to Terraform, for a complete guide, please read [this documentation](./terraform_full_vm.md). For advanced tutorials and deployments, read [this section](./advanced/terraform_advanced_readme.md). To learn more about the different resources to deploy with Terraform on the TFGrid, read [this section](./resources/terraform_resources_readme.md). \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/terraform_full_vm.md b/collections/manual/documentation/system_administrators/terraform/terraform_full_vm.md new file mode 100644 index 0000000..42ceee2 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/terraform_full_vm.md @@ -0,0 +1,280 @@ +

Terraform Complete Full VM Deployment

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Main Process](#main-process) +- [Prerequisites](#prerequisites) +- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer) + - [Using the Grid Scheduler](#using-the-grid-scheduler) + - [Using the Grid Explorer](#using-the-grid-explorer) +- [Create the Terraform Files](#create-the-terraform-files) +- [Deploy the Full VM with Terraform](#deploy-the-full-vm-with-terraform) +- [SSH into the 3Node](#ssh-into-the-3node) +- [Delete the Deployment](#delete-the-deployment) +- [Conclusion](#conclusion) + +*** + +## Introduction + +This short ThreeFold Guide will teach you how to deploy a Full VM on the TFGrid using Terraform. For this guide, we will be deploying Ubuntu 22.04. + +The steps are very simple. You first need to create the Terraform files, the variables file and the deployment file, and then deploy the full VM. After the deployment is done, you can SSH into the full VM. + +The main goal of this guide is to show you all the necessary steps to deploy a Full VM on the TFGrid using Terraform. Once you get acquainted with this first basic deployment, you should be able to explore on your own the possibilities that the TFGrid and Terraform combined provide. + + + +## Main Process + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workload. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` file as is. + +On your local computer, create a new folder named `terraform` and a subfolder called `deployment-full-vm`. 
In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variable file to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on. + +Once this is done, initialize and apply Terraform to deploy your workload, then SSH into the Full VM. That's it! Now let's go through all these steps in further detail. + + + +## Prerequisites + +- [Install Terraform](./terraform_install.md) + +You need to properly download and install Terraform. Simply follow the documentation depending on your operating system (Linux, Mac and Windows). + + + +## Find a 3Node with the ThreeFold Explorer + +We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a full VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address. + +We present two options to find a suitable node: the scheduler and the TFGrid Explorer. + + + +### Using the Grid Scheduler + +Using the TFGrid scheduler can be very efficient depending on what you are trying to achieve. To learn more about the scheduler, please refer to this [Scheduler Guide](resources/terraform_scheduler.md). + + + +### Using the Grid Explorer + +We show here how to find a suitable 3Node using the ThreeFold Explorer.
+ +- Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +- Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID` +- For proper understanding, we give further information on some relevant columns: + - `ID` refers to the node ID + - `Free Public IPs` refers to available IPv4 public IP addresses + - `HRU` refers to HDD storage + - `SRU` refers to SSD storage + - `MRU` refers to RAM (memory) + - `CRU` refers to virtual cores (vcores) +- To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters: + - At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + - For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. + - `Free SRU (GB)`: 15 + - `Free MRU (GB)`: 1 + - `Total CRU (Cores)`: 1 + - `Free Public IP`: 2 + - Note: if you want a public IPv4 address, it is recommended to set the parameter `Free Public IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +## Create the Terraform Files + +Open the terminal. + +- Go to the home folder: + + - ``` + cd ~ + ``` + +- Create the folder `terraform` and the subfolder `deployment-full-vm`: + - ``` + mkdir -p terraform/deployment-full-vm + ``` + - ``` + cd terraform/deployment-full-vm + ``` +- Create the `main.tf` file: + + - ``` + nano main.tf + ``` + +- Copy the `main.tf` content and save the file.
+ +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +``` + +In this file, we name the VM as `vm1`. + +- Create the `credentials.auto.tfvars` file: + + - ``` + nano credentials.auto.tfvars + ``` + +- Copy the `credentials.auto.tfvars` content and save the file. + +``` +mnemonics = "..." +SSH_KEY = "..." + +tfnodeid1 = "..." + +size = "15" +cpu = "1" +memory = "512" +``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the server used. Simply replace the three dots by the content. + +We set here the minimum specs for a full VM, but you can adjust these parameters. 
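The variables above can be tuned to the size of deployment you need. As an illustration only (these figures are hypothetical; make sure the 3Node you selected has enough free resources), a larger full VM could use the following values in `credentials.auto.tfvars`:

```
size   = "50"
cpu    = "4"
memory = "4096"
```

As used in `main.tf`, `size` is the disk size in GB, `cpu` the number of vcores and `memory` the RAM in MB.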
+ + + +## Deploy the Full VM with Terraform + +We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-full-vm` containing the main and variables files. + +- Initialize Terraform: + + - ``` + terraform init + ``` + +- Apply Terraform to deploy the full VM: + - ``` + terraform apply + ``` + +After deployment, take note of the 3Node's IPv4 address. You will need this address to SSH into the 3Node. + + + +## SSH into the 3Node + +- To [SSH into the 3Node](../getstarted/ssh_guide/ssh_guide.md), write the following: + - ``` + ssh root@VM_IPv4_Address + ``` + + + +## Delete the Deployment + +To stop the Terraform deployment, you simply need to write the following line in the terminal: + +``` +terraform destroy +``` + +Make sure that you are in the Terraform directory you created for this deployment. + + + +## Conclusion + +You now have the basic knowledge and know-how to deploy on the TFGrid using Terraform. + +As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. diff --git a/collections/manual/documentation/system_administrators/terraform/terraform_get_started.md b/collections/manual/documentation/system_administrators/terraform/terraform_get_started.md new file mode 100644 index 0000000..a4693da --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/terraform_get_started.md @@ -0,0 +1,87 @@ +![ ](./advanced/img//terraform_.png) + +## Using Terraform + +- Make a directory for your project: `mkdir myfirstproject` +- `cd myfirstproject` +- Create `main.tf`, the Terraform main file + +## Create + +To start the deployment, run `terraform init && terraform apply` + +## Destroying + +Destroying the deployment can be done using `terraform destroy` + +And that's it!
You managed to deploy 2 VMs on the ThreeFold Grid v3. + +## How to use a Terraform File + +### Initializing the provider + +In Terraform's global section: + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + version = "1.8.1" + } + } +} + +``` + +- You can always provide a version to choose a specific version of the provider, like `1.8.1-dev` to use version `1.8.1` for devnet +- If `version = "1.8.1"` is omitted, the provider will fetch the latest version, but for environments other than main you have to specify the version explicitly +- For devnet, qanet and testnet, use versions with the suffixes `-dev`, `-qa` and `-rcx` respectively + +Providers can take different arguments, e.g. which identity to use when deploying or which substrate network to create contracts on. This can be done in the provider section: + +```terraform +provider "grid" { + mnemonics = "FROM THE CREATE TWIN STEP" + network = "dev" # or test to use testnet + +} +``` + +Please note that you can leave its content empty and export everything as environment variables: + +``` +export MNEMONICS="....." +export NETWORK="....."
+ +``` + +For more info, see the [Provider Manual](./advanced/terraform_provider.md) + +### Output section + +```terraform +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} + +``` + +Output parameters show what has been done: + +- the overlay wireguard network configurations +- the private IPs of the VMs +- the public IP of the VM, exposed under `computedip` + +### Which flists to use in VM + +See the [list of flists](../manual3_iac/grid3_supported_flists.md) diff --git a/collections/manual/documentation/system_administrators/terraform/terraform_gpu_support.md b/collections/manual/documentation/system_administrators/terraform/terraform_gpu_support.md new file mode 100644 index 0000000..345b5e1 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/terraform_gpu_support.md @@ -0,0 +1,55 @@ +

GPU Support and Terraform

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +The TFGrid now supports GPUs. We present here a quick example. This section will be expanded as new information comes in. + + + +## Example + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtechdev.com/providers/grid" + } + } +} +provider "grid" { +} +locals { + name = "testvm" +} +resource "grid_network" "net1" { + name = local.name + nodes = [93] + ip_range = "10.1.0.0/16" + description = "newer network" +} +resource "grid_deployment" "d1" { + name = local.name + node = 93 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + gpus = [ + "0000:0e:00.0/1002/744c" + ] + } +} +``` \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/terraform_install.md b/collections/manual/documentation/system_administrators/terraform/terraform_install.md new file mode 100644 index 0000000..f125b7d --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/terraform_install.md @@ -0,0 +1,53 @@ +

Installing Terraform

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Install Terraform](#install-terraform) + - [Install Terraform on Linux](#install-terraform-on-linux) + - [Install Terraform on Mac](#install-terraform-on-mac) + - [Install Terraform on Windows](#install-terraform-on-windows) +- [ThreeFold Terraform Plugin](#threefold-terraform-plugin) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +There are many ways to install Terraform depending on your operating system. Terraform is available for Linux, Mac and Windows. + +## Install Terraform + +You can get Terraform from the Terraform website [download page](https://www.terraform.io/downloads.html). You can also install it using your system package manager. The Terraform [installation manual](https://learn.hashicorp.com/tutorials/terraform/install-cli) contains the essential information for a proper installation. + +We cover here the basic steps for Linux, Mac and Windows for convenience. Refer to the official Terraform documentation if needed. + +### Install Terraform on Linux + +To install Terraform on Linux, we follow the official [Terraform documentation](https://developer.hashicorp.com/terraform/downloads). + +* [Install Terraform on Linux](../computer_it_basics/cli_scripts_basics.md#install-terraform) + +### Install Terraform on Mac + +To install Terraform on Mac, install Brew and then install Terraform. + +* [Install Brew](../computer_it_basics/cli_scripts_basics.md#install-brew) + +* [Install Terraform with Brew](../computer_it_basics/cli_scripts_basics.md#install-terraform-with-brew) + +### Install Terraform on Windows + +To install Terraform on Windows, a quick way is to first install Chocolatey and then install Terraform.
+ +* [Install Chocolatey](../computer_it_basics/cli_scripts_basics.md#install-chocolatey) +* [Install Terraform with Chocolatey](../computer_it_basics/cli_scripts_basics.md#install-terraform-with-chocolatey) + +## ThreeFold Terraform Plugin + +The ThreeFold [Terraform plugin](https://github.com/threefoldtech/terraform-provider-grid) is supported on Linux, Mac and Windows. + +There's no need to specifically install the ThreeFold Terraform plugin. Terraform will automatically fetch it from an online registry according to the instructions in the deployment file. + +## Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/) or by reaching out to the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/manual/documentation/system_administrators/terraform/terraform_readme.md b/collections/manual/documentation/system_administrators/terraform/terraform_readme.md new file mode 100644 index 0000000..17b9463 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/terraform_readme.md @@ -0,0 +1,45 @@ + +

Terraform

+ +Welcome to the *Terraform* section of the ThreeFold Manual! + +In this section, we'll embark on a journey to explore the powerful capabilities of Terraform within the ThreeFold Grid ecosystem. Terraform, a cutting-edge infrastructure as code (IaC) tool, empowers you to define and provision your infrastructure efficiently and consistently. + +

Table of Contents

+ +- [What is Terraform?](#what-is-terraform) +- [Terraform on ThreeFold Grid: Unleashing Power and Simplicity](#terraform-on-threefold-grid-unleashing-power-and-simplicity) +- [Get Started](#get-started) +- [Features](#features) +- [What is Not Supported](#what-is-not-supported) + +*** + +## What is Terraform? + +Terraform is an open-source tool that enables you to describe and deploy infrastructure using a declarative configuration language. With Terraform, you can define your infrastructure components, such as virtual machines, networks, and storage, in a human-readable configuration file. This file, often referred to as the Terraform script, becomes a blueprint for your entire infrastructure. + +The beauty of Terraform lies in its ability to automate the provisioning and management of infrastructure across various cloud providers, ensuring that your deployments are reproducible and scalable. It promotes collaboration, version control, and the ability to treat your infrastructure as code, providing a unified and seamless approach to managing complex environments. + +## Terraform on ThreeFold Grid: Unleashing Power and Simplicity + +Within the ThreeFold Grid ecosystem, Terraform plays a pivotal role in streamlining the deployment and orchestration of decentralized, peer-to-peer infrastructure. Leveraging the unique capabilities of the ThreeFold Grid, you can use Terraform to define and deploy your workloads, tapping into the TFGrid decentralized architecture for unparalleled scalability, reliability, and sustainability. + +This manual will guide you through the process of setting up, configuring, and managing your infrastructure on the ThreeFold Grid using Terraform. Whether you're a seasoned developer, a DevOps professional, or someone exploring the world of decentralized computing for the first time, this guide is designed to provide clear and concise instructions to help you get started. 
+ +## Get Started + +![ ](../terraform/img//terraform_works.png) + +ThreeFold loves Open Source! In v3.0 we are integrating one of the most popular 'Infrastructure as Code' (IaC) tools of the cloud industry, [Terraform](https://terraform.io). Using the ThreeFold Grid v3 with Terraform gives a consistent workflow and a familiar experience for everyone coming from different backgrounds. Terraform describes the desired state of the deployment instead of imperatively describing the low-level details and the mechanics of how things should be glued together. + +## Features + +- All basic primitives of the ThreeFold Grid can be deployed, which covers a lot. +- Terraform can destroy a deployment +- Terraform shows all the outputs + +## What is Not Supported + +- We don't support updates/upgrades: if you want to change the properties of running instances or move to another node, you need to destroy the deployment and re-create it. However, adding a VM to an existing deployment doesn't affect the other running VMs, and decommissioning a VM from a deployment likewise leaves the others untouched. diff --git a/collections/manual/documentation/system_administrators/terraform/terraform_toc.md b/collections/manual/documentation/system_administrators/terraform/terraform_toc.md new file mode 100644 index 0000000..15a7c78 --- /dev/null +++ b/collections/manual/documentation/system_administrators/terraform/terraform_toc.md @@ -0,0 +1,34 @@ +

Terraform

+ +

Table of Contents

+ +- [Overview](./terraform_readme.md) +- [Installing Terraform](./terraform_install.md) +- [Terraform Basics](./terraform_basics.md) +- [Full VM Deployment](./terraform_full_vm.md) +- [GPU Support](./terraform_gpu_support.md) +- [Resources](./resources/terraform_resources_readme.md) + - [Using Scheduler](./resources/terraform_scheduler.md) + - [Virtual Machine](./resources/terraform_vm.md) + - [Web Gateway](./resources/terraform_vm_gateway.md) + - [Kubernetes Cluster](./resources/terraform_k8s.md) + - [ZDB](./resources/terraform_zdb.md) + - [Quantum Safe Filesystem](./resources/terraform_qsfs.md) + - [QSFS on Micro VM](./resources/terraform_qsfs_on_microvm.md) + - [QSFS on Full VM](./resources/terraform_qsfs_on_full_vm.md) + - [CapRover](./resources/terraform_caprover.md) +- [Advanced](./advanced/terraform_advanced_readme.md) + - [Terraform Provider](./advanced/terraform_provider.md) + - [Terraform Provisioners](./advanced/terraform_provisioners.md) + - [Mounts](./advanced/terraform_mounts.md) + - [Capacity Planning](./advanced/terraform_capacity_planning.md) + - [Updates](./advanced/terraform_updates.md) + - [SSH Connection with Wireguard](./advanced/terraform_wireguard_ssh.md) + - [Set a Wireguard VPN](./advanced/terraform_wireguard_vpn.md) + - [Synced MariaDB Databases](./advanced/terraform_mariadb_synced_databases.md) + - [Nomad](./advanced/terraform_nomad.md) + - [Nextcloud Deployments](./advanced/terraform_nextcloud_toc.md) + - [Nextcloud All-in-One Deployment](./advanced/terraform_nextcloud_aio.md) + - [Nextcloud Single Deployment](./advanced/terraform_nextcloud_single.md) + - [Nextcloud Redundant Deployment](./advanced/terraform_nextcloud_redundant.md) + - [Nextcloud 2-Node VPN Deployment](./advanced/terraform_nextcloud_vpn.md) \ No newline at end of file