simplified developers

This commit is contained in:
mik-tf 2024-04-04 00:57:10 +00:00
parent 5cbee48d14
commit cfaec4f635
195 changed files with 0 additions and 12842 deletions

View File

@ -1,11 +0,0 @@
<h1> Flist </h1>
<h2> Table of Contents </h2>
- [Zero-OS Hub](./flist_hub/zos_hub.md)
- [Generate an API Token](./flist_hub/api_token.md)
- [Convert Docker Image Into Flist](./flist_hub/convert_docker_image.md)
- [Supported Flists](./grid3_supported_flists.md)
- [Flist Case Studies](./flist_case_studies/flist_case_studies.md)
- [Case Study: Debian 12](./flist_case_studies/flist_debian_case_study.md)
- [Case Study: Nextcloud AIO](./flist_case_studies/flist_nextcloud_case_study.md)

View File

@ -1,6 +0,0 @@
<h1> Flist Case Studies </h1>
<h2> Table of Contents </h2>
- [Case Study: Debian 12](./flist_debian_case_study.md)
- [Case Study: Nextcloud AIO](./flist_nextcloud_case_study.md)

View File

@ -1,300 +0,0 @@
<h1> Flist Case Study: Debian 12 </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [You Said Flist?](#you-said-flist)
- [Case Study Objective](#case-study-objective)
- [The Overall Process](#the-overall-process)
- [Docker Image Creation](#docker-image-creation)
- [Dockerfile](#dockerfile)
- [Docker Image Script](#docker-image-script)
- [zinit Folder](#zinit-folder)
- [README.md File](#readmemd-file)
- [Putting it All Together](#putting-it-all-together)
- [Docker Publishing Steps](#docker-publishing-steps)
- [Create Account and Access Token](#create-account-and-access-token)
- [Build and Push the Docker Image](#build-and-push-the-docker-image)
- [Convert the Docker Image to an Flist](#convert-the-docker-image-to-an-flist)
- [Deploy the Flist on the TF Playground](#deploy-the-flist-on-the-tf-playground)
- [Conclusion](#conclusion)
***
## Introduction
For this tutorial, we present a case study demonstrating how easy it is to create a new flist in the ThreeFold ecosystem. We will create a Debian flist, deploy a micro VM on the ThreeFold Playground and access our Debian deployment.
To do all this, we will need to create a Docker Hub account, write a Dockerfile, build a Docker image and a Docker container, then convert the Docker image to a Zero-OS flist. After all this, we will deploy our Debian workload on the ThreeFold Playground. You'll see, it's pretty straightforward and fun to do.
### You Said Flist?
First, let's recall what an flist actually is and does. In short, an flist is a very effective way to deal with software data, and the end result is fast deployment and high reliability.
In an flist, the metadata is separated from the data. The metadata describes which files are part of a particular image: it is the data that provides information about the app/software. Thanks to flists, the 3Node doesn't need to install a complete software program in order to run a workload properly; only the necessary files are fetched. Zero-OS reads the metadata of a container and only downloads and executes the binaries and applications needed to run the workload, when it is necessary.
Sounds great? It really is great, and very effective!
One amazing thing about the flist technology is that it is possible to convert any Docker image into an flist, thanks to the [ThreeFold Docker Hub Converter tool](https://hub.grid.tf/docker-convert). If this sounds complicated, fear not. It is very easy and we will show you how to proceed in this case study.
### Case Study Objective
The goal of this case study is to give you enough information and tools so that you can build your own flist projects and deploy them on the ThreeFold Grid.
This case study is not meant to show you all the detailed steps of creating an flist from scratch. We will instead start with some file templates available in the ThreeFold repository [tf-images](https://github.com/threefoldtech/tf-images). This is one of the many advantages of working with open-source projects: we can easily draw inspiration from the code already available in the many ThreeFold repositories and work our way up from there.
### The Overall Process
To give you a bird's view of the whole project, here are the main steps:
* Create the Docker image
* Push the Docker image to the Docker Hub
* Convert the Docker image to a Zero-OS flist
* Deploy a micro VM with the flist on the ThreeFold Playground
## Docker Image Creation
As we've said previously, we will not explore all the details of creating an flist from scratch; that will be covered in a subsequent guide. For now, we want to take existing code and work our way from there. This is not only quicker, but it is also a good way to get to know the ThreeFold ecosystem and repositories.
We will be using the code available on the [ThreeFold Tech's Github page](https://github.com/threefoldtech). In our case, we want to explore the repository [tf-images](https://github.com/threefoldtech/tf-images).
If you go to the subsection [tfgrid3](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3), you can see many different flists available. In our case, we want to deploy the Debian Linux distribution. It thus makes sense to look for similar Linux distributions to take inspiration from.
For this case study, we draw inspiration from the [Ubuntu 22.04](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/ubuntu22.04) directory.
If we look at the Ubuntu 22.04 directory tree, this is what we get:
```
.
├── Dockerfile
├── README.md
├── start.sh
└── zinit
├── ssh-init.yaml
└── sshd.yaml
```
We will now explore each of those files to get a good look at the whole repository and try to understand how it all works together.
### Dockerfile
We recall that to make a Docker image, you need to create a Dockerfile. As per [Docker's documentation](https://docs.docker.com/engine/reference/builder/), a Dockerfile is "a text document that contains all the commands a user could call on the command line to assemble an image".
The Ubuntu 22.04 Dockerfile is as follows:
File: `Dockerfile`
```Dockerfile
FROM ubuntu:22.04
RUN apt update && \
apt -y install wget openssh-server
RUN wget -O /sbin/zinit https://github.com/threefoldtech/zinit/releases/download/v0.2.5/zinit && \
chmod +x /sbin/zinit
COPY zinit /etc/zinit
COPY start.sh /start.sh
RUN chmod +x /sbin/zinit && chmod +x /start.sh
ENTRYPOINT ["zinit", "init"]
```
We can see from the first line that the Dockerfile will look for the docker image `ubuntu:22.04`. In our case, we want to get the Debian 12 docker image. This information is available on the Docker Hub (see [Debian Docker Hub](https://hub.docker.com/_/debian)).
We will thus need to change the line `FROM ubuntu:22.04` to the line `FROM debian:12`. It isn't more complicated than that!
We now have the following Dockerfile for the Debian docker image:
File: `Dockerfile`
```Dockerfile
FROM debian:12
RUN apt update && \
apt -y install wget openssh-server
RUN wget -O /sbin/zinit https://github.com/threefoldtech/zinit/releases/download/v0.2.5/zinit && \
chmod +x /sbin/zinit
COPY zinit /etc/zinit
COPY start.sh /start.sh
RUN chmod +x /sbin/zinit && chmod +x /start.sh
ENTRYPOINT ["zinit", "init"]
```
There is nothing more needed here. Pretty fun to start from some existing open-source code, right?
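Before publishing, you can optionally sanity-check the image on your local machine. This is a minimal sketch: the `debian12-test` tag and the SSH key value are placeholders, and it assumes the Docker daemon is running.
```bash
# Build the image from the folder containing the Dockerfile, start.sh and zinit/
docker build -t debian12-test .

# Run a container, passing a placeholder public key the same way the TFGrid would
docker run -d --name debian12-test -e SSH_KEY="ssh-ed25519 AAAA... user@host" debian12-test

# zinit should list the ssh-init and sshd services
docker exec debian12-test zinit list
```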
### Docker Image Script
The other important file we will be looking at is the `start.sh` file. This is the basic script used to properly set up the docker image. Thankfully, there is nothing to change in this file; we can leave it as is. As we will see later, this file will be executed by zinit when the container starts.
File: `start.sh`
```bash
#!/bin/bash
mkdir -p /var/run/sshd
mkdir -p /root/.ssh
touch /root/.ssh/authorized_keys
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
echo "$SSH_KEY" >> /root/.ssh/authorized_keys
```
### zinit Folder
Next, we want to take a look at the zinit folder.
But first, what is zinit? In a nutshell, zinit is a process manager (pid 1) that knows how to launch, monitor and sort dependencies. It thus executes targets in the proper order. For more information on zinit, check the [zinit repository](https://github.com/threefoldtech/zinit).
When the container starts, zinit parses the files in the zinit folder and runs the services they define.
If we take a look at the file `ssh-init.yaml`, we find the following:
```yaml
exec: bash /start.sh
log: stdout
oneshot: true
```
We can see that the first line calls the [bash](https://www.gnu.org/software/bash/) Unix shell and that it will run the file `start.sh` we've seen earlier.
In this zinit service file, `ssh-init.yaml`, we define a service named `ssh-init`, where we tell zinit which command to execute (here `bash /start.sh`), where to log (here `stdout`), and we set `oneshot` to `true` (meaning that it should only be executed once).
If we take a look at the file `sshd.yaml`, we find the following:
```yaml
exec: bash -c "/usr/sbin/sshd -D"
after:
- ssh-init
```
Here, another service, `sshd`, runs only after the `ssh-init` service has completed.
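Once a VM based on this flist is running, you can also query zinit about individual services from inside the VM. This is a quick check, assuming the zinit CLI subcommands behave as described in the zinit repository; a service name is simply its unit filename without the `.yaml` extension.
```bash
# State of the one-shot setup service and of the SSH daemon
zinit status ssh-init
zinit status sshd
```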
### README.md File
As every good programmer knows, good code is nothing without good documentation to help others understand what's going on! This is where the `README.md` file comes into play.
In this file, we explain what our code is doing and offer steps to properly configure the whole deployment. Users who want to deploy the flist on the ThreeFold Playground will need the flist URL and the basic steps to deploy a micro VM on the TFGrid, so we add this information to the README.md file. This information can be seen in the [section below](#deploy-the-flist-on-the-tf-playground). To read the complete README.md file, go to [this link](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/debian).
### Putting it All Together
We've now gone through all the files available in the Ubuntu 22.04 directory of the tf-images repository. To build your own image, you simply need to put all those files in a local folder on your computer and follow the steps presented in the next section, [Docker Publishing Steps](#docker-publishing-steps).
To see the final result of the changes we made to the Ubuntu 22.04 version, have a look at the [Debian directory](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/debian) in ThreeFold's tf-images repository.
## Docker Publishing Steps
### Create Account and Access Token
To be able to push Docker images to the Docker Hub, you obviously need to create a Docker Hub account! This is very easy, and note that there is plenty of great Docker documentation online. If you're ever lost, your favorite search engine will quickly point you in the right direction.
Here are the steps to create an account and an access token.
* Go to the [Docker Hub](https://hub.docker.com/)
* Click `Register` and follow the steps given by Docker
* On the top right corner, click on your account name and select `Account Settings`
* On the left menu, click on `Security`
* Click on `New Access Token`
* Choose an Access Token description that you will easily identify then click `Generate`
* Make sure to set the permissions `Read, Write, Delete`
* Follow the steps given to properly connect your local computer to the Docker Hub
* Run `docker login -u <account_name>`
* Set the password
You now have access to the Docker Hub from your local computer. We will then proceed to push the Docker image we've created.
### Build and Push the Docker Image
* Make sure the Docker Daemon is running
* Build the docker container
* Template:
* ```
docker build -t <docker_username>/<docker_repo_name> .
```
* Example:
* ```
docker build -t username/debian12 .
```
* Push the docker container to the [Docker Hub](https://hub.docker.com/)
* Template:
* ```
docker push <your_username>/<docker_repo_name>
```
* Example:
* ```
docker push username/debian12
```
* You should now see your docker image on the [Docker Hub](https://hub.docker.com/) when you go into the menu option `My Profile`.
* Note that you can access this link quickly with the following template:
* ```
https://hub.docker.com/u/<account_name>
```
## Convert the Docker Image to an Flist
We will now convert the Docker image into a Zero-OS flist. This part is so easy you will almost wonder why you never heard about flists before!
* Go to the [ThreeFold Hub](https://hub.grid.tf/).
* Sign in with the ThreeFold Connect app.
* Go to the [Docker Hub Converter](https://hub.grid.tf/docker-convert) section.
* Next to `Docker Image Name`, add the docker image repository and name, see the example below:
* Template:
* `<docker_username>/docker_image_name:tagname`
* Example:
* `username/debian12:latest`
* Click `Convert the docker image`.
* Once the conversion is done, the flist is available as a public link on the ThreeFold Hub.
* To get the flist URL, go to the [TF Hub main page](https://hub.grid.tf/), scroll down to your 3Bot ID and click on it.
* Under `Name`, you will see all your available flists.
* Right-click on the flist you want and select `Copy Clean Link`. This URL will be used when deploying on the ThreeFold Playground. We show below the template and an example of what the flist URL looks like.
* Template:
* ```
https://hub.grid.tf/<3BOT_name.3bot>/<docker_username>-<docker_image_name>-<tagname>.flist
```
* Example:
* ```
https://hub.grid.tf/idrnd.3bot/username-debian12-latest.flist
```
## Deploy the Flist on the TF Playground
* Go to the [ThreeFold Playground](https://play.grid.tf).
* Set your profile manager.
* Go to the [Micro VM](https://play.grid.tf/#/vm) page.
* Choose your parameters (name, VM specs, etc.).
* Under `flist`, paste the Debian flist from the TF Hub you copied previously.
* Make sure the entrypoint is as follows:
* ```
/sbin/zinit init
```
* Choose a 3Node to deploy on.
* Click `Deploy`.
That's it! You can now SSH into your Debian deployment and change the world one line of code at a time!
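Here is a minimal connection sketch, assuming the address shown in the deployment details (public IPv4 or planetary network address) is reachable from your machine; the address below is a placeholder.
```bash
ssh root@<vm_address>
```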
## Conclusion
In this case study, we've seen the overall process of creating a new flist to deploy a Debian workload on a Micro VM on the ThreeFold Playground.
If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.

View File

@ -1,858 +0,0 @@
<h1> Flist Case Study: Nextcloud All-in-One </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Flist: What is It?](#flist-what-is-it)
- [Case Study Objective](#case-study-objective)
- [The Overall Process](#the-overall-process)
- [Docker Image Creation](#docker-image-creation)
- [Nextcloud Flist Directory Tree](#nextcloud-flist-directory-tree)
- [Caddyfile](#caddyfile)
- [Dockerfile](#dockerfile)
- [README.md File](#readmemd-file)
- [scripts Folder](#scripts-folder)
- [caddy.sh](#caddysh)
- [sshd\_init.sh](#sshd_initsh)
- [ufw\_init.sh](#ufw_initsh)
- [nextcloud.sh](#nextcloudsh)
- [nextcloud\_conf.sh](#nextcloud_confsh)
- [zinit Folder](#zinit-folder)
- [ssh-init.yaml and sshd.yaml](#ssh-inityaml-and-sshdyaml)
- [ufw-init.yaml and ufw.yaml](#ufw-inityaml-and-ufwyaml)
- [caddy.yaml](#caddyyaml)
- [dockerd.yaml](#dockerdyaml)
- [nextcloud.yaml](#nextcloudyaml)
- [nextcloud-conf.yaml](#nextcloud-confyaml)
- [Putting it All Together](#putting-it-all-together)
- [Docker Publishing Steps](#docker-publishing-steps)
- [Create Account and Access Token](#create-account-and-access-token)
- [Build and Push the Docker Image](#build-and-push-the-docker-image)
- [Convert the Docker Image to an Flist](#convert-the-docker-image-to-an-flist)
- [Deploy Nextcloud AIO on the TFGrid with Terraform](#deploy-nextcloud-aio-on-the-tfgrid-with-terraform)
- [Create the Terraform Files](#create-the-terraform-files)
- [Deploy Nextcloud with Terraform](#deploy-nextcloud-with-terraform)
- [Nextcloud Setup](#nextcloud-setup)
- [Conclusion](#conclusion)
***
# Introduction
In this case study, we explain how to create a new flist on the ThreeFold ecosystem. We will show the process of creating a Nextcloud All-in-One flist and we will deploy a micro VM on the ThreeFold Playground to access our Nextcloud instance. As a reference, the official Nextcloud flist is available [here](https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist.md).
To achieve all this, we will need to create a Docker Hub account, create a Dockerfile and its associated files, a docker image and a docker container, then convert the docker image to a Zero-OS flist. After all this, we will be deploying our Nextcloud instance on the ThreeFold Playground.
As a general rule, before creating an flist for a ThreeFold deployment, you should make sure that you are able to deploy your workload properly using a micro VM or a full VM on the TFGrid. Once you know all the steps to deploy your workload, and after some thorough tests, you can take what you've learned and incorporate it all into an flist.
## Flist: What is It?
Before we go any further, let us recall what an flist is. In short, an flist is a technology for storing and efficiently sharing sets of files. While it has many great features, its purpose in this case is simply to deliver the image contents to Zero-OS for execution as a micro VM. It thus acts as a bundle of files, like a normal archive.
One convenient thing about the flist technology is that it is possible to convert any Docker image into an flist, thanks to the [ThreeFold Docker Hub Converter tool](https://hub.grid.tf/docker-convert). It is very easy to do and we will show you how to proceed in this case study. For a quick guide on converting Docker images into flists, read [this section](../flist_hub/convert_docker_image.md) of the ThreeFold Manual.
## Case Study Objective
The goal of this case study is to give you enough information and tools so that you can build your own flist projects and deploy on the ThreeFold Grid.
We will explore the different files needed to create the flist and explain the overall process. Instead of starting from scratch, we will analyze the Nextcloud flist directory in the [tf-images](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud) ThreeFold Tech repository. As the project is already done, it will be easier to get an overview of the process and the different components so you can learn to create your own.
## The Overall Process
To give you a bird's-eye view of the whole project, here are the main steps:
* Create the Docker image
* Push the Docker image to the Docker Hub
* Convert the Docker image to a Zero-OS flist
* Deploy a micro VM with the flist on the ThreeFold Playground with Terraform
One important thing to keep in mind is that, when we create an flist, what we are doing is basically automating the steps required to deploy a given workload on the TFGrid. Usually, these steps would be done manually, step by step, by an individual deploying on a micro or a full VM.
Once we've successfully created an flist, we thus have a very quick way to deploy a specific workload while always obtaining the same result. This is why it is highly recommended to test a given deployment on a full or micro VM before building an flist.
For example, in the case of building a Nextcloud All-in-One flist, the prerequisites would be to successfully deploy a Nextcloud AIO instance on a full VM by executing each step sequentially. This specific example is documented in the Terraform section [Nextcloud All-in-One Guide](../../../system_administrators/terraform/advanced/terraform_nextcloud_aio.md) of the System Administrators book.
In our case, the flist we will be using has some specific configurations depending on the way we deploy Nextcloud (e.g. using or not the gateway and a custom domain). The Terraform **main.tf** we will be sharing later on will thus take all this into account for a smooth deployment.
# Docker Image Creation
As we've said previously, we will explore the different components of the existing Nextcloud flist directory. We thus want to check the existing files and try to understand as much as possible how the different components work together. This is also a very good introduction to the ThreeFold ecosystem.
We will be using the files available on the [ThreeFold Tech Github page](https://github.com/threefoldtech). In our case, we want to explore the repository [tf-images](https://github.com/threefoldtech/tf-images).
If you go to the subsection [tfgrid3](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3), you can see many different flists available. In our case, we want to deploy the [Nextcloud All-in-One Flist](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud).
## Nextcloud Flist Directory Tree
The Nextcloud flist directory tree is the following:
```
tree tf-images/tfgrid3/nextcloud
.
├── Caddyfile
├── Dockerfile
├── README.md
├── scripts
│ ├── caddy.sh
│ ├── nextcloud_conf.sh
│ ├── nextcloud.sh
│ ├── sshd_init.sh
│ └── ufw_init.sh
└── zinit
├── caddy.yaml
├── dockerd.yaml
├── nextcloud-conf.yaml
├── nextcloud.yaml
├── sshd.yaml
├── ssh-init.yaml
├── ufw-init.yaml
└── ufw.yaml
```
We can see that the directory is composed of a Caddyfile, a Dockerfile, a README.md and two directories, **scripts** and **zinit**. We will now explore each of those components to have a good grasp of the whole repository and to understand how it all works together.
To get the big picture of this directory: the **README.md** file provides the documentation users need to understand the Nextcloud flist, how it is built and how it works; the **Caddyfile** provides the configuration needed to run the reverse proxy; the **Dockerfile** specifies how the Docker image is built, installing components such as [openssh](https://www.openssh.com/) and the [ufw firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) for a secure remote connection; and the two folders, **scripts** and **zinit**, work hand in hand.
Each `.yaml` file is a *unit file* for zinit. That means it specifies a single service for zinit to start. We'll learn more about these files later, but for now we can just note that each script file (ending with `.sh`) has an associated zinit file to make sure that the script is run. There are also some other files for running programs aside from our scripts.
## Caddyfile
For our Nextcloud deployment, we are using Caddy as a reverse proxy. A reverse proxy is an application that sits in front of back-end applications and forwards client requests to those applications.
Since Nextcloud AIO actually includes two web applications, both Nextcloud itself and the AIO management interface, we use the reverse proxy to serve them both on a single domain. It also allows us to make some changes on the fly to the content of the AIO site to considerably enhance the user experience. Finally, we also use Caddy to provide SSL termination if the user reserves a public IP and no gateway, since otherwise SSL termination is provided by the gateway.
File: `Caddyfile`
```
{
    order replace after encode
    servers {
        trusted_proxies static 100.64.0.0/10 10.0.0.0/8
    }
}

{$DOMAIN}:{$PORT} {
    handle_path /aio* {
        replace {
            href="/ href="/aio/
            src="/ src="/aio/
            action=" action="/aio
            url(' url('/aio
            `value="" placeholder="nextcloud.yourdomain.com"` `value="{$DOMAIN}"`
            `"Submit domain"` `"Submit domain" id="domain-submit"`
            {$REPLACEMENTS}
            <body> {$BODY}
        }
        reverse_proxy localhost:8000 {
            header_down Location "^/(.*)$" "/aio/$1"
            header_down Refresh "^/(.*)$" "/aio/$1"
        }
    }

    redir /api/auth/getlogin /aio{uri}
    reverse_proxy localhost:11000

    handle_errors {
        @502-aio expression {err.status_code} == 502 && path('/aio*')
        handle @502-aio {
            header Content-Type text/html
            respond <<HTML
                <html>
                    <head><title>Nextcloud</title></head>
                    <body>Your Nextcloud management interface isn't ready. If you just deployed this instance, please wait a minute and refresh the page.</body>
                </html>
                HTML 200
        }
        @502 expression {err.status_code} == 502
        handle @502 {
            redir /* /aio
        }
    }
}
```
We can see in the first section (`trusted_proxies static`) that we set a range of IP addresses as trusted proxy addresses. These include the possible source addresses for gateway traffic, which we mark as trusted for compatibility with some Nextcloud features.
After the global config at the top, the line `{$DOMAIN}:{$PORT}` defines the port that Caddy will listen to and the domain that we are using for our site. This is important, because in the case that port `443` is specified, Caddy will handle SSL certificates automatically.
The following blocks define behavior for different URL paths that users might try to access.
To begin, we have `/aio*`. This is how we place the AIO management app in a "subfolder" of our main domain. To accomplish that we need a few rules that rewrite the contents of the returned pages to correct the links. We also add some text replacements here to accomplish the enhancements mentioned earlier, like automatically filling the domain entry field.
With the `reverse_proxy` line, we specify that requests to all URLs starting with `/aio` should be sent to the web server running on port `8000` of `localhost`. That's the port where the AIO server is listening, as we'll see below. There's also a couple of header rewrite rules here that correct the links for any redirects the AIO site makes.
The `redir` line is needed to support a feature where users open the AIO interface from within Nextcloud. This redirects the original request to the correct equivalent within the `/aio` "subfolder".
Then there's a second `reverse_proxy` line, which is the catch-all for any traffic that didn't get intercepted earlier. This handles the actual Nextcloud app and sends the traffic to its separate server running on port `11000`.
The section starting with `handle_errors` ensures that the user will receive an understandable error message when trying to access the Nextcloud deployment before it has fully started up.
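Once everything is running, a quick way to see the reverse proxy in action is to request both paths from your own machine. The domain below is a placeholder for whatever is passed via `NEXTCLOUD_DOMAIN`.
```bash
# The AIO management interface, served under the /aio "subfolder"
curl -I https://nextcloud.example.com/aio/

# The Nextcloud app itself, served by the catch-all reverse_proxy
curl -I https://nextcloud.example.com/
```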
## Dockerfile
We recall that to make a Docker image, you need to create a Dockerfile. As per the [Docker documentation](https://docs.docker.com/engine/reference/builder/), a Dockerfile is "a text document that contains all the commands a user could call on the command line to assemble an image".
File: `Dockerfile`
```Dockerfile
FROM ubuntu:22.04
RUN apt update && \
apt -y install wget openssh-server curl sudo ufw inotify-tools iproute2
RUN wget -O /sbin/zinit https://github.com/threefoldtech/zinit/releases/download/v0.2.5/zinit && \
chmod +x /sbin/zinit
RUN wget -O /sbin/caddy 'https://caddyserver.com/api/download?os=linux&arch=amd64&p=github.com%2Fcaddyserver%2Freplace-response&idempotency=43631173212363' && \
chmod +x /sbin/caddy
RUN curl -fsSL https://get.docker.com -o /usr/local/bin/install-docker.sh && \
chmod +x /usr/local/bin/install-docker.sh
RUN sh /usr/local/bin/install-docker.sh
COPY ./Caddyfile /etc/caddy/
COPY ./scripts/ /scripts/
COPY ./zinit/ /etc/zinit/
RUN chmod +x /scripts/*.sh
ENTRYPOINT ["/sbin/zinit", "init"]
```
We can see from the first line that this Dockerfile uses a base image of Ubuntu Linux version 22.04.
With the first **RUN** command, we refresh the package lists, and then install **openssh**, **ufw** and other dependencies for our Nextcloud uses. Note that we also install **curl** so that we can quickly install **Docker**.
With the second **RUN** command, we install **zinit** and we give it execution permission with the command `chmod +x`. More will be said about zinit in a section below.
With the third **RUN** command, we install **caddy** and we give it execution permission with the command `chmod +x`. Caddy is an extensible, cross-platform, open-source web server written in Go. For more information on Caddy, check the [Caddy website](https://caddyserver.com/).
With the fourth **RUN** command, we download the `install-docker.sh` script and give it proper permissions. On a terminal, the usual one-liner to install Docker would be `curl -fsSL https://get.docker.com | sudo sh`. To understand what's really going on here, you can simply open the link provided on that line, [https://get.docker.com](https://get.docker.com), for more information.
The fifth **RUN** command runs the `install-docker.sh` script to properly install Docker within the image.
Once those commands are run, we proceed to copy into our Docker image the necessary folders `scripts` and `zinit` as well as the Caddyfile. Once this is done, we give execution permissions to all scripts in the scripts folder using `chmod +x`.
Finally, we set an entrypoint in our Dockerfile. As per the [Docker documentation](https://docs.docker.com/engine/reference/builder/), an entrypoint "allows you to configure a container that will run as an executable". Since we are using zinit, we set the entrypoint to `/sbin/zinit init`.
## README.md File
The **README.md** file has the main goal of explaining clearly to the user the functioning of the Nextcloud directory and its associated flist. In this file, we can explain what our code is doing and offer steps to properly configure the whole deployment.
We also give the necessary steps to create the Docker image and convert it into an flist starting directly with the Nextcloud directory. This can be useful for users that want to create their own flist, instead of using the [official ThreeFold Nextcloud flist](https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist.md).
To read the complete README.md file, go to [this link](https://github.com/threefoldtech/tf-images/blob/development/tfgrid3/nextcloud/README.md).
## scripts Folder
The **scripts** folder, unsurprisingly, contains the scripts necessary to run the Nextcloud instance.
In the Nextcloud Flist case, there are five scripts:
* **caddy.sh**
* **nextcloud.sh**
* **nextcloud_conf.sh**
* **sshd_init.sh**
* **ufw_init.sh**
Let's take a look at each of them.
### caddy.sh
File: `caddy.sh`
```bash
#!/bin/bash
export DOMAIN=$NEXTCLOUD_DOMAIN
if $IPV4 && ! $GATEWAY; then
export PORT=443
else
export PORT=80
fi
if $IPV4; then
export BODY="\`<body onload=\"if (document.getElementById('domain-submit')) {document.getElementById('domain-submit').click()}\">\`"
else
export BODY="\`<body onload=\"if (document.getElementById('domain-submit')) {document.getElementById('domain-submit').click()}; if (document.getElementById('talk') && document.getElementById('talk').checked) {document.getElementById('talk').checked = false; document.getElementById('options-form-submit').click()}\">\`"
export REPLACEMENTS=' `name="talk"` `name="talk" disabled`
`needs ports 3478/TCP and 3478/UDP open/forwarded in your firewall/router` `running the Talk container requires a public IP and this VM does not have one. It is still possible to use Talk in a limited capacity. Please consult the documentation for details`'
fi
caddy run --config /etc/caddy/Caddyfile
```
The script **caddy.sh** sets the proper port depending on the network configuration (e.g. IPv4 or Gateway) in the first if/else section. In the second if/else section, the script also makes sure that the proper domain is given to Nextcloud All-in-One. This quickens the installation process as the user doesn't have to set the domain in Nextcloud AIO after deployment. We also disable a feature that's not relevant if the user didn't reserve an IPv4 address and we insert a note about that.
### sshd_init.sh
File: `sshd_init.sh`
```bash
#!/bin/bash
mkdir -p ~/.ssh
mkdir -p /var/run/sshd
chmod 600 ~/.ssh
chmod 600 /etc/ssh/*
echo $SSH_KEY >> ~/.ssh/authorized_keys
```
This file starts with a shebang (`#!`) that instructs the operating system to execute the following lines using the [Bash shell](https://www.gnu.org/software/bash/). In essence, it lets us write `./sshd_init.sh` with the same outcome as `bash ./sshd_init.sh`, assuming the file is executable.
The goal of this script is to add the public key within the VM in order for the user to get a secure and remote connection to the VM. The two lines starting with `mkdir` create the necessary folders. The lines starting with `chmod` give the owner the permission to write and read the content within the folders. Finally, the line `echo` will write the public SSH key in a file within the VM. In the case that the flist is used as a weblet, the SSH key is set in the Playground profile manager and passed as an environment variable when we deploy the solution.
### ufw_init.sh
File: `ufw_init.sh`
```bash
#!/bin/bash
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow http
ufw allow https
ufw allow 8443
ufw allow 3478
ufw limit ssh
```
The goal of the `ufw_init.sh` script is to set the correct firewall parameters to make sure that our deployment is secure while also providing the necessary access for the Nextcloud users.
The first two lines starting with `ufw default` are self-explanatory. We want to restrain incoming traffic while making sure that outgoing traffic has no restraints.
The lines starting with `ufw allow` open the ports necessary for our Nextcloud instance. We note that **ssh** is port 22, **http** is port 80 and **https** is port 443. This means, for example, that the line `ufw allow 22` is equivalent to the line `ufw allow ssh`.
Port 8443 can be used to access the AIO interface, as an alternative to using the `/aio` "subfolder" on deployments with a public IPv4 address. Finally, the port 3478 is used for Nextcloud Talk.
The line `ufw limit ssh` will provide additional security by denying connection from IP addresses that attempt to initiate 6 or more connections within a 30-second period.
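If you want to confirm the resulting firewall state from inside a running VM, `ufw` can report it directly. This is a quick, optional check, assuming you have an SSH session into the VM as root.
```bash
# Show the active firewall configuration, default policies and allowed ports
ufw status verbose
```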
### nextcloud.sh
File: `nextcloud.sh`
```bash
#!/bin/bash
export COMPOSE_HTTP_TIMEOUT=800
while ! docker info > /dev/null 2>&1; do
echo docker not ready
sleep 2
done
docker run \
--init \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 8000:8000 \
--publish 8080:8080 \
--env APACHE_PORT=11000 \
--env APACHE_IP_BINDING=0.0.0.0 \
--env SKIP_DOMAIN_VALIDATION=true \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest
```
The **nextcloud.sh** script is where the real action starts. This is where we run the Nextcloud All-in-One docker image.
Before discussing the main part of this script, we note that the `while` loop is used to ensure that the `docker run` command starts only after the Docker daemon has properly started.
The code section starting with `docker run` is taken from the [Nextcloud All-in-One repository on Github](https://github.com/nextcloud/all-in-one) with some slight modifications. The last line indicates that the Docker image being pulled will always be the latest version of Nextcloud All-in-One.
We note here that Nextcloud AIO is published on ports 8000 and 8080. We also note that we set restart to **always**. This is very important, as it makes sure that the Nextcloud instance is restarted if the Docker daemon reboots. We take the opportunity to note that, with the way zinit configures micro VMs, the Docker daemon restarts automatically after a reboot. Combined with the line `--restart always`, this ensures that the Nextcloud instance will come back up after a VM reboot.
We also set **11000** as the Apache port with an IP binding of **0.0.0.0**. For our deployment, we want to skip the domain validation, so **SKIP_DOMAIN_VALIDATION** is set to **true**.
Considering the line `--sig-proxy=false`, when this command is run interactively, it prevents the user from accidentally killing the spawned AIO container. While it is not of great importance in our case, it means that zinit will not kill the container if the service is stopped.
For more information on this, we invite the readers to consult the [Nextcloud documentation](https://github.com/nextcloud/all-in-one#how-to-use-this).
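Once the VM is up, you can verify from inside the VM that the master container is running before opening the management interface. These are standard Docker commands; the container name comes from the `docker run` line above.
```bash
# The mastercontainer should be listed with its published ports (8000, 8080)
docker ps --filter name=nextcloud-aio-mastercontainer

# Follow its startup logs if something doesn't look right
docker logs -f nextcloud-aio-mastercontainer
```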
### nextcloud_conf.sh
File: `nextcloud_conf.sh`
```bash
#!/bin/bash
# Wait for the nextcloud container to become healthy. Note that we can set the
# richtext config parameters even before the app is installed
nc_ready () {
until [[ "`docker inspect -f {{.State.Health.Status}} nextcloud-aio-nextcloud 2> /dev/null`" == "healthy" ]]; do
sleep 1;
done;
}
# When a gateway is used, AIO sets the WOPI allow list to only include the
# gateway IP. Since requests don't originate from the gateway IP, they are
# blocked by default. Here we add the public IP of the VM, or of the router
# upstream of the node
# See: github.com/nextcloud/security-advisories/security/advisories/GHSA-24x8-h6m2-9jf2
if $IPV4; then
interface=$(ip route show default | cut -d " " -f 5)
ipv4_address=$(ip a show $interface | grep -Po 'inet \K[\d.]+')
fi
if $GATEWAY; then
nc_ready
wopi_list=$(docker exec --user www-data nextcloud-aio-nextcloud php occ config:app:get richdocuments wopi_allowlist)
if $IPV4; then
ip=$ipv4_address
else
ip=$(curl -fs https://ipinfo.io/ip)
fi
if [[ $ip ]] && ! echo $wopi_list | grep -q $ip; then
docker exec --user www-data nextcloud-aio-nextcloud php occ config:app:set richdocuments wopi_allowlist --value=$ip
fi
fi
# If the VM has a gateway and a public IPv4, then AIO will set the STUN/TURN
# servers to the gateway domain which does not point to the public IP, so we
# use the IP instead. In this case, we must wait for the Talk app to be
# installed before changing the settings. With inotifywait, we don't need
# a busy loop that could run indefinitely
apps_dir=/mnt/data/docker/volumes/nextcloud_aio_nextcloud/_data/custom_apps/
if $GATEWAY && $IPV4; then
if [[ ! -d ${apps_dir}spreed ]]; then
inotifywait -qq -e create --include spreed $apps_dir
fi
nc_ready
turn_list=$(docker exec --user www-data nextcloud-aio-nextcloud php occ talk:turn:list)
turn_secret=$(echo "$turn_list" | grep secret | cut -d " " -f 4)
turn_server=$(echo "$turn_list" | grep server | cut -d " " -f 4)
if ! echo $turn_server | grep -q $ipv4_address; then
docker exec --user www-data nextcloud-aio-nextcloud php occ talk:turn:delete turn $turn_server udp,tcp
docker exec --user www-data nextcloud-aio-nextcloud php occ talk:turn:add turn $ipv4_address:3478 udp,tcp --secret=$turn_secret
fi
stun_list=$(docker exec --user www-data nextcloud-aio-nextcloud php occ talk:stun:list)
stun_server=$(echo $stun_list | cut -d " " -f 2)
if ! echo $stun_server | grep -q $ipv4_address; then
docker exec --user www-data nextcloud-aio-nextcloud php occ talk:stun:add $ipv4_address:3478
docker exec --user www-data nextcloud-aio-nextcloud php occ talk:stun:delete $stun_server
fi
fi
```
The script **nextcloud_conf.sh** ensures that the network settings are properly configured. In the first section, we define a function called **nc_ready()**. This function makes sure that the rest of the script only runs once the Nextcloud container is healthy.
We note that the comments present in this script explain very well what is happening. In short, we want to set the Nextcloud instance according to the user's choice of network. For example, the user can decide to deploy using a ThreeFold gateway or a standard IPv4 connection. If the VM has a gateway and a public IPv4, then Nextcloud All-in-One will set the STUN/TURN servers to the gateway domain which does not point to the public IP, so we use the IP instead.
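To verify that the configuration script did its job, you can query the same `occ` settings it modifies, from inside the VM, once the Nextcloud container is healthy. These commands simply mirror the ones used in the script above.
```bash
# The WOPI allowlist that the script extends with the public IP
docker exec --user www-data nextcloud-aio-nextcloud php occ config:app:get richdocuments wopi_allowlist

# The STUN/TURN servers used by Nextcloud Talk
docker exec --user www-data nextcloud-aio-nextcloud php occ talk:turn:list
docker exec --user www-data nextcloud-aio-nextcloud php occ talk:stun:list
```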
## zinit Folder
Next, we want to take a look at the zinit folder.
But first, what is zinit? In a nutshell, zinit is a process manager (pid 1) that knows how to launch, monitor and sort dependencies. It thus executes targets in the proper order. For more information on zinit, check the [zinit repository](https://github.com/threefoldtech/zinit).
When we start the Docker container, zinit will parse each unit file in the `/etc/zinit` folder and execute the contained command according to the specified parameters.
In the Nextcloud Flist case, there are eight **.yaml** files:
* **caddy.yaml**
* **dockerd.yaml**
* **nextcloud-conf.yaml**
* **nextcloud.yaml**
* **ssh-init.yaml**
* **sshd.yaml**
* **ufw-init.yaml**
* **ufw.yaml**
### ssh-init.yaml and sshd.yaml
We start by taking a look at the **ssh-init.yaml** and **sshd.yaml** files.
File: `ssh-init.yaml`
```yaml
exec: /scripts/sshd_init.sh
oneshot: true
```
In this zinit service file, we define a service named `ssh-init.yaml`, where we tell zinit to execute the following command: `exec: /scripts/sshd_init.sh`. This unit file thus runs the script `sshd_init.sh` we covered in a previous section.
We also note that `oneshot` is set to `true` and this means that it should only be executed once. This directive is often used for setup scripts that only need to run once. When it is not specified, the default value of `false` means that zinit will continue to start up a service if it ever dies.
Now, we take a look at the file `sshd.yaml`:
File: `sshd.yaml`
```yaml
exec: bash -c "/usr/sbin/sshd -D"
after:
- ssh-init
```
We can see that this file executes a line from the Bash shell. It is important to note that, with zinit and .yaml files, you can easily order the executions of the files with the `after` directive. In this case, it means that the service `sshd` will only run after `ssh-init`.
### ufw-init.yaml and ufw.yaml
Let's take a look at the files **ufw-init.yaml** and **ufw.yaml**.
File: `ufw-init.yaml`
```yaml
exec: /scripts/ufw_init.sh
oneshot: true
```
The file `ufw-init.yaml` is very similar to the previous file `ssh-init.yaml`.
File: `ufw.yaml`
```yaml
exec: ufw --force enable
oneshot: true
after:
- ufw-init
```
We can see that the file `ufw.yaml` will only run once and only after the file `ufw-init.yaml` has been run. This is important since the file `ufw-init.yaml` executes the script `ufw_init.sh`. We recall this script allows different ports in the firewall. Once those ports are defined, we can then run the command `ufw --force enable`. This will start the ufw firewall.
### caddy.yaml
```yaml
exec: /scripts/caddy.sh
oneshot: true
```
This is also very similar to previous files and just runs the Caddy script as a oneshot.
### dockerd.yaml
We now take a look at the file **dockerd.yaml**.
File: `dockerd.yaml`
```yaml
exec: /usr/bin/dockerd --data-root /mnt/data/docker
```
This file will run the [dockerd daemon](https://docs.docker.com/engine/reference/commandline/dockerd/) which is the persistent process that manages containers. We also note that it sets the data to be stored in the directory **/mnt/data/docker**, which is important because we will mount a virtual disk there that will provide better performance, especially for Docker's storage driver.
### nextcloud.yaml
File: `nextcloud.yaml`
```yaml
exec: /scripts/nextcloud.sh
after:
- dockerd
```
The file `nextcloud.yaml` runs after dockerd.
This file will execute the `nextcloud.sh` script we saw earlier. We recall that this script starts the Nextcloud All-in-One image.
### nextcloud-conf.yaml
File: `nextcloud-conf.yaml`
```yaml
exec: /scripts/nextcloud_conf.sh
oneshot: true
after:
- nextcloud
```
Finally, the file `nextcloud-conf.yaml` runs after `nextcloud.yaml`.
This file will execute the `nextcloud_conf.sh` script we saw earlier. We recall that this script adjusts the Nextcloud configuration to the chosen network setup. At this point, the deployment is complete.
## Putting it All Together
We've now gone through all the files in the Nextcloud flist directory. You should now have a proper understanding of the interplay between the zinit (.yaml) and the scripts (.sh) files as well as the basic steps to build a Dockerfile and to write clear documentation.
To build your own Nextcloud docker image, you would simply need to clone this directory to your local computer and to follow the steps presented in the next section [Docker Publishing Steps](#docker-publishing-steps).
To have a look at the complete directory, you can always refer to the [Nextcloud flist directory](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud) on the ThreeFold tf-images repository.
# Docker Publishing Steps
In this section, we show the necessary steps to publish the Docker image to the Docker Hub.
To do so, we need to create an account and an access token. Then we will build the Docker image and push it to the Docker Hub.
## Create Account and Access Token
To be able to push Docker images to the Docker Hub, you obviously need to create a Docker Hub account! This is very easy, and note that there are many great Docker tutorials online.
Here are the steps to create an account and an access token:
* Go to the [Docker Hub](https://hub.docker.com/)
* Click `Register` and follow the steps given by Docker
* On the top right corner, click on your account name and select `Account Settings`
* On the left menu, click on `Security`
* Click on `New Access Token`
* Choose an Access Token description that you will easily identify then click `Generate`
* Make sure to set the permissions `Read, Write, Delete`
* On your local computer, make sure that the Docker daemon is running
* Write the following in the command line to connect to the Docker hub:
* Run `docker login -u <account_name>`
* Set the password
You now have access to the Docker Hub from your local computer. We will then proceed to push the Docker image to the Docker Hub.
## Build and Push the Docker Image
* Make sure the Docker Daemon is running
* Build the docker container (note that, while the tag is optional, it can help to track different versions)
* Template:
* ```
docker build -t <docker_username>/<docker_repo_name>:<tag> .
```
* Example:
* ```
docker build -t dockerhubuser/nextcloudaio .
```
* Push the docker container to the [Docker Hub](https://hub.docker.com/)
* Template:
* ```
docker push <your_username>/<docker_repo_name>
```
* Example:
* ```
docker push dockerhubuser/nextcloudaio
```
* You should now see your docker image on the [Docker Hub](https://hub.docker.com/) when you go into the menu option `My Profile`.
* Note that you can access this link quickly with the following template:
* ```
https://hub.docker.com/u/<account_name>
```
# Convert the Docker Image to an Flist
We will now convert the Docker image into a Zero-OS flist.
* Go to the [ThreeFold Hub](https://hub.grid.tf/).
* Sign in with the ThreeFold Connect app.
* Go to the [Docker Hub Converter](https://hub.grid.tf/docker-convert) section.
* Next to `Docker Image Name`, add the docker image repository and name, see the example below:
* Template:
* `<docker_username>/docker_image_name:tagname`
* Example:
* `dockerhubuser/nextcloudaio:latest`
* Click `Convert the docker image`.
* Once the conversion is done, the flist is available as a public link on the ThreeFold Hub.
* To get the flist URL, go to the [TF Hub main page](https://hub.grid.tf/), scroll down to your 3Bot ID and click on it.
* Under `Name`, you will see all your available flists.
* Right-click on the flist you want and select `Copy Clean Link`. This URL will be used when deploying on the ThreeFold Playground. We show below the template and an example of what the flist URL looks like.
* Template:
* ```
https://hub.grid.tf/<3BOT_name.3bot>/<docker_username>-<docker_image_name>-<tagname>.flist
```
* Example:
* ```
https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist
```
# Deploy Nextcloud AIO on the TFGrid with Terraform
We now proceed to deploy a Nextcloud All-in-One instance by using the Nextcloud flist we've just created.
To do so, we will deploy a micro VM with the Nextcloud flist on the TFGrid using Terraform.
## Create the Terraform Files
For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.
To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file reads those variables (e.g. **var.size** for the disk size), so you do not need to change it. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to run the Terraform deployment with the main.tf as is.
For this example, we will be deploying with a ThreeFold gateway as well as a gateway domain.
* Copy the following content and save the file under the name `credentials.auto.tfvars`:
```
mnemonics = "..."
network = "main"
SSH_KEY = "..."
size = "50"
cpu = "2"
memory = "4096"
gateway_id = "50"
vm1_id = "5453"
deployment_name = "nextcloudgateway"
nextcloud_flist = "https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist"
```
Make sure to add your own seed phrase and SSH public key. Simply replace the three dots by the content. Note that you can deploy on a different node than node 5453 for the **vm1** node. If you want to deploy on another node than node 5453 for the **gateway** node, make sure that you choose a gateway node. To find a gateway node, go on the [ThreeFold Dashboard](https://dashboard.grid.tf/) Nodes section of the Explorer and select **Gateways (Only)**.
Obviously, you can decide to increase or modify the quantity in the variables `size`, `cpu` and `memory`.
Note that in our case, we set the flist to be the official Nextcloud flist. Simply replace the URL with your newly created Nextcloud flist to test it!
* Copy the following content and save the file under the name `main.tf`:
```
variable "mnemonics" {
type = string
default = "your mnemonics"
}
variable "network" {
type = string
default = "main"
}
variable "SSH_KEY" {
type = string
default = "your SSH pub key"
}
variable "deployment_name" {
type = string
}
variable "size" {
type = string
}
variable "cpu" {
type = string
}
variable "memory" {
type = string
}
variable "nextcloud_flist" {
type = string
}
variable "gateway_id" {
type = string
}
variable "vm1_id" {
type = string
}
terraform {
required_providers {
grid = {
source = "threefoldtech/grid"
}
}
}
provider "grid" {
mnemonics = var.mnemonics
network = var.network
}
data "grid_gateway_domain" "domain" {
node = var.gateway_id
name = var.deployment_name
}
resource "grid_network" "net" {
nodes = [var.gateway_id, var.vm1_id]
ip_range = "10.1.0.0/16"
name = "network"
description = "My network"
add_wg_access = true
}
resource "grid_deployment" "d1" {
node = var.vm1_id
network_name = grid_network.net.name
disks {
name = "data"
size = var.size
}
vms {
name = "vm1"
flist = var.nextcloud_flist
cpu = var.cpu
memory = var.memory
rootfs_size = 15000
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
GATEWAY = "true"
IPV4 = "false"
NEXTCLOUD_DOMAIN = data.grid_gateway_domain.domain.fqdn
}
mounts {
disk_name = "data"
mount_point = "/mnt/data"
}
}
}
resource "grid_name_proxy" "p1" {
node = var.gateway_id
name = data.grid_gateway_domain.domain.name
backends = [format("http://%s:80", grid_deployment.d1.vms[0].ip)]
network = grid_network.net.name
tls_passthrough = false
}
output "wg_config" {
value = grid_network.net.access_wg_config
}
output "vm1_ip" {
value = grid_deployment.d1.vms[0].ip
}
output "vm1_ygg_ip" {
value = grid_deployment.d1.vms[0].ygg_ip
}
output "fqdn" {
value = data.grid_gateway_domain.domain.fqdn
}
```
## Deploy Nextcloud with Terraform
We now deploy Nextcloud with Terraform. Make sure that you are in the correct folder containing the main and variables files.
* Initialize Terraform:
* ```
terraform init
```
* Apply Terraform to deploy Nextcloud:
* ```
terraform apply
```
Note that, at any moment, if you want to see the information on your Terraform deployment, write the following:
* ```
terraform show
```
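Since **main.tf** defines outputs, you can also print a single value when you need it, for instance the gateway domain used in the next section (assuming the deployment files shown above):
* ```
terraform output fqdn
```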
## Nextcloud Setup
Once you've deployed Nextcloud, you can access the Nextcloud setup page by pasting the URL displayed on the line `fqdn = "..."` of the Terraform output.
# Conclusion
In this case study, we've seen the overall process of creating a new flist to deploy a Nextcloud instance on a Micro VM on the TFGrid with Terraform.
If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.

Binary file not shown.


View File

@ -1,33 +0,0 @@
<h1> TF Hub API Token </h1>
<h2> Table of Contents </h2>
- [Generate an API Token](#generate-an-api-token)
- [Verify the Token Validity](#verify-the-token-validity)
***
## Generate an API Token
To generate an API Token on the TF Hub, follow these steps:
* Go to the [ThreeFold Hub](https://hub.grid.tf/)
* Open the top right drop-down menu
* Click on `Generate API Token`
* Take note of the token and keep it somewhere safe
## Verify the Token Validity
To make sure the generated token is valid, in the terminal write the following with your own API Token:
```bash
curl -H "Authorization: bearer <API_Token>" https://hub.grid.tf/api/flist/me
```
You should see the following line with your own 3Bot ID:
```bash
{"status": "success", "payload": {"username": "<3BotID>.3bot"}}
```
You can then use this API Token in the terminal to [get and update information through the API](./zos_hub.md#get-and-update-information-through-the-api).
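For example, once the token is confirmed valid, the same header lets you call the restricted endpoints described in the Zero-OS Hub section, such as renaming one of your flists. The flist names below are placeholders.
```bash
curl -H "Authorization: bearer <API_Token>" \
  https://hub.grid.tf/api/flist/me/<old_name>.flist/rename/<new_name>.flist
```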

View File

@ -1,45 +0,0 @@
<h1> Convert Docker Image to Flist </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Upload the Image](#upload-the-image)
- [Flist on the Hub](#flist-on-the-hub)
***
## Introduction
We show the steps to convert a docker image to an Flist.
## Upload the Image
1. Upload the Docker image to Docker Hub with the following command:
```bash
docker push <image_name>
```
2. Navigate to the docker converter link: https://hub.grid.tf/docker-convert
![ ](./img/docker_convert.png)
3. Copy the name of the uploaded Docker image to the Docker Image Name field.
4. Then press the convert button.
When the image is ready, some information will be displayed.
![ ](./img/flist_ready.png)
## Flist on the Hub
To navigate to the created flist, you can search with the newly created file name in the search tab.
![ ](./img/search.png)
You can also go to your repository in the contributors section of the Zero-OS Hub and navigate to the newly created flist.
Then press the preview button to display the flist's url and some other data.
![ ](./img/preview.png)

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


View File

@ -1,142 +0,0 @@
<h1> Zero-OS Hub </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Upload Your Files](#upload-your-files)
- [Merge Multiple Flists](#merge-multiple-flists)
- [Convert Docker Images and Tar Files](#convert-docker-images-and-tar-files)
- [Upload Customized Flists](#upload-customized-flists)
- [Upload Homemade Flists](#upload-homemade-flists)
- [Upload your Existing Flist to Reduce Bandwidth](#upload-your-existing-flist-to-reduce-bandwidth)
- [Authenticate via 3Bot](#authenticate-via-3bot)
- [Get and Update Information Through the API](#get-and-update-information-through-the-api)
- [Public API Endpoints (No Authentication Required)](#public-api-endpoints-no-authentication-required)
- [Restricted API Endpoints (Authentication Required)](#restricted-api-endpoints-authentication-required)
- [API Request Templates and Examples](#api-request-templates-and-examples)
***
## Introduction
The [ThreeFold Zero-OS Hub](https://hub.grid.tf/) allows you to do multiple things and acts as a public, centralized repository of flists.
The ZOS Hub is mainly there to give an easy way to distribute flist files, which are databases of metadata that you can use in any Zero-OS container or virtual machine.
## Upload Your Files
In order to easily publish your files, you can upload a `.tar.gz` archive and the hub will automatically convert it to an flist
and store the contents in the hub backend. After that, you can use your flist directly in a container.
## Merge Multiple Flists
In order to reduce the maintenance of your images, products, etc., flists allow you to keep your
different products and files separate and then merge them with another flist, making them usable without
having to keep the base system up-to-date yourself.
Example: there is an official `ubuntu 16.04` flist image. You can make an flist which contains only your application files
and then merge it with the Ubuntu flist, so the resulting flist is your product on the latest version of Ubuntu.
You don't need to take care of the base system yourself; just merge it with the one provided.
## Convert Docker Images and Tar Files
The ZOS Hub allows you to convert Docker Hub images and Tar files into flists thanks to the Docker Hub Converter.
You can convert a docker image (e.g. `busybox`, `ubuntu`, `fedora`, `couchdb`, ...) to an flist directly from the backend; this allows you to use your existing docker images on our infrastructure out of the box. Go to the [Docker Hub Converter](https://hub.grid.tf/docker-convert) to use this feature. For more information on the process, read the section [Convert Docker Image to flist](./convert_docker_image.md) of the TF Manual.
You can also easily convert a Tar file into an flist via the [Upload section](https://hub.grid.tf/upload) of the ZOS Hub.
## Upload Customized Flists
The ZOS Hub also allows you to customize an flist via the [Customization section](https://hub.grid.tf/merge) of the ZOS Hub. Note that this feature is currently in beta.
## Upload Homemade Flists
The ZOS Hub allows you to upload flists that you've made yourself via the section [Upload a homemade flist](https://hub.grid.tf/upload-flist).
## Upload your Existing Flist to Reduce Bandwidth
In addition, with the hub-client (a side product) you can efficiently upload file contents
to keep the backend up-to-date and upload a self-made flist. This allows you to do all the work yourself
and gives you full control of the chain. The only restriction is that the contents of the files you host
on the flist need to exist on the backend, otherwise your flist will be rejected.
## Authenticate via 3Bot
All operations on the ZOS Hub need to be done via `3Bot` (default) authentication. Only downloading an flist can be done anonymously. To authenticate requests via the API, you need to generate an API Token as shown in the section [ZOS Hub API Token](./api_token.md).
## Get and Update Information Through the API
The hub hosts a basic REST API which gives you some information about flists and lets you rename them, remove them, etc.
To use authenticated endpoints, you need to provide a valid itsyou.online `jwt` via the `Authorization: bearer <jwt>` header.
This `jwt` can contain a special `memberof` claim to allow you cross-repository actions.
If your `jwt` contains `memberof`, you can choose which user you want to act as by specifying the cookie `active-user`.
See the examples below.
### Public API Endpoints (No Authentication Required)
- `/api/flist` (**GET**)
- Returns a json array with all repository/flists found
- `/api/repositories` (**GET**)
- Returns a json array with all repositories found
- `/api/fileslist` (**GET**)
- Returns a json array with all repositories and files found
- `/api/flist/<repository>` (**GET**)
- Returns a json array of each flist found inside specified repository.
- Each entry contains `filename`, `size`, `updated` date and `type` (regular or symlink), optionally `target` if it's a symbolic link.
- `/api/flist/<repository>/<flist>` (**GET**)
- Returns a json object with the flist dump (full file list)
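For example, the public endpoints can be queried with a plain `curl` call and no authentication; a quick sketch (the repository name is a placeholder):
```bash
# List all repositories on the hub (no authentication needed)
curl https://hub.grid.tf/api/repositories

# List the flists inside a given repository
curl https://hub.grid.tf/api/flist/<repository>
```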
### Restricted API Endpoints (Authentication Required)
- `/api/flist/me` (**GET**)
- Returns json object with some basic information about yourself (authenticated user)
- `/api/flist/me/<flist>` (**GET**, **DELETE**)
- **GET**: same as `/api/flist/<your-repository>/<flist>`
- **DELETE**: remove that specific flist
- `/api/flist/me/<source>/link/<linkname>` (**GET**)
- Create a symbolic link `linkname` pointing to `source`
- `/api/flist/me/<linkname>/crosslink/<repository>/<sourcename>` (**GET**)
- Create a cross-repository symbolic link `linkname` pointing to `repository/sourcename`
- `/api/flist/me/<source>/rename/<destination>` (**GET**)
- Rename `source` to `destination`
- `/api/flist/me/promote/<sourcerepo>/<sourcefile>/<localname>` (**GET**)
- Copy cross-repository `sourcerepo/sourcefile` to your `[local-repository]/localname` file
- This is useful when you want to copy flist from one repository to another one, if your jwt allows it
- `/api/flist/me/upload` (**POST**)
- **POST**: uploads a `.tar.gz` archive and converts it to an flist
- Your file needs to be passed via `file` form attribute
- `/api/flist/me/upload-flist` (**POST**)
- **POST**: uploads a `.flist` file and store it
- Note: the flist is checked and its full contents are verified to be present on the backend; if some chunks are missing, the file will be discarded.
- Your file needs to be passed via `file` form attribute
- `/api/flist/me/merge/<target>` (**POST**)
- **POST**: merge multiple flist together
- You need to pass a json array of flists (in the form `repository/file`) as the POST body
- `/api/flist/me/docker` (**POST**)
- **POST**: converts a docker image to an flist
- You need to pass the `image` form argument with the docker image name
- The resulting conversion will stay in your repository
### API Request Templates and Examples
The main template to request information from the API is the following:
```bash
curl -H "Authorization: bearer <API_token>" https://hub.grid.tf/api/flist/me/<flist_name> -X <COMMAND>
```
For example, if we take the command `DELETE` of the previous section and we want to delete the flist `example-latest.flist` with the API Token `abc12`, we would write the following line:
```bash
curl -H "Authorization: bearer abc12" https://hub.grid.tf/api/flist/me/example-latest.flist -X DELETE
```
As another template example, if we wanted to rename the flist `current-name-latest.flist` to `new-name-latest.flist`, we would use the following template:
```bash
curl -H "Authorization: bearer <API_token>" https://hub.grid.tf/api/flist/me/<current_flist_name>/rename/<new_flist_name> -X GET
```
To upload an flist to the ZOS Hub, you would use the following template:
```bash
curl -H "Authorization: bearer <API_Token>" -X POST -F file=@my-local-archive.tar.gz \
https://hub.grid.tf/api/flist/me/upload
```
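Following the same pattern, here is a rough sketch of how the docker conversion and merge endpoints could be called; the image name, flist names and body encoding are illustrative assumptions based on the endpoint descriptions above:
```bash
# Convert a public docker image (here busybox) into an flist stored in your repository
curl -H "Authorization: bearer <API_token>" -X POST -F image=busybox \
    https://hub.grid.tf/api/flist/me/docker

# Merge two flists (passed as a json array of repository/file entries) into a new target flist
curl -H "Authorization: bearer <API_token>" -X POST \
    -d '["tf-official-apps/threefoldtech-ubuntu-22.04.flist", "myrepo/myapp-latest.flist"]' \
    https://hub.grid.tf/api/flist/me/merge/merged-latest.flist
```
If your `jwt` contains `memberof`, you can additionally pass the `active-user` cookie (e.g. with curl's `-b "active-user=<user>"`) to act on behalf of another repository, as mentioned earlier.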

View File

@ -1,26 +0,0 @@
<h1> Supported Flists </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Flists and Parameters](#flists-and-parameters)
- [More Flists](#more-flists)
***
## Introduction
We provide basic information on the currently supported Flists.
## Flists and Parameters
|flist|entrypoint|env vars|
|:--:|:--:|--|
|[Alpine](https://hub.grid.tf/tf-official-apps/threefoldtech-alpine-3.flist.md)|`/entrypoint.sh`|`SSH_KEY`|
|[Ubuntu](https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist.md)|`/init.sh`|`SSH_KEY`|
|[CentOS](https://hub.grid.tf/tf-official-apps/threefoldtech-centos-8.flist.md)|`/entrypoint.sh`|`SSH_KEY`|
|[K3s](https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist.md)|`/sbin/zinit init`|- `SSH_KEY` <br/>- `K3S_TOKEN` <br/>- `K3S_DATA_DIR`<br/>- `K3S_FLANNEL_IFACE`<br/>- `K3S_NODE_NAME`<br/> - `K3S_URL` `https://${masterIp}:6443`|
## More Flists
You can convert any docker image to an flist. Feel free to explore the different possibilities on the [ThreeFold Hub](https://hub.grid.tf/).

View File

@ -1,104 +0,0 @@
<h1> Deploying Gateways</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Gateway Name](#gateway-name)
- [Example](#example)
- [Gateway FQDN](#gateway-fqdn)
- [Example](#example-1)
***
## Introduction
After [deploying a VM](./grid3_go_vm.md) you can deploy Gateways to further expose your VM.
## Gateway Name
This generates a FQDN for your VM.
## Example
```go
import (
"fmt"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads"
"github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types"
"github.com/threefoldtech/zos/pkg/gridtypes/zos"
)
func main() {
// Create Threefold plugin client
tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", true, false)
// Get a free node to deploy
domain := true
status := "up"
filter := types.NodeFilter{
Domain: &domain,
Status: &status,
}
nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter)
nodeID := uint32(nodeIDs[0].NodeID)
// Create gateway to deploy
gateway := workloads.GatewayNameProxy{
NodeID: nodeID,
Name: "mydomain",
Backends: []zos.Backend{"http://[300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f]:8080"},
TLSPassthrough: true,
}
err = tfPluginClient.GatewayNameDeployer.Deploy(ctx, &gateway)
gatewayObj, err := tfPluginClient.State.LoadGatewayNameFromGrid(nodeID, gateway.Name, gateway.Name)
fmt.Println(gatewayObj.FQDN)
}
```
This deploys a Gateway Name Proxy that forwards requests to your VM. You should see an output like this:
```bash
mydomain.gent01.dev.grid.tf
```
## Gateway FQDN
In case you have a FQDN already pointing to the node, you can expose your VM using Gateway FQDN.
## Example
```go
import (
"fmt"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads"
"github.com/threefoldtech/zos/pkg/gridtypes/zos"
)
func main() {
// Create Threefold plugin client
tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true)
// Create gateway to deploy
gateway := workloads.GatewayFQDNProxy{
NodeID: 14,
Name: "mydomain",
Backends: []zos.Backend{"http://[300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f]:8080"},
FQDN: "my.domain.com",
TLSPassthrough: true,
}
err = tfPluginClient.GatewayFQDNDeployer.Deploy(ctx, &gateway)
gatewayObj, err := tfPluginClient.State.LoadGatewayFQDNFromGrid(nodeID, gateway.Name, gateway.Name)
}
```
This deploys a Gateway FQDN Proxy that forwards requests from node 14's public IP to your VM.

View File

@ -1,6 +0,0 @@
<h1> GPU and Go </h1>
<h2> Table of Contents </h2>
- [GPU and Go Introduction](grid3_go_gpu_support.md)
- [Deploy a VM with GPU](grid3_go_vm_with_gpu.md)

View File

@ -1,116 +0,0 @@
<h1> GPU Support </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Example](#example)
- [More Information](#more-information)
***
## Introduction
We present here an example of how to deploy a VM with GPU using the Go client. This is part of our integration tests.
## Example
```go
func TestVMWithGPUDeployment(t *testing.T) {
tfPluginClient, err := setup()
assert.NoError(t, err)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
publicKey, privateKey, err := GenerateSSHKeyPair()
assert.NoError(t, err)
twinID := uint64(tfPluginClient.TwinID)
nodeFilter := types.NodeFilter{
Status: &statusUp,
FreeSRU: convertGBToBytes(20),
FreeMRU: convertGBToBytes(8),
RentedBy: &twinID,
HasGPU: &trueVal,
}
nodes, err := deployer.FilterNodes(ctx, tfPluginClient, nodeFilter)
if err != nil {
t.Skip("no available nodes found")
}
nodeID := uint32(nodes[0].NodeID)
nodeClient, err := tfPluginClient.NcPool.GetNodeClient(tfPluginClient.SubstrateConn, nodeID)
assert.NoError(t, err)
gpus, err := nodeClient.GPUs(ctx)
assert.NoError(t, err)
network := workloads.ZNet{
Name: "gpuNetwork",
Description: "network for testing gpu",
Nodes: []uint32{nodeID},
IPRange: gridtypes.NewIPNet(net.IPNet{
IP: net.IPv4(10, 20, 0, 0),
Mask: net.CIDRMask(16, 32),
}),
AddWGAccess: false,
}
disk := workloads.Disk{
Name: "gpuDisk",
SizeGB: 20,
}
vm := workloads.VM{
Name: "gpu",
Flist: "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist",
CPU: 4,
Planetary: true,
Memory: 1024 * 8,
GPUs: ConvertGPUsToStr(gpus),
Entrypoint: "/init.sh",
EnvVars: map[string]string{
"SSH_KEY": publicKey,
},
Mounts: []workloads.Mount{
{DiskName: disk.Name, MountPoint: "/data"},
},
NetworkName: network.Name,
}
err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network)
assert.NoError(t, err)
defer func() {
err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network)
assert.NoError(t, err)
}()
dl := workloads.NewDeployment("gpu", nodeID, "", nil, network.Name, []workloads.Disk{disk}, nil, []workloads.VM{vm}, nil)
err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl)
assert.NoError(t, err)
defer func() {
err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl)
assert.NoError(t, err)
}()
vm, err = tfPluginClient.State.LoadVMFromGrid(nodeID, vm.Name, dl.Name)
assert.NoError(t, err)
assert.Equal(t, vm.GPUs, ConvertGPUsToStr(gpus))
time.Sleep(30 * time.Second)
output, err := RemoteRun("root", vm.YggIP, "lspci -v", privateKey)
assert.NoError(t, err)
assert.Contains(t, string(output), gpus[0].Vendor)
}
```
## More Information
For more information on this, you can check this [Client Pull Request](https://github.com/threefoldtech/tfgrid-sdk-go/pull/207/) on how to support the new calls to list GPUs and to deploy a machine with GPU.

View File

@ -1,45 +0,0 @@
<h1>Go Client Installation</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Requirements](#requirements)
- [Steps](#steps)
- [References](#references)
***
## Introduction
We present the general steps to install the ThreeFold Grid3 Go Client.
## Requirements
Make sure that you have at least Go 1.19 installed on your machine.
- [Go](https://golang.org/doc/install) >= 1.19
## Steps
* Create a new directory
* ```bash
mkdir tf_go_client
```
* Change directory
* ```bash
cd tf_go_client
```
* Create a **go.mod** file to track the code's dependencies
* ```bash
go mod init main
```
* Install the Grid3 Go Client
* ```bash
go get github.com/threefoldtech/tfgrid-sdk-go/grid-client
```
This will make Grid3 Go Client packages available to you.
## References
For more information, you can read the official [Go documentation](https://go.dev/doc/).

View File

@ -1,120 +0,0 @@
<h1> Deploying Kubernetes Clusters</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We show how to deploy a Kubernetes cluster with the Go client.
## Example
```go
import (
"fmt"
"net"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads"
"github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types"
"github.com/threefoldtech/zos/pkg/gridtypes"
)
func main() {
// Create Threefold plugin client
tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true)
// Get a free node to deploy
freeMRU := uint64(1)
freeSRU := uint64(1)
status := "up"
filter := types.NodeFilter{
FreeMRU: &freeMRU,
FreeSRU: &freeSRU,
Status: &status,
}
nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter)
masterNodeID := uint32(nodeIDs[0].NodeID)
workerNodeID1 := uint32(nodeIDs[1].NodeID)
workerNodeID2 := uint32(nodeIDs[2].NodeID)
// Create a new network to deploy
network := workloads.ZNet{
Name: "newNetwork",
Description: "A network to deploy",
Nodes: []uint32{masterNodeID, workerNodeID1, workerNodeID2},
IPRange: gridtypes.NewIPNet(net.IPNet{
IP: net.IPv4(10, 1, 0, 0),
Mask: net.CIDRMask(16, 32),
}),
AddWGAccess: true,
}
// Create master and worker nodes to deploy
master := workloads.K8sNode{
Name: "master",
Node: masterNodeID,
DiskSize: 1,
CPU: 2,
Memory: 1024,
Planetary: true,
Flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
}
worker1 := workloads.K8sNode{
Name: "worker1",
Node: workerNodeID1,
DiskSize: 1,
CPU: 2,
Memory: 1024,
Flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
}
worker2 := workloads.K8sNode{
Name: "worker2",
Node: workerNodeID2,
DiskSize: 1,
Flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
CPU: 2,
Memory: 1024,
}
k8sCluster := workloads.K8sCluster{
Master: &master,
Workers: []workloads.K8sNode{worker1, worker2},
Token: "tokens",
SSHKey: publicKey,
NetworkName: network.Name,
}
// Deploy the network first
err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network)
// Deploy the k8s cluster
err = tfPluginClient.K8sDeployer.Deploy(ctx, &k8sCluster)
// Load the k8s cluster
k8sClusterObj, err := tfPluginClient.State.LoadK8sFromGrid([]uint32{masterNodeID, workerNodeID1, workerNodeID2}, master.Name)
// Print master node Yggdrasil IP
fmt.Println(k8sClusterObj.Master.YggIP)
// Cancel the VM deployment
err = tfPluginClient.K8sDeployer.Cancel(ctx, &k8sCluster)
// Cancel the network deployment
err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network)
}
```
You should see an output like this:
```bash
300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f
```

View File

@ -1,35 +0,0 @@
<h1>Load Client</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [TFPluginClient Configuration](#tfpluginclient-configuration)
- [Creating Client](#creating-client)
***
## Introduction
We cover how to create and load the client using the Go client.
## TFPluginClient Configuration
- mnemonics: the secret seed phrase of your account
- keyType: can be `ed25519` or `sr25519`
- network: can be `dev`, `qa`, `test` or `main`
## Creating Client
Import `deployer` package to your project:
```go
import "github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
```
Create new Client:
```go
func main() {
client, err := deployer.NewTFPluginClient(mnemonics, keyType, network, "", "", "", 0, true)
}
```

View File

@ -1,186 +0,0 @@
<h1> Deploying QSFS </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We show how to deploy QSFS workloads with the Go client.
## Example
```go
import (
"context"
"fmt"
"net"
"strconv"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads"
"github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types"
"github.com/threefoldtech/zos/pkg/gridtypes"
"github.com/threefoldtech/zos/pkg/gridtypes/zos"
)
func main() {
// Create Threefold plugin client
tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true)
// Get a free node to deploy
freeMRU := uint64(2)
freeSRU := uint64(20)
status := "up"
filter := types.NodeFilter{
FreeMRU: &freeMRU,
FreeSRU: &freeSRU,
Status: &status,
}
nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter)
nodeID := uint32(nodeIDs[0].NodeID)
// Create data and meta ZDBs
dataZDBs := []workloads.ZDB{}
metaZDBs := []workloads.ZDB{}
for i := 1; i <= DataZDBNum; i++ {
zdb := workloads.ZDB{
Name: "qsfsDataZdb" + strconv.Itoa(i),
Password: "password",
Public: true,
Size: 1,
Description: "zdb for testing",
Mode: zos.ZDBModeSeq,
}
dataZDBs = append(dataZDBs, zdb)
}
for i := 1; i <= MetaZDBNum; i++ {
zdb := workloads.ZDB{
Name: "qsfsMetaZdb" + strconv.Itoa(i),
Password: "password",
Public: true,
Size: 1,
Description: "zdb for testing",
Mode: zos.ZDBModeUser,
}
metaZDBs = append(metaZDBs, zdb)
}
// Deploy ZDBs
dl1 := workloads.NewDeployment("qsfs", nodeID, "", nil, "", nil, append(dataZDBs, metaZDBs...), nil, nil)
err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl1)
// result ZDBs
resDataZDBs := []workloads.ZDB{}
resMetaZDBs := []workloads.ZDB{}
for i := 1; i <= DataZDBNum; i++ {
res, err := tfPluginClient.State.LoadZdbFromGrid(nodeID, "qsfsDataZdb"+strconv.Itoa(i), dl1.Name)
resDataZDBs = append(resDataZDBs, res)
}
for i := 1; i <= MetaZDBNum; i++ {
res, err := tfPluginClient.State.LoadZdbFromGrid(nodeID, "qsfsMetaZdb"+strconv.Itoa(i), dl1.Name)
resMetaZDBs = append(resMetaZDBs, res)
}
// backends
dataBackends := []workloads.Backend{}
metaBackends := []workloads.Backend{}
for i := 0; i < DataZDBNum; i++ {
dataBackends = append(dataBackends, workloads.Backend{
Address: "[" + resDataZDBs[i].IPs[1] + "]" + ":" + fmt.Sprint(resDataZDBs[i].Port),
Namespace: resDataZDBs[i].Namespace,
Password: resDataZDBs[i].Password,
})
}
for i := 0; i < MetaZDBNum; i++ {
metaBackends = append(metaBackends, workloads.Backend{
Address: "[" + resMetaZDBs[i].IPs[1] + "]" + ":" + fmt.Sprint(resMetaZDBs[i].Port),
Namespace: resMetaZDBs[i].Namespace,
Password: resMetaZDBs[i].Password,
})
}
// Create a new qsfs to deploy
qsfs := workloads.QSFS{
Name: "qsfs",
Description: "qsfs for testing",
Cache: 1024,
MinimalShards: 2,
ExpectedShards: 4,
RedundantGroups: 0,
RedundantNodes: 0,
MaxZDBDataDirSize: 512,
EncryptionAlgorithm: "AES",
EncryptionKey: "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af",
CompressionAlgorithm: "snappy",
Groups: workloads.Groups{{Backends: dataBackends}},
Metadata: workloads.Metadata{
Type: "zdb",
Prefix: "test",
EncryptionAlgorithm: "AES",
EncryptionKey: "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af",
Backends: metaBackends,
},
}
// Create a new network to deploy
network := workloads.ZNet{
Name: "newNetwork",
Description: "A network to deploy",
Nodes: []uint32{nodeID},
IPRange: gridtypes.NewIPNet(net.IPNet{
IP: net.IPv4(10, 1, 0, 0),
Mask: net.CIDRMask(16, 32),
}),
AddWGAccess: true,
}
vm := workloads.VM{
Name: "vm",
Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
CPU: 2,
Planetary: true,
Memory: 1024,
Entrypoint: "/sbin/zinit init",
EnvVars: map[string]string{
"SSH_KEY": publicKey,
},
Mounts: []workloads.Mount{
{DiskName: qsfs.Name, MountPoint: "/qsfs"},
},
NetworkName: network.Name,
}
// Deploy the network first
err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network)
// Deploy the VM/QSFS deployment
dl2 := workloads.NewDeployment("qsfs", nodeID, "", nil, network.Name, nil, append(dataZDBs, metaZDBs...), []workloads.VM{vm}, []workloads.QSFS{qsfs})
err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl2)
// Load the QSFS using the state loader
qsfsObj, err := tfPluginClient.State.LoadQSFSFromGrid(nodeID, qsfs.Name, dl2.Name)
// Load the VM using the state loader
vmObj, err := tfPluginClient.State.LoadVMFromGrid(nodeID, vm.Name, dl2.Name)
// Print the VM Yggdrasil IP
fmt.Println(vmObj.YggIP)
// Cancel the VM,QSFS deployment
err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl1)
err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl2)
// Cancel the network deployment
err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network)
}
```
Running this code should result in a VM with QSFS deployed on an available node, and you should get an output like this:
```bash
Yggdrasil IP: 300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f
```

View File

@ -1,17 +0,0 @@
# Grid Go Client
Grid Go Client is a Go client created to interact with and develop on the ThreeFold Grid using the Go language.
Please make sure to check the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) before continuing.
<h2> Table of Contents </h2>
- [Installation](../go/grid3_go_installation.md)
- [Loading Client](../go/grid3_go_load_client.md)
- [Deploy a VM](../go/grid3_go_vm.md)
- [Deploy a VM with GPU](../go/grid3_go_vm_with_gpu.md)
- [Deploy Multiple VMs](../go/grid3_go_vms.md)
- [Deploy Gateways](../go/grid3_go_gateways.md)
- [Deploy Kubernetes](../go/grid3_go_kubernetes.md)
- [Deploy a QSFS](../go/grid3_go_qsfs.md)
- [GPU Support](../go/grid3_go_gpu_support.md)

View File

@ -1,99 +0,0 @@
<h1> Deploying a VM</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We show how to deploy a VM with the Go client.
## Example
```go
import (
"context"
"fmt"
"net"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads"
"github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types"
"github.com/threefoldtech/zos/pkg/gridtypes"
)
func main() {
// Create Threefold plugin client
tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, keyType, network, "", "", "", 0, true)
// Get a free node to deploy
freeMRU := uint64(2)
freeSRU := uint64(20)
status := "up"
filter := types.NodeFilter{
FreeMRU: &freeMRU,
FreeSRU: &freeSRU,
Status: &status,
}
nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter)
nodeID := uint32(nodeIDs[0].NodeID)
// Create a new network to deploy
network := workloads.ZNet{
Name: "newNetwork",
Description: "A network to deploy",
Nodes: []uint32{nodeID},
IPRange: gridtypes.NewIPNet(net.IPNet{
IP: net.IPv4(10, 1, 0, 0),
Mask: net.CIDRMask(16, 32),
}),
AddWGAccess: true,
}
// Create a new VM to deploy
vm := workloads.VM{
Name: "vm",
Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
CPU: 2,
PublicIP: true,
Planetary: true,
Memory: 1024,
RootfsSize: 20 * 1024,
Entrypoint: "/sbin/zinit init",
EnvVars: map[string]string{
"SSH_KEY": publicKey,
},
IP: "10.20.2.5",
NetworkName: network.Name,
}
// Deploy the network first
err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network)
// Deploy the VM deployment
dl := workloads.NewDeployment("vm", nodeID, "", nil, network.Name, nil, nil, []workloads.VM{vm}, nil)
err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl)
// Load the VM using the state loader
vmObj, err := tfPluginClient.State.LoadVMFromGrid(nodeID, vm.Name, dl.Name)
// Print the VM Yggdrasil IP
fmt.Println(vmObj.YggIP)
// Cancel the VM deployment
err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl)
// Cancel the network deployment
err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network)
}
```
Running this code should result in a VM deployed on an available node, and you should get an output like this:
```bash
300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f
```

View File

@ -1,121 +0,0 @@
<h1> Deploy a VM with GPU </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
In this section, we explore how to deploy a virtual machine equipped with GPU. We deploy the VM using Go. The VM will be deployed on a 3Node with an available GPU.
## Example
```go
import (
"context"
"fmt"
"net"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads"
"github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types"
"github.com/threefoldtech/zos/pkg/gridtypes"
"github.com/threefoldtech/zos/pkg/gridtypes/zos"
)
func main() {
// Create Threefold plugin client
tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true)
// Get a free node to deploy
freeMRU := uint64(2)
freeSRU := uint64(20)
status := "up"
trueVal := true
twinID := uint64(tfPluginClient.TwinID)
filter := types.NodeFilter{
FreeMRU: &freeMRU,
FreeSRU: &freeSRU,
Status: &status,
RentedBy: &twinID,
HasGPU: &trueVal,
}
nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter)
nodeID := uint32(nodeIDs[0].NodeID)
// Get the available gpus on the node
nodeClient, err := tfPluginClient.NcPool.GetNodeClient(tfPluginClient.SubstrateConn, nodeID)
gpus, err := nodeClient.GPUs(ctx)
// Create a new network to deploy
network := workloads.ZNet{
Name: "newNetwork",
Description: "A network to deploy",
Nodes: []uint32{nodeID},
IPRange: gridtypes.NewIPNet(net.IPNet{
IP: net.IPv4(10, 1, 0, 0),
Mask: net.CIDRMask(16, 32),
}),
AddWGAccess: true,
}
// Create a new disk to deploy
disk := workloads.Disk{
Name: "gpuDisk",
SizeGB: 20,
}
// Create a new VM to deploy
vm := workloads.VM{
Name: "vm",
Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
CPU: 2,
PublicIP: true,
Planetary: true,
// Insert your GPUs' IDs here
GPUs: []zos.GPU{zos.GPU(gpus[0].ID)},
Memory: 1024,
RootfsSize: 20 * 1024,
Entrypoint: "/sbin/zinit init",
EnvVars: map[string]string{
"SSH_KEY": publicKey,
},
Mounts: []workloads.Mount{
{DiskName: disk.Name, MountPoint: "/data"},
},
IP: "10.20.2.5",
NetworkName: network.Name,
}
// Deploy the network first
err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network)
// Deploy the VM deployment
dl := workloads.NewDeployment("gpu", nodeID, "", nil, network.Name, []workloads.Disk{disk}, nil, []workloads.VM{vm}, nil)
err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl)
// Load the VM using the state loader
vmObj, err := tfPluginClient.State.LoadVMFromGrid(nodeID, vm.Name, dl.Name)
// Print the VM Yggdrasil IP
fmt.Println(vmObj.YggIP)
// Cancel the VM deployment
err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl)
// Cancel the network deployment
err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network)
}
```
Running this code should result in a VM with a GPU deployed on an available node. The output should look like this:
```bash
Yggdrasil IP: 300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f
```

View File

@ -1,125 +0,0 @@
<h1> Deploying Multiple VMs</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We show how to deploy multiple VMs with the Go client.
## Example
```go
import (
"context"
"fmt"
"net"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
"github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads"
"github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types"
"github.com/threefoldtech/zos/pkg/gridtypes"
)
func main() {
// Create Threefold plugin client
tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true)
// Get a free node to deploy
freeMRU := uint64(2)
freeSRU := uint64(2)
status := "up"
filter := types.NodeFilter {
FreeMRU: &freeMRU,
FreeSRU: &freeSRU,
Status: &status,
}
nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter)
nodeID1 := uint32(nodeIDs[0].NodeID)
nodeID2 := uint32(nodeIDs[1].NodeID)
// Create a new network to deploy
network := workloads.ZNet{
Name: "newNetwork",
Description: "A network to deploy",
Nodes: []uint32{nodeID1, nodeID2},
IPRange: gridtypes.NewIPNet(net.IPNet{
IP: net.IPv4(10, 1, 0, 0),
Mask: net.CIDRMask(16, 32),
}),
AddWGAccess: true,
}
// Create new VMs to deploy
vm1 := workloads.VM{
Name: "vm1",
Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
CPU: 2,
PublicIP: true,
Planetary: true,
Memory: 1024,
RootfsSize: 20 * 1024,
Entrypoint: "/sbin/zinit init",
EnvVars: map[string]string{
"SSH_KEY": publicKey,
},
IP: "10.20.2.5",
NetworkName: network.Name,
}
vm2 := workloads.VM{
Name: "vm2",
Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
CPU: 2,
PublicIP: true,
Planetary: true,
Memory: 1024,
RootfsSize: 20 * 1024,
Entrypoint: "/sbin/zinit init",
EnvVars: map[string]string{
"SSH_KEY": publicKey,
},
IP: "10.20.2.6",
NetworkName: network.Name,
}
// Deploy the network first
err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network)
// Load the network using the state loader
// this loader should load the deployment as json then convert it to a deployment go object with workloads inside it
networkObj, err := tfPluginClient.State.LoadNetworkFromGrid(network.Name)
// Deploy the VM deployments
dl1 := workloads.NewDeployment("vm1", nodeID1, "", nil, network.Name, nil, nil, []workloads.VM{vm1}, nil)
dl2 := workloads.NewDeployment("vm2", nodeID2, "", nil, network.Name, nil, nil, []workloads.VM{vm2}, nil)
err = tfPluginClient.DeploymentDeployer.BatchDeploy(ctx, []*workloads.Deployment{&dl1, &dl2})
// Load the VMs using the state loader
vmObj1, err := tfPluginClient.State.LoadVMFromGrid(nodeID1, vm1.Name, dl1.Name)
vmObj2, err := tfPluginClient.State.LoadVMFromGrid(nodeID2, vm2.Name, dl2.Name)
// Print the VMs Yggdrasil IP
fmt.Println(vmObj1.YggIP)
fmt.Println(vmObj2.YggIP)
// Cancel the VM deployments
err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl1)
err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl2)
// Cancel the network
err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network)
}
```
Running this code should result in two VMs deployed on two separate nodes on the same network, and you should see an output like this:
```bash
300:e9c4:9048:57cf:f4e0:2343:f891:6037
300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f
```

View File

@ -1,9 +0,0 @@
# Grid Deployment
The whole TFGrid source code is open-source, and anyone can deploy an instance of the grid thanks to the daily distribution of snapshots of the complete ThreeFold Grid stack.
## Table of Contents
- [TFGrid Stacks](./tfgrid_stacks.md)
- [Full VM Grid Deployment](./grid_deployment_full_vm.md)
- [Grid Snapshots](./snapshots.md)

View File

@ -1,152 +0,0 @@
<h1> Grid Deployment on a Full VM </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [DNS Settings](#dns-settings)
- [DNS Verification](#dns-verification)
- [Prepare the VM](#prepare-the-vm)
- [Set the Firewall](#set-the-firewall)
- [Launch the Script](#launch-the-script)
- [Access the Grid Services](#access-the-grid-services)
- [Manual Commands](#manual-commands)
- [Update the Deployment](#update-the-deployment)
***
## Introduction
We present the steps to deploy a network instance of the TFGrid on a full VM.
For this guide, we will be deploying a mainnet instance. While the steps are similar for testnet and devnet, you will have to adjust your deployment depending on which network you use.
## Prerequisites
For this guide, you will need to deploy a full VM on the ThreeFold Grid with at least the following minimum specs:
- IPv4
- IPv6
- 32GB of RAM
- 1000 GB of SSD
- 8 vcores
After deploying the full VM, take note of the IPv4 and IPv6 addresses to properly set the DNS records and then SSH into the VM.
## DNS Settings
You need to set an A record for the IPv4 address and an AAAA record for the IPv6 address with a wildcard subdomain.
The following table explicitly shows how to set the A and AAAA records for your domain.
| Type | Host | Value |
| ---- | ---- | -------------- |
| A | \* | <ipv4_address> |
| AAAA | \* | <ipv6_address> |
### DNS Verification
You can use tools such as [DNSChecker](https://dnschecker.org/) or [dig](https://linux.die.net/man/1/dig) in a terminal to check if the DNS propagation is complete.
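For example, assuming your domain is `example.com` (a placeholder), a quick check with dig could look like this; both commands should return your VM's addresses for any subdomain thanks to the wildcard records:
```
dig +short A dashboard.example.com
dig +short AAAA dashboard.example.com
```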
## Prepare the VM
- Download the ThreeFold Tech `grid_deployment` repository
```
git clone https://github.com/threefoldtech/grid_deployment
cd grid_deployment/docker-compose/mainnet
```
- Generate a TFChain node key with `subkey`
```
echo .subkey_mainnet >> .gitignore
../subkey generate-node-key > .nodekey_mainnet
cat .nodekey_mainnet
```
- Create and set the environment variables file
```
cp .secrets.env-example .secrets.env
```
- Adjust the environment file
```
nano .secrets.env
```
- To adjust the `.secrets.env` file, take into account the following (a filled-in example is shown after this list):
- **DOMAIN**="example.com"
- Write your own domain
- **TFCHAIN_NODE_KEY**="abc123"
- Write the output of the command `cat .nodekey_mainnet`
- **ACTIVATION_SERVICE_MNEMONIC**="word1 word2 ... word24"
- Write the seed phrase of an account on mainnet with at least 10 TFT in the wallet
- **GRID_PROXY_MNEMONIC**="word1 word2 ... word24"
- Write the seed phrase of an account on mainnet with at least 10 TFT in the wallet and a registered twin ID\*
> \*Note: If you've created an account using the ThreeFold Dashboard on mainnet, the twin ID is automatically registered.
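For reference, a filled-in `.secrets.env` could look like the following sketch, where every value is a placeholder to replace with your own:
```
DOMAIN="example.com"
TFCHAIN_NODE_KEY="<output of: cat .nodekey_mainnet>"
ACTIVATION_SERVICE_MNEMONIC="word1 word2 ... word24"
GRID_PROXY_MNEMONIC="word1 word2 ... word24"
```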
## Set the Firewall
You can use UFW to set the firewall:
```
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 30333/tcp
ufw allow 22/tcp
ufw enable
ufw status
```
## Launch the Script
Once you've prepared the VM, you can simply run the script to install the grid stack and deploy it online.
```
sh install_grid_bknd.sh
```
This will take some time since you are downloading the whole mainnet grid snapshots.
## Access the Grid Services
Once you've deployed the grid stack online, you can access the different grid services via the usual subdomains:
```
dashboard.your.domain
metrics.your.domain
tfchain.your.domain
graphql.your.domain
relay.your.domain
gridproxy.your.domain
activation.your.domain
stats.your.domain
```
## Manual Commands
Once you've run the install script, you can manually deploy the grid stack with the following command:
```
docker compose --env-file .secrets.env --env-file .env up -d
```
You can also check if the environment variables are properly set:
```
docker compose --env-file .secrets.env --env-file .env config
```
If you want to see the output during deployment, remove `-d` in the command above as follows:
```
docker compose --env-file .secrets.env --env-file .env up
```
This can be helpful to troubleshoot errors.
## Update the Deployment
Go into the folder of the proper network, e.g. mainnet, and run the following commands:
```
git pull -r
docker compose --env-file .secrets.env --env-file .env up -d
```

View File

@ -1,196 +0,0 @@
<h1>Snapshots for Grid Backend Services</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Services](#services)
- [ThreeFold Public Snapshots](#threefold-public-snapshots)
- [Deploy the Services with Scripts](#deploy-the-services-with-scripts)
- [Create the Snapshots](#create-the-snapshots)
- [Start All the Services](#start-all-the-services)
- [Stop All the Services](#stop-all-the-services)
- [Expose the Snapshots with Rsync](#expose-the-snapshots-with-rsync)
- [Create the Service Configuration File](#create-the-service-configuration-file)
- [Start the Service](#start-the-service)
***
## Introduction
To facilitate deploying grid backend services, we provide snapshots to significantly reduce sync time. This can be set up anywhere from scratch. Once all services are synced, one can use the scripts to create snapshots automatically.
To learn how to deploy your own grid stack, read [this section](./grid_deployment_full_vm.md).
## Services
There are 3 grid backend services that collect enough data to justify creating snapshots:
- ThreeFold blockchain - TFChain
- Graphql - Indexer
- Graphql - Processor
## ThreeFold Public Snapshots
ThreeFold hosts all available snapshots at: [https://bknd.snapshot.grid.tf/](https://bknd.snapshot.grid.tf/). Those snapshots can be downloaded with rsync:
- Mainnet:
```
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/tfchain-mainnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/indexer-mainnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/processor-mainnet-latest.tar.gz .
```
- Testnet:
```
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/tfchain-testnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/indexer-testnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/processor-testnet-latest.tar.gz .
```
- Devnet:
```
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/tfchain-devnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/indexer-devnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/processor-devnet-latest.tar.gz .
```
## Deploy the Services with Scripts
You can deploy the 3 individual services using known methods such as [Docker](../../system_administrators/computer_it_basics/docker_basics.md). To facilitate the process, scripts are provided that run the necessary docker commands.
The first script creates the snapshots, while the second and third scripts serve to start and stop all services.
You can use the start script to start all services and then set a cron job to periodically execute the snapshot creation script. This will ensure that you always have the latest version available on your server.
### Create the Snapshots
You can set a cron job to execute a script running rsync to create the snapshots and generate logs at a given interval.
- First download the script.
- Main net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/create_snapshot.sh
```
- Test net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/create_snapshot.sh
```
- Dev net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/create_snapshot.sh
```
- Set the permissions of the script
```
chmod +x create_snapshot.sh
```
- Make sure to adjust the snapshot creation script for your specific deployment
- Set a cron job
```
crontab -e
```
- Here is an example of a cron job where we execute the script every day at 1 AM and send the logs to `/var/log/snapshots/snapshots-cron.log`.
```sh
0 1 * * * sh /opt/snapshots/create-snapshot.sh > /var/log/snapshots/snapshots-cron.log 2>&1
```
### Start All the Services
You can start all services by running the provided scripts.
- Download the script.
- Main net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/startall.sh
```
- Test net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/startall.sh
```
- Dev net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/startall.sh
```
- Set the permissions of the script
```
chmod +x startall.sh
```
- Run the script to start all services via docker engine.
```
./startall.sh
```
### Stop All the Services
You can stop all services by running the provided scripts.
- Download the script.
- Main net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/stopall.sh
```
- Test net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/stopall.sh
```
- Dev net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/stopall.sh
```
- Set the permissions of the script
```
chmod +x stopall.sh
```
- Run the script to stop all services via docker engine.
```
./stopall.sh
```
## Expose the Snapshots with Rsync
We use rsync with a systemd service to expose the snapshots to the community.
### Create the Service Configuration File
To set up a public rsync server, create and edit the following file:
`/etc/rsyncd.conf`
```sh
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log
port = 34873
max connections = 20
exclude = lost+found/
transfer logging = yes
use chroot = yes
reverse lookup = no
[gridsnapshots]
path = /storage/rsync-public/mainnet
comment = THREEFOLD GRID MAINNET SNAPSHOTS
read only = true
timeout = 300
list = false
[gridsnapshotstest]
path = /storage/rsync-public/testnet
comment = THREEFOLD GRID TESTNET SNAPSHOTS
read only = true
timeout = 300
list = false
[gridsnapshotsdev]
path = /storage/rsync-public/devnet
comment = THREEFOLD GRID DEVNET SNAPSHOTS
read only = true
timeout = 300
list = false
```
### Start the Service
Start and enable via systemd:
```sh
systemctl start rsync
systemctl enable rsync
systemctl status rsync
```
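To verify from another machine that the rsync module is reachable, you can attempt to download one of the snapshots you expose, replacing `your.server.tf` (a hypothetical hostname) with your server's domain or IP:
```
rsync -Lv --progress --partial rsync://your.server.tf:34873/gridsnapshots/tfchain-mainnet-latest.tar.gz .
```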

View File

@ -1,206 +0,0 @@
<h1>Snapshots for Grid Backend Services</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Services](#services)
- [ThreeFold Public Snapshots](#threefold-public-snapshots)
- [Deploy the Services with Scripts](#deploy-the-services-with-scripts)
- [Start All the Services](#start-all-the-services)
- [Stop All the Services](#stop-all-the-services)
- [Create the Snapshots](#create-the-snapshots)
- [Expose the Snapshots with Rsync](#expose-the-snapshots-with-rsync)
- [Create the Service Configuration File](#create-the-service-configuration-file)
- [Start the Service](#start-the-service)
***
## Introduction
To facilitate deploying grid backend services, we provide snapshots to significantly reduce sync time. This can be set up anywhere from scratch. Once all services are synced, one can use the scripts to create snapshots automatically.
## Prerequisites
There are a few prerequisites to properly run the ThreeFold services.
- [Docker engine](../computer_it_basics/docker_basics.md#install-docker-desktop-and-docker-engine)
- [Rsync](../computer_it_basics/file_transfer.md#rsync)
## Services
There are 3 grid backend services that collect enough data to justify creating snapshots:
- ThreeFold blockchain - TFChain
- Graphql - Indexer
- Graphql - Processor
## ThreeFold Public Snapshots
ThreeFold hosts all available snapshots at: [https://bknd.snapshot.grid.tf/](https://bknd.snapshot.grid.tf/). Those snapshots can be downloaded with rsync:
- Mainnet:
```
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/tfchain-mainnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/indexer-mainnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/processor-mainnet-latest.tar.gz .
```
- Testnet:
```
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/tfchain-testnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/indexer-testnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/processor-testnet-latest.tar.gz .
```
- Devnet:
```
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/tfchain-devnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/indexer-devnet-latest.tar.gz .
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/processor-devnet-latest.tar.gz .
```
Let's now see how to use those snapshots to run the services via scripts.
## Deploy the Services with Scripts
You can deploy the 3 individual services using known methods such as [Docker](https://manual.grid.tf/computer_it_basics/docker_basics.html). To facilitate the process, scripts are provided that run the necessary docker commands.
The first script creates the snapshots, while the second and third scripts serve to start and stop all services.
You can use the start script to start all services and then set a cron job to periodically execute the snapshot creation script. This will ensure that you always have the latest version available on your server.
### Start All the Services
You can start all services by running the provided scripts.
- Download the script.
- Main net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/startall.sh
```
- Test net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/startall.sh
```
- Dev net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/startall.sh
```
- Set the permissions of the script
```
chmod +x startall.sh
```
- Run the script to start all services via docker engine.
```
./startall.sh
```
### Stop All the Services
You can stop all services by running the provided scripts.
- Download the script.
- Main net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/stopall.sh
```
- Test net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/stopall.sh
```
- Dev net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/stopall.sh
```
- Set the permissions of the script
```
chmod +x stopall.sh
```
- Run the script to stop all services via docker engine.
```
./stopall.sh
```
### Create the Snapshots
You can set a cron job to execute a script running rsync to create the snapshots and generate logs at a given interval.
- First download the script.
- Main net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/create_snapshot.sh
```
- Test net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/create_snapshot.sh
```
- Dev net
```
wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/create_snapshot.sh
```
- Set the permissions of the script
```
chmod +x create_snapshot.sh
```
- Make sure to adjust the snapshot creation script for your specific deployment
- Set a cron job
```
crontab -e
```
- Here is an example of a cron job where we execute the script every day at 1 AM and send the logs to `/var/log/snapshots/snapshots-cron.log`.
```sh
0 1 * * * sh /opt/snapshots/create-snapshot.sh > /var/log/snapshots/snapshots-cron.log 2>&1
```
## Expose the Snapshots with Rsync
We use rsync with a systemd service to expose the snapshots to the community.
### Create the Service Configuration File
To set up a public rsync server, create and edit the following file:
`/etc/rsyncd.conf`
```sh
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log
port = 34873
max connections = 20
exclude = lost+found/
transfer logging = yes
use chroot = yes
reverse lookup = no
[gridsnapshots]
path = /storage/rsync-public/mainnet
comment = THREEFOLD GRID MAINNET SNAPSHOTS
read only = true
timeout = 300
list = false
[gridsnapshotstest]
path = /storage/rsync-public/testnet
comment = THREEFOLD GRID TESTNET SNAPSHOTS
read only = true
timeout = 300
list = false
[gridsnapshotsdev]
path = /storage/rsync-public/devnet
comment = THREEFOLD GRID DEVNET SNAPSHOTS
read only = true
timeout = 300
list = false
```
### Start the Service
Start and enable via systemd:
```sh
systemctl start rsync
systemctl enable rsync
systemctl status rsync
```
If you're interested in hosting your own instance of the grid to strengthen the ThreeFold ecosystem, make sure to read the next section, [Guardians of the Grid](./tfgrid_guardians.md).

View File

@ -1,32 +0,0 @@
<h1> TFGrid Stacks </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Advantages](#advantages)
- [Run Your Own Stack](#run-your-own-stack)
***
## Introduction
ThreeFold is an open-source project and anyone can run the full stack of the TFGrid in a totally decentralized manner. In practice, this means that anyone can grab a docker compose file shared by ThreeFold of the TFGrid stacks and run an instance of the grid services on their own domain.
This means that you could host your own instance of the ThreeFold Dashboard at `dashboard.yourdomain.com` that would serve your own instance of the complete TFGrid stack! Users could then access the ThreeFold Dashboard via your own domain.
The process is actually very straightforward and we even provide a script to streamline the process.
## Advantages
Setting up such instances of the TFGrid ensures resiliency and decentralization of the ThreeFold ecosystem.
As a very concrete example, imagine that one instance of the Dashboard, `dashboard.grid.tf`, goes offline: users could still access the Dashboard from another instance. The more users of the TFGrid deploy their own instance, the more resilient the grid becomes.
The overall ThreeFold ecosystem becomes more resilient to failures of individual nodes.
## Run Your Own Stack
To set up your own instance of the TFGrid, you can download a snapshot of the grid and deploy the TFGrid services with Docker. We even provide scripts to quicken the whole process!
Read more about snapshots in the [next section](./grid_deployment_full_vm.md).

View File

@ -1,19 +0,0 @@
<h1> Internals </h1>
We present in this section of the developers book a partial list of system components. Content will be added progressively.
<h2> Table of Contents </h2>
- [Reliable Message Bus (RMB)](rmb/rmb_toc.md)
- [Introduction to RMB](rmb/rmb_intro.md)
- [RMB Specs](rmb/rmb_specs.md)
- [RMB Peer](rmb/uml/peer.md)
- [RMB Relay](rmb/uml/relay.md)
- [ZOS](zos/index.md)
- [Manual](./zos/manual/manual.md)
- [Workload Types](./zos/manual/workload_types.md)
- [Internal Modules](./zos/internals/internals.md)
- [Capacity](./zos/internals/capacity.md)
- [Performance Monitor Package](./zos/performance/performance.md)
- [API](./zos/manual/api.md)

View File

@ -1,107 +0,0 @@
<h1> Introduction to Reliable Message Bus (RMB) </h1>
<h2> Table of Contents </h2>
- [What is RMB](#what-is-rmb)
- [Why](#why)
- [Specifications](#specifications)
- [How to Use RMB](#how-to-use-rmb)
- [Libraries](#libraries)
- [Known Libraries](#known-libraries)
- [No Known Libraries](#no-known-libraries)
- [What is rmb-peer](#what-is-rmb-peer)
- [Download](#download)
- [Building](#building)
- [Running tests](#running-tests)
***
## What is RMB
Reliable Message Bus is a secure communication channel that allows `bots` to communicate together in a `chat`-like way. It makes it very easy to host a service or a set of functions to be used by anyone, even if your service is running behind NAT.
Out of the box RMB provides the following:
- Guarantees the authenticity of messages. You are always sure that a received message really comes from the sender it claims to come from
- End-to-end encryption
- Support for 3rd party hosted relays. Anyone can host a relay and people can use it safely since there is no way messages can be inspected while using e2e encryption. That's similar to `home` servers in `matrix`
![layout](img/layout.png)
***
## Why
RMB is developed by ThreefoldTech to create a global network of nodes that are available to host capacity. Each node will act like a single bot that you can ask to host your capacity. This enforces a unique set of requirements:
- Communication needed to be reliable
- Minimize and completely eliminate message loss
- Reduce downtime
- Nodes need to authenticate and authorize calls
- Guarantee identity of the other peer so only owners of data can see it
- Fast request response time
Starting from this, we came up with a more detailed set of requirements:
- Users (or rather bots) need their identity maintained by `tfchain` (a blockchain), hence each bot needs an account on tfchain to be able to use `rmb`
- Each message can then be signed by the `bot` keys, making it easy to verify the identity of the sender of a message. This is done both ways.
- To support federation (using 3rd party relays) we needed to add e2e encryption to make sure messages that are surfing the public internet can't be sniffed
- e2e encryption is done by deriving an encryption key from the same identity seed and sharing the public key on `tfchain`, hence it's available for everyone to use
***
## Specifications
For details about the protocol itself, please check the [specs](./rmb_specs.md).
***
## How to Use RMB
There are many ways to use `rmb` because it was built for `bots` and software to communicate. Hence, there is no mobile app for it, for example, but instead a set of libraries that you can use to connect to the network, chat with other bots, then exit.
Or you can keep the connection open forever to answer other bots' requests if you are providing a service.
***
## Libraries
If there is a library in your preferred language, then you are in luck! Simply follow the library documentation to implement a service bot, or to make requests to other bots.
### Known Libraries
- Golang [rmb-sdk-go](https://github.com/threefoldtech/rmb-sdk-go)
- Typescript [rmb-sdk-ts](https://github.com/threefoldtech/rmb-sdk-ts)
***
### No Known Libraries
If there is no library in your preferred language, here's what you can do:
- Implement a library in your preferred language
- If it's too much to do all the signing, verification, e2e in your language then use `rmb-peer`
***
## What is rmb-peer
Think of `rmb-peer` as a gateway that stands between you and the `relay`. `rmb-peer` uses your mnemonics (your identity secret key) to assume your identity and connects to the relay on your behalf. It maintains the connection forever and takes care of:
- reconnecting if connection was lost
- verifying received messages
- decrypting received messages
- sending requests on your behalf, taking care of all crypto heavy lifting.
It then provides a simple (plain-text) API over `redis`. This means that to send messages (or handle requests) you just need to be able to push and pop messages from some redis queues. Messages are simple plain-text JSON.
> More details can be found [here](./rmb_specs.md)
***
## Download
Please check the latest [releases](https://github.com/threefoldtech/rmb-rs/releases). Normally you only need the `rmb-peer` binary, unless you want to host your own relay.
***
## Building
```bash
git clone git@github.com:threefoldtech/rmb-rs.git
cd rmb-rs
cargo build --release --target=x86_64-unknown-linux-musl
```
***
## Running tests
While inside the repository
```bash
cargo test
```

View File

@ -1,258 +0,0 @@
<h1> RMB Specs </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Overview of the Operation of RMB Relay](#overview-of-the-operation-of-rmb-relay)
- [Connections](#connections)
- [Peer](#peer)
- [Peer implementation](#peer-implementation)
- [Message Types](#message-types)
- [Output Requests](#output-requests)
- [Incoming Response](#incoming-response)
- [Incoming Request](#incoming-request)
- [Outgoing Response](#outgoing-response)
- [End2End Encryption](#end2end-encryption)
- [Rate Limiting](#rate-limiting)
***
# Introduction
RMB (reliable message bus) is a set of protocols and tools (client and daemon) that aims to abstract inter-process communication between multiple processes running over multiple nodes.
The point behind using RMB is to allow clients to not know much about the other process, or where it lives (the client doesn't know network addresses or identity), unlike HTTP(S) or gRPC where the caller must know the exact address (or dns-name) and endpoints of the service it's trying to call. Instead, RMB only requires you to know about:
- Twin ID (numeric ID) of the endpoint as defined by `tfchain`
- Command (string) is simply the function to call
- The request "body" which is binary blob that is passed to the command as is
- the implementation of the command then needs to interpret this data as intended (out of scope of rmb)
Twins are stored on tfchain, hence the identity of twins is guaranteed not to be spoofed or phished. When a twin is created, it needs to define 2 things:
- RMB Relay
- His Elliptic Curve public key (we use secp256k1 (K-256) elliptic curve)
> This data is stored on tfchain forever, and only the twin can change it using its secure key. Hence phishing is impossible. A twin can decide later to change this encryption key or relay.
Once all twins have their data set correctly on the chain, any 2 twins can communicate with full end-to-end encryption as follows:
- A twin establishes a WS connection to its relay of choice
- A twin creates an `envelope` as defined by the protobuf [schema](https://github.com/threefoldtech/rmb-rs/blob/main/proto/types.proto)
- The twin fills in all the envelope information (more about this later)
- The twin pushes the envelope to the relay
- If the destination twin is also using the same relay, the message is directly forwarded to this twin
- If federation is needed (the twin is using a different relay), the message is forwarded to the proper twin
> NOTE: since a sender twin need to also encrypt the message for the receiver twin, a twin queries the `tf-chain` for the twin information. Usually it caches this data locally for reuse, hence clients need to make sure this data is always up-to-date.
The relay checks the federation information set on the envelope and then decides either to forward the message internally to one of its connected clients, or to forward it to the destination relay. Hence relays need to be publicly available.
When the relay receives a message that is destined for a `local` connected client, it queues it for delivery. The relay maintains a queue of messages per twin up to a limit. If the twin does not come back online to consume queued messages, the relay will start to drop messages for that specific twin client.
Once a twin comes online and connects to its relay, the peer will receive all queued messages. The messages are pushed over the websocket as they are received. The client can then decide how to handle them (a message can be a request or a response). A message's type can be inspected as defined by the schema.
***
# Overview of the Operation of RMB Relay
![relay](img/relay.png)
## Connections
By design, there can be only `ONE TWIN` with a specific ID, and hence only `ONE RELAY` set on tfchain per twin. This forces a twin to always use this defined relay, even if it wishes to open multiple connections to it. In other words, once a twin sets a relay in its public information, it can only use that relay for all of its connections. If it decides to change the relay address, all connections must use the new relay, otherwise messages will get lost as they will be delivered to the wrong relay.
In an RPC system, the response to a request must be delivered to the requester. Hence if a twin is maintaining multiple connections to its relay, it needs to `uniquely` identify each connection to allow the relay to route the responses back to the right requester. We call this id a `session-id`. The `session-id` must be unique per twin.
The relay can maintain **MULTIPLE** connections per peer given that each connection has a unique **SID** (session id). But for each (twin-id, session-id) combo there can be only one connection. If a new connection with the same (twin-id, session-id) is created, the older connection is dropped.
A received message always has the session-id as part of the source address. A reply message must then have its destination set back to the source as is; this allows the relay to route the message back correctly without the need to maintain internal state.
The `rmb-peer` process reserves the `None` sid. It connects with no session id, hence you can only run one `rmb-peer` per `twin` (identity). But the same twin (identity) can use other rmb clients (for example the rmb-sdk-go direct client) to establish more connections with unique session ids.
## Peer
Any language or code that can open a `WebSocket` connection to the relay can work as a peer. A peer needs to do the following:
- Authenticate with the relay. This is done by providing a `JWT` signed by the twin key (more on that later)
- Handle received binary messages
- Send binary messages
Each message is an object of type `Envelope` serialized with protobuf. The type definition can be found under `proto/types.proto`.
## Peer implementation
This project already has a peer implementation that works as a local peer gateway. Running this peer instance allows you to run multiple services (and clients) behind that gateway, and they appear to the world as a single twin.
- The peer gateway (rmb-peer) starts and connects to the relay
- When requests are received, they are verified, decrypted and pushed to a command-specific redis queue (derived from the envelope)
- A service can then be waiting on this redis queue for new messages
- The service can process the command, and push a response back to a specific redis queue for responses.
- The gateway then pulls ready responses from the responses queue, creates a valid envelope, encrypts, signs and sends it to the destination
![peer](img/peer.png)
### Message Types
Concerning `rmb-peer` message types: to make it easy for apps to work behind an `rmb-peer`, we use JSON messages for communication between the local process and the rmb-peer. The rmb-peer still maintains fully binary communication with the relay.
#### Output Requests
This is created by a client that wants to make a request to a remote service. The request message is defined as follows.
> This message is pushed to `msgbus.system.local` to be picked up by the peer
```rust
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct JsonOutgoingRequest {
#[serde(rename = "ver")]
pub version: usize,
#[serde(rename = "ref")]
pub reference: Option<String>,
#[serde(rename = "cmd")]
pub command: String,
#[serde(rename = "exp")]
pub expiration: u64,
#[serde(rename = "dat")]
pub data: String,
#[serde(rename = "tag")]
pub tags: Option<String>,
#[serde(rename = "dst")]
pub destinations: Vec<u32>,
#[serde(rename = "ret")]
pub reply_to: String,
#[serde(rename = "shm")]
pub schema: String,
#[serde(rename = "now")]
pub timestamp: u64,
}
```
#### Incoming Response
This is what a client receives as a response to its outgoing request. The response message is defined as follows.
> This response is pushed to the `$ret` queue defined by the outgoing request (its `reply_to` field), hence the client needs to wait on this queue until the response is received or it times out
```rust
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct JsonError {
pub code: u32,
pub message: String,
}
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct JsonIncomingResponse {
#[serde(rename = "ver")]
pub version: usize,
#[serde(rename = "ref")]
pub reference: Option<String>,
#[serde(rename = "dat")]
pub data: String,
#[serde(rename = "src")]
pub source: String,
#[serde(rename = "shm")]
pub schema: Option<String>,
#[serde(rename = "now")]
pub timestamp: u64,
#[serde(rename = "err")]
pub error: Option<JsonError>,
}
```
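To make the local redis flow concrete, here is a minimal sketch of a client process talking to a running `rmb-peer` through redis, written in Go against the `github.com/redis/go-redis/v9` client. The destination twin ID and command are hypothetical, and the base64 payload encoding and push/pop directions are assumptions made for illustration; only the queue names and JSON field names come from the definitions above.

```go
package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// outgoingRequest mirrors JsonOutgoingRequest above (optional fields kept as pointers).
type outgoingRequest struct {
	Version      int      `json:"ver"`
	Reference    *string  `json:"ref"`
	Command      string   `json:"cmd"`
	Expiration   uint64   `json:"exp"`
	Data         string   `json:"dat"`
	Tags         *string  `json:"tag"`
	Destinations []uint32 `json:"dst"`
	ReplyTo      string   `json:"ret"`
	Schema       string   `json:"shm"`
	Timestamp    uint64   `json:"now"`
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	req := outgoingRequest{
		Version:      1,
		Command:      "zos.statistics.get", // hypothetical command
		Expiration:   300,
		Data:         base64.StdEncoding.EncodeToString([]byte("{}")), // encoding is an assumption
		Destinations: []uint32{1234},                                  // hypothetical twin id
		ReplyTo:      "my.reply.queue",                                // any unique reply queue name
		Schema:       "application/json",
		Timestamp:    uint64(time.Now().Unix()),
	}

	payload, _ := json.Marshal(req)
	// push the request where rmb-peer picks it up
	if err := rdb.LPush(ctx, "msgbus.system.local", payload).Err(); err != nil {
		panic(err)
	}

	// block until the response lands on the reply queue (or time out)
	res, err := rdb.BLPop(ctx, 30*time.Second, req.ReplyTo).Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("raw response:", res[1]) // JsonIncomingResponse as JSON
}
```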
#### Incoming Request
An incoming request is a modified version of the request, as received by a service running behind an RMB peer.
> This request is received on `msgbus.${request.cmd}` (the command is always prefixed with `msgbus.`)
```rust
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct JsonIncomingRequest {
#[serde(rename = "ver")]
pub version: usize,
#[serde(rename = "ref")]
pub reference: Option<String>,
#[serde(rename = "src")]
pub source: String,
#[serde(rename = "cmd")]
pub command: String,
#[serde(rename = "exp")]
pub expiration: u64,
#[serde(rename = "dat")]
pub data: String,
#[serde(rename = "tag")]
pub tags: Option<String>,
#[serde(rename = "ret")]
pub reply_to: String,
#[serde(rename = "shm")]
pub schema: String,
#[serde(rename = "now")]
pub timestamp: u64,
}
```
Services that receive this need to make sure the `destination` of their responses has the same value as the incoming request's `source`.
#### Outgoing Response
This is what a service sends as a response to an incoming request. The response message is defined as follows.
Your bot (server) needs to make sure to set `destination` to the same value as the incoming request's `source`.
> This response is pushed to `msgbus.system.reply`
```rust
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct JsonOutgoingResponse {
#[serde(rename = "ver")]
pub version: usize,
#[serde(rename = "ref")]
pub reference: Option<String>,
#[serde(rename = "dat")]
pub data: String,
#[serde(rename = "dst")]
pub destination: String,
#[serde(rename = "shm")]
pub schema: Option<String>,
#[serde(rename = "now")]
pub timestamp: u64,
#[serde(rename = "err")]
pub error: Option<JsonError>,
}
```
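On the service side the same redis API is used in the other direction. The following sketch (same go-redis assumption as the client example above, with a hypothetical command `example.echo`) waits for incoming requests on `msgbus.example.echo`, echoes the payload back, and pushes the response to `msgbus.system.reply` with `destination` copied from the request `source`, as required above.

```go
package main

import (
	"context"
	"encoding/json"
	"time"

	"github.com/redis/go-redis/v9"
)

// incomingRequest keeps only the fields this service needs from JsonIncomingRequest.
type incomingRequest struct {
	Version int    `json:"ver"`
	Source  string `json:"src"`
	Command string `json:"cmd"`
	Data    string `json:"dat"`
	ReplyTo string `json:"ret"`
	Schema  string `json:"shm"`
}

// outgoingResponse mirrors JsonOutgoingResponse above.
type outgoingResponse struct {
	Version     int     `json:"ver"`
	Reference   *string `json:"ref"`
	Data        string  `json:"dat"`
	Destination string  `json:"dst"`
	Schema      *string `json:"shm"`
	Timestamp   uint64  `json:"now"`
	Error       *struct {
		Code    uint32 `json:"code"`
		Message string `json:"message"`
	} `json:"err"`
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	schema := "application/json"

	for {
		// wait for the next request on the command queue (hypothetical command name)
		res, err := rdb.BLPop(ctx, 0, "msgbus.example.echo").Result()
		if err != nil {
			continue
		}

		var req incomingRequest
		if err := json.Unmarshal([]byte(res[1]), &req); err != nil {
			continue
		}

		resp := outgoingResponse{
			Version:     1,
			Data:        req.Data,   // echo the payload back unchanged
			Destination: req.Source, // must mirror the request source
			Schema:      &schema,
			Timestamp:   uint64(time.Now().Unix()),
		}

		payload, _ := json.Marshal(resp)
		// hand the response back to rmb-peer for signing/encryption/delivery
		rdb.LPush(ctx, "msgbus.system.reply", payload)
	}
}
```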
***
# End2End Encryption
The relay is totally opaque to the messages. Our implementation of the relay does not poke into messages except for the routing attributes (source and destination addresses, and federation information). But since the relay is designed to be hosted by 3rd parties (hence federation), you should not fully trust the relay or whoever is hosting it. Hence e2e encryption was needed.
As you already understand, e2e encryption is completely up to the peers to implement, and other peer implementations can agree on a completely different encryption and key-sharing algorithm (again, the relay does not care). In our implementation of e2e (rmb-peer), things go as follows (a code sketch is shown after the list):
- Each twin has a `pk` field on tfchain. When rmb-peer starts, it generates a `secp256k1` key from the same seed as the user's tfchain mnemonics. Note that this does not make the encryption key and the signing key related in any way; they are just derived from the same seed.
- On start, if the key is not already set on the twin object, the key is updated.
- If peer A is trying to send a message to peer B, but peer B does not have its `pk` set, peer A will send the message in plain-text format (please check the protobuf envelope type for details).
- If peer B has its public key set, peer A will prefer e2e encryption and does the following:
  - Derive a shared secret point with the `ecdh` algorithm; the key is the `sha256` of that point
    - `shared = ecdh(A.sk, B.pk)`
  - Create a 12-byte random nonce
  - Encrypt the data as `encrypted = aes-gcm.encrypt(shared-key, nonce, plain-data)`
  - Create the cipher as `cipher = nonce + encrypted`
  - Fill `envelope.cipher = cipher`
- On receiving a message, peer B does the same in the opposite direction:
  - Split nonce and data (the nonce is always the first 12 bytes)
  - Derive the same shared key
    - `shared = ecdh(B.sk, A.pk)`
  - `plain-data = aes-gcm.decrypt(shared-key, nonce, encrypted)`
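The following is a compact sketch of that flow, not the actual rmb-peer code: Go's standard library does not ship secp256k1 (K-256) support, so X25519 is used here purely as a stand-in curve to illustrate the ECDH → sha256 → AES-GCM steps and the `nonce + encrypted` layout.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// seal derives a shared key from our secret key and the peer's public key,
// then returns nonce + aes-gcm ciphertext, mirroring the sending steps above.
func seal(sk *ecdh.PrivateKey, pk *ecdh.PublicKey, plain []byte) ([]byte, error) {
	point, err := sk.ECDH(pk)
	if err != nil {
		return nil, err
	}
	shared := sha256.Sum256(point) // key = sha256(shared point)

	block, err := aes.NewCipher(shared[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block) // standard 12-byte nonce
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// cipher = nonce + encrypted
	return gcm.Seal(nonce, nonce, plain, nil), nil
}

// open is the receiving side: split nonce and data, derive the same key, decrypt.
func open(sk *ecdh.PrivateKey, pk *ecdh.PublicKey, blob []byte) ([]byte, error) {
	point, err := sk.ECDH(pk)
	if err != nil {
		return nil, err
	}
	shared := sha256.Sum256(point)

	block, _ := aes.NewCipher(shared[:])
	gcm, _ := cipher.NewGCM(block)
	nonce, data := blob[:gcm.NonceSize()], blob[gcm.NonceSize():]
	return gcm.Open(nil, nonce, data, nil)
}

func main() {
	curve := ecdh.X25519() // stand-in curve; rmb-peer uses secp256k1 (K-256)
	a, _ := curve.GenerateKey(rand.Reader)
	b, _ := curve.GenerateKey(rand.Reader)

	blob, _ := seal(a, b.PublicKey(), []byte("hello twin"))
	plain, _ := open(b, a.PublicKey(), blob)
	fmt.Println(string(plain)) // hello twin
}
```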
***
# Rate Limiting
To avoid abuse of the server, and prevent DoS attacks on the relay, a rate limiter is used to limit the number of clients' requests.\
It was decided that the rate limiter should only watch websocket connections of users, since all other requests/connections with users consume little resources, and since the relay handles the max number of users inherently.\
The limiter's configurations are passed as a command line argument `--limit <count>, <size>`. `<count>` represents the number of messages a twin is allowed to send in each time window, `<size>` represents the total size of messages in bytes a twin is allowed to send in each time window.\
Currently there are two implementations of the rate limiter:
- `NoLimit` which imposes no limits on users.
- `FixedWindowLimiter` which breaks the timeline into fixed time windows, and allows a twin to send a fixed number of messages, with a fixed total size, in each time window. If a twin exceeded their limits in some time window, their message is dropped, an error message is sent back to the user, the relay dumps a log about this twin, and the user gets to keep their connection with the relay. A sketch of this windowing logic is shown below.
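For illustration, fixed-window limiting can be sketched as follows. This is not the relay's actual implementation; the window length, data structures and twin-id type are assumptions, and the sketch only demonstrates the per-twin count/size counters that reset each window.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// limits per time window
type limits struct {
	count uint64 // max number of messages
	size  uint64 // max total size in bytes
}

// usage tracks a twin's consumption inside one window.
type usage struct {
	window int64 // index of the window the counters belong to
	count  uint64
	size   uint64
}

// FixedWindowLimiter keeps per-twin counters for the current window only.
type FixedWindowLimiter struct {
	mu       sync.Mutex
	window   time.Duration
	limits   limits
	counters map[uint32]*usage // keyed by twin id
}

func NewFixedWindowLimiter(window time.Duration, count, size uint64) *FixedWindowLimiter {
	return &FixedWindowLimiter{
		window:   window,
		limits:   limits{count: count, size: size},
		counters: map[uint32]*usage{},
	}
}

// Allow reports whether a message of `size` bytes from `twin` fits in the current window.
func (l *FixedWindowLimiter) Allow(twin uint32, size uint64) bool {
	l.mu.Lock()
	defer l.mu.Unlock()

	current := time.Now().UnixNano() / int64(l.window)
	u, ok := l.counters[twin]
	if !ok || u.window != current {
		// new window: reset the twin's counters
		u = &usage{window: current}
		l.counters[twin] = u
	}

	if u.count+1 > l.limits.count || u.size+size > l.limits.size {
		return false // message dropped, connection is kept
	}
	u.count++
	u.size += size
	return true
}

func main() {
	limiter := NewFixedWindowLimiter(time.Minute, 100, 1<<20) // 100 messages, 1 MiB per minute
	fmt.Println(limiter.Allow(42, 512))                       // true: twin 42 sends a 512-byte message
}
```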

View File

@ -1,18 +0,0 @@
<h1> Reliable Message Bus (RMB) </h1>
Reliable Message Bus is a secure communication channel that allows bots to communicate in a chat-like way. It makes it very easy to host a service or a set of functions to be used by anyone, even if your service is running behind NAT.
Out of the box RMB provides the following:
- Guaranteed authenticity of the messages.
  - You are always sure that the received message is from whoever it claims to be from.
- End-to-end encryption.
- Support for 3rd-party hosted relays.
  - Anyone can host a relay and people can use it safely, since there is no way messages can be inspected while using e2e. This is similar to Matrix home servers.
<h2> Table of Contents </h2>
- [Introduction to RMB](rmb_intro.md)
- [RMB Specs](rmb_specs.md)
- [RMB Peer](uml/peer.md)
- [RMB Relay](uml/relay.md)

View File

@ -1,44 +0,0 @@
<h1> RMB Peer </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We present an example of RMB peer. Note that the extension for this kind of file is `.wsd`.
## Example
```
@startuml RMB
participant "Local Process" as ps
database "Local Redis" as redis
participant "Rmb Peer" as peer
participant "Rmb Relay" as relay
note across: Handling Out Request
peer --> relay: Establish connection
ps -> redis: PUSH message on \n(msgbus.system.local)
redis -> peer : POP message from \n(msgbus.system.local)
peer -> relay: message pushed over the websocket to the relay
...
relay -> peer: received response
peer -> redis: PUSH over $msg.reply_to queue
...
note across: Handling In Request
relay --> peer: Received a request
peer -> redis: PUSH request to `msgbus.$cmd`
redis -> ps: POP new request msg
ps -> ps: Process message
ps -> redis: PUSH to (msgbus.system.reply)
redis -> peer: POP from (msgbus.system.reply)
peer -> relay: send response message
@enduml
```

View File

@ -1,40 +0,0 @@
<h1> RMB Relay </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We present an example of RMB relay. Note that the extension for this kind of file is `.wsd`.
## Example
```
@startuml RMB
actor "Peer 1" as peer1
participant "Relay 1" as relay1
participant "Relay 2" as relay2
actor "Peer 2" as peer2
actor "Peer 3" as peer3
peer1 --> relay1: Establish WS connection
peer2 --> relay1: Establish WS connection
peer3 --> relay2: Establish WS connection
peer1 -> relay1: Send message (Envelope)\n(destination "Peer 2")
relay1 -> peer2: Forward message directly
peer1 -> relay1: Send message (Envelope)\n(destination "Peer 3")
note right
"Peer 3" does not live on "Relay 1" hence federation is
needed
end note
relay1 -> relay2: Federation of message for\n Peer 3
relay2 -> peer3: Forward message directly
@enduml
```

Binary file not shown.

Before

Width:  |  Height:  |  Size: 16 KiB

View File

@ -1,12 +0,0 @@
@startuml
start
:power on node;
repeat
:mount boot flist;
:copy files to node root;
:reconfigure services;
:restart services;
repeat while (new flist version?) is (yes)
-> power off;
stop
@enduml

Binary file not shown.

Before

Width:  |  Height:  |  Size: 25 KiB

File diff suppressed because one or more lines are too long

View File

@ -1,50 +0,0 @@
@startuml
package "node-ready"{
[local-modprobe]
[udev-trigger]
[redis]
[haveged]
[cgroup]
[redis]
}
package "boot" {
[storaged]
[internet]
[networkd]
[identityd]
}
package "internal modules"{
[flistd]
[containerd]
[contd]
[upgraded]
[provisiond]
}
[local-modprobe]<-- [udev-trigger]
[udev-trigger] <-- [storaged]
[udev-trigger] <-- [internet]
[storaged] <-- [identityd]
[identityd] <- [networkd]
[internet] <-- [networkd]
[networkd] <-- [containerd]
[storaged] <-- [containerd]
[containerd] <-- [contd]
[storaged] <-- [flistd]
[networkd] <-- [flistd]
[flistd] <-- [upgraded]
[networkd] <-- [upgraded]
[networkd] <-- [provisiond]
[flistd] <-- [provisiond]
[contd] <-- [provisiond]
@enduml

Binary file not shown.

Before

Width:  |  Height:  |  Size: 32 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 47 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 45 KiB

View File

@ -1,37 +0,0 @@
@startuml
title Provisioning of a resource space
autonumber
actor User as user
' entity Farmer as farmer
entity Network as network
database Blockchain as bc
boundary Node as node
collections "Resource space" as rs
== Resource research ==
user -> network: Send resource request
activate network
network -> node: broadcast resource request
activate node
deactivate network
...broadcast to all nodes...
node -> user: Send offer
user -> user: inspect offer
== Resource space negotiation ==
user -> node: accept offer
user <-> node: key exchange
user -> bc: money is locked on blockchain
...
node -> rs: create resource space
activate rs
node -> user: notify space is created
node -> bc: notify he created the space
user -> rs: make sure it can access the space
user -> bc: validate can access the space
bc -> node: money is released to the node
deactivate node
== Usage of the space ==
user -> rs: deploy workload
@enduml

Binary file not shown.

Before

Width:  |  Height:  |  Size: 70 KiB

View File

@ -1,42 +0,0 @@
@startuml
title Provisioning a workload on the TFGrid
autonumber
actor "User" as user
actor "Farmer" as farmer
database "TF Explorer" as explorer
database Blockchain as blockchain
boundary Node as node
== Price definition ==
farmer -> explorer: Farmer set the price of its Resource units
== Resource research ==
activate explorer
user -> explorer: User look where to deploy the workload
user <- explorer: Gives detail about the farmer owning the node selected
== Resource reservation ==
user -> explorer: write description of the workload
explorer -> user: return a list of transaction to execute on the blockchain
== Reservation processing ==
user -> blockchain: execute transactions
explorer <-> blockchain: verify transactions are done
explorer -> explorer: reservation status changed to `deploy`
== Resource provisioning ==
node <-> explorer: read description of the workloads
node -> node: provision workload
alt provision successful
node -> explorer: write result of the provisioning
explorer -> blockchain: forward token to the farmer
blockchain -> farmer: tokens are available to the farmer
user <- explorer: read the connection information to his workload
else provision error
node -> explorer: write result of the provisioning
explorer -> explorer: cancel reservation
node -> node: free up capacity
explorer -> blockchain: token refunded to user
blockchain <-> user: tokens are available to the user again
end
deactivate explorer
== Resource monitoring ==
user <-> node: use / monitor workload
@enduml

View File

@ -1,20 +0,0 @@
@startuml
== Initialization ==
Module -> MsgBroker: Announce Module
MsgBroker -> Module: create bi-directional channel
== Utilisation ==
loop
DSL -> MsgBroker: put RPC message
activate MsgBroker
Module <- MsgBroker: pull RPC message
activate Module
Module -> Module: execute method
Module -> MsgBroker: put response
deactivate Module
MsgBroker -> DSL : read response
deactivate MsgBroker
end
@enduml

Binary file not shown.

Before

Width:  |  Height:  |  Size: 15 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 23 KiB

View File

@ -1,22 +0,0 @@
@startuml
actor User as user
box "To Be Defined" #LightBlue
participant Market
end box
entity Farmer as farmer
boundary Node as node
user -> farmer: Request space
activate farmer
farmer -> node: reserve space
activate node
farmer -> user: confirmation
deactivate farmer
...
note over user, node: communication allows only owner of space
user -> node: deploy services
...
user -> farmer: destroy space
farmer -> node: delete space
deactivate node
@enduml

View File

@ -1,30 +0,0 @@
#!/bin/bash
# This is the same as the first case at qemu/README.md in a single script
sudo ip link add zos0 type bridge
sudo ip link set zos0 up
sudo ip addr add 192.168.123.1/24 dev zos0
md5=$(echo $USER| md5sum )
ULA=${md5:0:2}:${md5:2:4}:${md5:6:4}
sudo ip addr add fd${ULA}::1/64 dev zos0
# you might want to add fe80::1/64
sudo ip addr add fe80::1/64 dev zos0
sudo iptables -t nat -I POSTROUTING -s 192.168.123.0/24 -j MASQUERADE
sudo ip6tables -t nat -I POSTROUTING -s fd${ULA}::/64 -j MASQUERADE
sudo iptables -t filter -I FORWARD --source 192.168.123.0/24 -j ACCEPT
sudo iptables -t filter -I FORWARD --destination 192.168.123.0/24 -j ACCEPT
sudo sysctl -w net.ipv4.ip_forward=1
sudo dnsmasq --strict-order \
--except-interface=lo \
--interface=zos0 \
--bind-interfaces \
--dhcp-range=192.168.123.20,192.168.123.50 \
--dhcp-range=::1000,::1fff,constructor:zos0,ra-stateless,12h \
--conf-file="" \
--pid-file=/var/run/qemu-dnsmasq-zos0.pid \
--dhcp-leasefile=/var/run/qemu-dnsmasq-zos0.leases \
--dhcp-no-override

View File

@ -1,61 +0,0 @@
# Adding a new package
Binary packages are added by providing [a build script](../../bins/); an automated workflow will then build and publish an flist with this binary.
For example, to add `rmb` binary, we need to provide a bash script with a `build_rmb` function:
```bash
RMB_VERSION="0.1.2"
RMB_CHECKSUM="4fefd664f261523b348fc48e9f1c980b"
RMB_LINK="https://github.com/threefoldtech/rmb-rs/releases/download/v${RMB_VERSION}/rmb"
download_rmb() {
echo "download rmb"
download_file ${RMB_LINK} ${RMB_CHECKSUM} rmb
}
prepare_rmb() {
echo "[+] prepare rmb"
github_name "rmb-${RMB_VERSION}"
}
install_rmb() {
echo "[+] install rmb"
mkdir -p "${ROOTDIR}/bin"
cp ${DISTDIR}/rmb ${ROOTDIR}/bin/
chmod +x ${ROOTDIR}/bin/*
}
build_rmb() {
pushd "${DISTDIR}"
download_rmb
popd
prepare_rmb
install_rmb
}
```
Note that you can just download a statically built binary instead of building it.
The other step is to add it to the workflow to be built automatically. In the [bins workflow](../../.github/workflows/bins.yaml), add your binary's job:
```yaml
jobs:
containerd:
...
...
rmb:
uses: ./.github/workflows/bin-package.yaml
with:
package: rmb
secrets:
token: ${{ secrets.HUB_JWT }}
```
Once e.g. a `devnet` release is published, your package will be built then pushed to an flist repository. After that, you can start your local zos node, wait for it to finish downloading, then you should find your binary available.

View File

@ -1,70 +0,0 @@
# Quick start
- [Quick start](#quick-start)
- [Starting a local zos node](#starting-a-local-zos-node)
- [Accessing node](#accessing-node)
- [Development](#development)
## Starting a local zos node
* Make sure `qemu` and `dnsmasq` are installed
* [Create a farm](../manual/manual.md#creating-a-farm)
* [Download a zos image](https://bootstrap.grid.tf/kernel/zero-os-development-zos-v3-generic-7e587e499a.efi)
* Make sure `zos0` bridge is allowed by qemu, you can add `allow zos0` in `/etc/qemu/bridge.conf` (create the file if it's not there)
* Setup the network using this script [this script](./net.sh)
Then, inside zos repository
```
make -C cmds
cd qemu
mv <downloaded image path> ./zos.efi
sudo ./vm.sh -n myzos-01 -c "farmer_id=<your farm id here> printk.devmsg=on runmode=dev"
```
You should see the qemu console and boot logs. Wait a while, and you can [browse farms](https://dashboard.dev.grid.tf/explorer/farms) to see that your node has been added/detected automatically.
To stop the machine you can do `Control + a` then `x`.
You can read more about setting up a qemu development environment and more network options [here](../../qemu/README.md).
## Accessing node
After booting up, the node should start downloading external packages; this will take some time depending on your internet connection.
See [how to ssh into it.](../../qemu/README.md#to-ssh-into-the-machine)
How to get the node IP?
Given the network script `dhcp-range`, it usually would be one of `192.168.123.43`, `192.168.123.44` or `192.168.123.45`.
Or you can simply install `arp-scan` then do something like:
```
✗ sudo arp-scan --interface=zos0 --localnet
Interface: zos0, type: EN10MB, MAC: de:26:45:e6:87:95, IPv4: 192.168.123.1
Starting arp-scan 1.9.7 with 256 hosts (https://github.com/royhills/arp-scan)
192.168.123.44 54:43:83:1f:eb:81 (Unknown)
```
Now we know for sure it's `192.168.123.44`.
To check logs and see if the downloading of packages is still in progress, you can simply do:
```
zinit log
```
## Development
While the overlay will enable you to boot with the binaries that have been built locally, sometimes you'll need to test changes to certain modules without restarting the node (or while intending to do so, e.g. for testing a migration).
For example if we changed anything related to `noded`, we can do the following:
Inside zos repository:
* Build binaries locally
* `make -C cmds`
* Copy the binary inside the machine
* `scp bin/zos root@192.168.123.44:/bin/noded`
* SSH into the machine then use `zinit` to restart it:
* `zinit stop noded && zinit start noded`

View File

@ -1,6 +0,0 @@
Development
===========
* [Quick start](./quickstart.md)
* [Testing](./testing.md)
* [Binary packages](./packages.md)

View File

@ -1,157 +0,0 @@
# Testing
Besides unit testing, you might want to test your change in an integrated environment. The following are two options to do so.
- [Testing](#testing)
- [Using grid/node client](#using-gridnode-client)
- [Using a test app](#using-a-test-app)
- [An example to talk to container and qsfs modules](#an-example-to-talk-to-container-and-qsfs-modules)
- [An example of directly using zinit package](#an-example-of-directly-using-zinit-package)
## Using grid/node client
You can simply use any grid client to deploy a workload of any type; you should specify your node's twin ID (and make sure you are on the correct network).
Inside the node, you can do `noded -id` and `noded -net` to get your current node ID and network. Also, [you can check your farm](https://dashboard.dev.grid.tf/explorer/farms) and get node information from there.
Another option is the golang [node client](../manual/manual.md#interaction).
While deploying on your local node, logs with `zinit log` would be helpful to see any possible errors and to debug your code.
## Using a test app
If you need to test a specific module or functionality, you can create a simple test app inside e.g. [tools directory](../../tools/).
Inside this simple test app, you can import any module or talk to another one using [zbus](../internals/internals.md#ipc).
### An example to talk to container and qsfs modules
```go
// tools/del/main.go
package main
import (
"context"
"flag"
"strings"
"time"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/threefoldtech/zbus"
"github.com/threefoldtech/zos/pkg"
"github.com/threefoldtech/zos/pkg/stubs"
)
func main() {
zerolog.SetGlobalLevel(zerolog.DebugLevel)
zbus, err := zbus.NewRedisClient("unix:///var/run/redis.sock")
if err != nil {
log.Err(err).Msg("cannot init zbus client")
return
}
var workloadType, workloadID string
flag.StringVar(&workloadType, "type", "", "workload type (qsfs or container)")
flag.StringVar(&workloadID, "id", "", "workload ID")
flag.Parse()
if workloadType == "" || workloadID == "" {
log.Error().Msg("you need to provide both type and id")
return
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if workloadType == "qsfs" {
qsfsd := stubs.NewQSFSDStub(zbus)
err := qsfsd.SignalDelete(ctx, workloadID)
if err != nil {
log.Err(err).Msg("cannot delete qsfs workload")
}
} else if workloadType == "container" {
args := strings.Split(workloadID, ":")
if len(args) != 2 {
log.Error().Msg("container id must contain namespace, e.g. qsfs:wl129")
}
containerd := stubs.NewContainerModuleStub(zbus)
err := containerd.SignalDelete(ctx, args[0], pkg.ContainerID(args[1]))
if err != nil {
log.Err(err).Msg("cannot delete container workload")
}
}
}
```
Then we can simply build, upload and execute this in our node:
```
cd tools/del
go build
scp del root@192.168.123.44:/root/del
```
Then ssh into `192.168.123.44` and simply execute your test app:
```
./del
```
### An example of directly using zinit package
```go
// tools/zinit_test
package main
import (
"encoding/json"
"fmt"
"regexp"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/threefoldtech/zos/pkg/zinit"
)
func main() {
zerolog.SetGlobalLevel(zerolog.DebugLevel)
z := zinit.New("/var/run/zinit.sock")
regex := fmt.Sprintf(`^ip netns exec %s %s`, "ndmz", "/sbin/udhcpc")
_, err := regexp.Compile(regex)
if err != nil {
log.Err(err).Msgf("cannot compile %s", regex)
return
}
// try match
matched, err := z.Matches(zinit.WithExecRegex(regex))
if err != nil {
log.Err(err).Msg("cannot filter services")
}
matchedStr, err := json.Marshal(matched)
if err != nil {
log.Err(err).Msg("cannot convert matched map to json")
}
log.Debug().Str("matched", string(matchedStr)).Msg("matched services")
// // try destroy
// err = z.Destroy(10*time.Second, matched...)
// if err != nil {
// log.Err(err).Msg("cannot destroy matched services")
// }
}
```

View File

@ -1,6 +0,0 @@
# FAQ
This section consolidates all the common questions we get about how 0-OS works and how to operate it.
- **Q**: What is the preferred configuration for my RAID controller when running 0-OS?
  **A**: 0-OS's goal is to expose raw capacity, so it is best to give it the most direct access to the disks possible. In the case of RAID controllers, it is best to set them up in [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#JBOD) mode if available.

View File

@ -1,11 +0,0 @@
# Services Boot Sequence
Here is the dependency graph of all the services started by 0-OS:
![boot sequence](../assets/boot_sequence.png)
## Pseudo boot steps
Both `node-ready` and `boot` are not actual services; instead, they are there to define a `boot stage`. For example, once the `node-ready` service is (ready), it means all crucial system services defined by 0-initramfs are now running.
The `boot` service is similar, but guarantees that some 0-OS services are running (for example `storaged`) before starting other services like `flistd`, which requires `storaged`.

View File

@ -1,89 +0,0 @@
<h1>Capacity</h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [System reserved capacity](#system-reserved-capacity)
- [Reserved Memory](#reserved-memory)
- [Reserved Storage](#reserved-storage)
- [User Capacity](#user-capacity)
- [Memory](#memory)
- [Storage](#storage)
***
## Introduction
This document describes how ZOS handles the following tasks:
- Reserving system resources
  - Memory
  - Storage
- Calculating the free usable capacity for user workloads
## System reserved capacity
ZOS always reserves some amount of the available physical resources for its own operation. The system tries to be as protective as possible of its critical services, to make sure that the node is always reachable and usable even if it's under heavy load.
ZOS makes sure it reserves memory and storage (but not CPU) as per the following:
### Reserved Memory
ZOS reserves 10% of the available system memory for basic services AND operation overhead. The operation overhead can occur as a side effect of running user workloads. For example, a user network, while in theory not consuming any memory, does in fact consume some memory (kernel buffers, etc.). The same goes for a VM: a user VM can be assigned, say, 5G, but the process running the VM can/will take a few extra megabytes to operate.
This is why we decided to play it safe and reserve 10% of total system memory for the system overhead, with a **MIN** reserved memory of 2GB. For example, a node with 32GB of memory reserves max(3.2GB, 2GB) = 3.2GB.
```python
reserved = max(total_in_gb * 0.1, 2)  # in GB, never less than 2GB
```
### Reserved Storage
While ZOS does not require installation, it needs to download and store many things to operate correctly. This includes the following:
- Node identity. Information about the node id and keys
- The system binaries, which include everything ZOS needs to join the grid and operate as expected
- Workload flists. These are the flists of the user workloads. They are downloaded on demand, so they don't always exist.
- State information. Tracking information maintained by ZOS to track the state of workloads, ownership, and more.
This is why the system, on first start, allocates and reserves a part of the available SSD storage, called `zos-cache`. Initially this is `5G` (it was 100G in older versions), but because of the `dynamic` nature of the cache we can't fix it at `5G`.
The space the system needs to reserve can change dramatically based on the amount of workloads running on the system. For example, if many users are running many different VMs, the system will need to download (and cache) different VM images, hence requiring more cache.
This is why the system periodically checks the reserved storage and then dynamically expands or shrinks it to a more suitable value, in increments of 5G. Expansion happens around the 20% mark of the current cache size, and shrinking if usage goes below 20%.
## User Capacity
All workloads require some sort of resource(s) to run, and that is actually what the user has to pay for. Any workload can consume resources of one of the following types:
- CU (compute unit in vCPU)
- MU (memory unit in bytes)
- NU (network unit in bytes)
- SU (ssd storage in bytes)
- HU (hdd storage in bytes)
A workload, based on its type, can consume one or more of those resource types. Some workloads will have a well-known "size" on creation; others might be dynamic and won't be known until later.
For example, a disk workload's SU consumption is known ahead of time, unlike the NU used by a network, which will only be known after usage over a certain period of time.
A single deployment can have multiple workloads, each requiring a certain amount of one or more capacity types (listed above). For each workload, ZOS computes the amount of resources needed, and then checks if it can provide this amount of capacity.
> This means that a deployment that defines 2 VMs can partially succeed, deploying one of the VMs but not the other, if the amount of resources requested is higher than what the node can provide.
### Memory
How the system decides if there is enough memory to run a certain workload that demands MU resources goes as follows (a sketch is shown after the list):
- Compute the "theoretically used" memory by all user workloads excluding `self`. This is basically the sum of all MU units consumed by all active workloads (as defined by their corresponding deployments, not as actually used in the system).
- The theoretically used memory is topped up with the system reserved memory.
- The system checks the actually used memory on the system; this is done simply by `actual_used = memory.total - memory.available`
- The system can now simply `assume` an accurate used memory by doing `used = max(actual_used, theoretically_used)`
- Then `available = total - used`
- Then it simply checks that the `available` memory is enough to hold the requested workload memory.
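As a rough sketch of the steps above (not the actual provisiond code), with the totals assumed to be gathered elsewhere:

```go
package main

import "fmt"

// memoryStats is assumed to come from the system (e.g. /proc/meminfo).
type memoryStats struct {
	Total     uint64 // bytes
	Available uint64 // bytes
}

// canFitWorkload applies the steps described above:
// theoretical = sum of deployed MU + system reserved, actual = total - available,
// used = max(actual, theoretical), available = total - used.
func canFitWorkload(mem memoryStats, workloadsMU, systemReserved, requested uint64) bool {
	theoretical := workloadsMU + systemReserved
	actual := mem.Total - mem.Available

	used := actual
	if theoretical > used {
		used = theoretical
	}

	var available uint64
	if mem.Total > used {
		available = mem.Total - used
	}
	return available >= requested
}

func main() {
	gb := uint64(1 << 30)
	mem := memoryStats{Total: 32 * gb, Available: 20 * gb}
	// 8 GB already promised to workloads, ~3.2 GB reserved, new VM asks for 4 GB
	fmt.Println(canFitWorkload(mem, 8*gb, 3*gb+200*(1<<20), 4*gb)) // true
}
```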
### Storage
Storage is much simpler to allocate than memory. It's completely left to the storage subsystem to find out whether it can fit the requested storage on the available physical disks or not; if not possible, the workload is marked as error.
Storage tries to find the requested space based on type (SU or HU), then finds the optimal way to fit it on the available disks, or spins up a new one if needed.

View File

@ -1,14 +0,0 @@
# Compatibility list
This document tracks all the hardware that has been tested, the issues encountered and possible workarounds.
**Legend**
✅ : fully supported
⚠️ : supported with some tweaking
🛑 : not supported
| Vendor | Hardware | Support | Issues | Workaround |
| --- | --- | --- | --- | --- |
| Supermicro | SYS-5038ML-H8TRF | ✅ | | |
| Gigabyte Technology Co | AB350N-Gaming WIFI | ✅ | | |

View File

@ -1,106 +0,0 @@
<h1>Container Module</h1>
<h2> Table of Contents </h2>
- [ZBus](#zbus)
- [Home Directory](#home-directory)
- [Introduction](#introduction)
- [zinit unit](#zinit-unit)
- [Interface](#interface)
***
## ZBus
The container module is available on zbus over the following channel
| module | object | version |
|--------|--------|---------|
| container|[container](#interface)| 0.0.1|
## Home Directory
contd keeps some data in the following locations
| directory | path|
|----|---|
| root| `/var/cache/modules/containerd`|
## Introduction
The container module is a proxy to [containerd](https://github.com/containerd/containerd). The proxy provides integration with zbus.
The implementation is at the moment straightforward: it includes preparing the OCI spec for the container and the tenant containerd namespace, setting up proper capabilities, and finally creating the container instance on `containerd`.
The module is fully stateless; all container information is queried at runtime from `containerd`.
### zinit unit
`contd` must run after containerd is running and the node boot process is complete. Since it doesn't keep state, no dependency on `storaged` is needed.
```yaml
exec: contd -broker unix:///var/run/redis.sock -root /var/cache/modules/containerd
after:
- containerd
- boot
```
## Interface
```go
package pkg
// ContainerID type
type ContainerID string
// NetworkInfo defines a network configuration for a container
type NetworkInfo struct {
// Currently a container can only join one (and only one)
// network namespace that has to be pre defined on the node
// for the container tenant
// Containers don't need to know about anything about bridges,
// IPs, wireguards since this is all is only known by the network
// resource which is out of the scope of this module
Namespace string
}
// MountInfo defines a mount point
type MountInfo struct {
Source string // source of the mount point on the host
Target string // target of mount inside the container
Type string // mount type
Options []string // mount options
}
//Container creation info
type Container struct {
// Name of container
Name string
// path to the rootfs of the container
RootFS string
// Env env variables to container in format {'KEY=VALUE', 'KEY2=VALUE2'}
Env []string
// Network network info for container
Network NetworkInfo
// Mounts extra mounts for container
Mounts []MountInfo
// Entrypoint the process to start inside the container
Entrypoint string
// Interactivity enable Core X as PID 1 on the container
Interactive bool
}
// ContainerModule defines rpc interface to containerd
type ContainerModule interface {
// Run creates and starts a container on the node. It also auto
// starts command defined by `entrypoint` inside the container
// ns: tenant namespace
// data: Container info
Run(ns string, data Container) (ContainerID, error)
// Inspect, return information about the container, given its container id
Inspect(ns string, id ContainerID) (Container, error)
Delete(ns string, id ContainerID) error
}
```

View File

@ -1,74 +0,0 @@
<h1>Flist Module</h1>
<h2> Table of Contents </h2>
- [Zbus](#zbus)
- [Home Directory](#home-directory)
- [Introduction](#introduction)
- [Public interface ](#public-interface-)
- [zinit unit](#zinit-unit)
***
## Zbus
Flist module is available on zbus over the following channel:
| module | object | version |
|--------|--------|---------|
|flist |[flist](#public-interface)| 0.0.1
## Home Directory
flist keeps some data in the following locations:
| directory | path|
|----|---|
| root| `/var/cache/modules/containerd`|
## Introduction
This module is responsible for "mounting an flist" in the filesystem of the node. The mounted directory contains all the files required by containers or (in the future) VMs.
The flist module interface is very simple. It does not expose any way to choose where to mount the flist, nor does it keep any reference to containers or VMs. The only functionality is to mount a given flist and return the location where it is mounted. It is up to the layer above to do something useful with this information.
The flist module itself doesn't contain the logic to understand the flist format or to run the fuse filesystem. It is just a wrapper that manages [0-fs](https://github.com/threefoldtech/0-fs) processes.
Its only job is to download the flist, prepare the isolation of all the data and then start 0-fs with the proper arguments.
## Public interface [![GoDoc](https://godoc.org/github.com/threefoldtech/zos/pkg/flist?status.svg)](https://godoc.org/github.com/threefoldtech/zos/pkg/flist)
```go
//Flister is the interface for the flist module
type Flister interface {
// Mount mounts an flist located at url using the 0-db located at storage
// in a RO mode. Note that there is no way you can unmount an RO flist because
// it can be shared by many users; it's then up to the system to decide if the
// mount is no longer needed and clean it up
Mount(name, url string, opt MountOptions) (path string, err error)
// UpdateMountSize change the mount size
UpdateMountSize(name string, limit gridtypes.Unit) (path string, err error)
// Umount a RW mount. this only unmounts the RW layer and remove the assigned
// volume.
Unmount(name string) error
// HashFromRootPath returns flist hash from a running g8ufs mounted with NamedMount
HashFromRootPath(name string) (string, error)
// FlistHash returns md5 of flist if available (requesting the hub)
FlistHash(url string) (string, error)
Exists(name string) (bool, error)
}
```
## zinit unit
The zinit unit file of the module specifies the command line, the test command, and the order in which the services need to be booted.
The flist module depends on the storage and network packages. This is because it needs connectivity to download flists and data, and it needs storage to be able to cache the data once downloaded.
Flist doesn't do anything special on the system except creating a bunch of directories it will use during its lifetime.

View File

@ -1,121 +0,0 @@
# Gateway Module
## ZBus
Gateway module is available on zbus over the following channel
| module | object | version |
| ------- | --------------------- | ------- |
| gateway | [gateway](#interface) | 0.0.1 |
## Home Directory
gateway keeps some data in the following locations
| directory | path |
| --------- | ---------------------------- |
| root | `/var/cache/modules/gateway` |
The directory `/var/cache/modules/gateway/proxy` contains the route information used by traefik to forward traffic.
## Introduction
The gateway module is used to register traefik routes and services to act as a reverse proxy. It's the backend supporting two kinds of workloads: `gateway-fqdn-proxy` and `gateway-name-proxy`.
For the FQDN type, it receives the domain and a list of backends in the form `http://ip:port` or `https://ip:port` and registers a route for this domain, forwarding traffic to these backends. It's a requirement that the domain resolves to the gateway public IP. The `tls_passthrough` parameter determines whether TLS termination happens on the gateway or in the backends. When it's true, the backends must be in the form `https://ip:port` and must be https-enabled servers.
The name type is the same as the FQDN type, except that the `name` parameter is added as a prefix to the gateway domain to determine the FQDN. It's forbidden to use an FQDN type workload to reserve a domain managed by the gateway.
The fqdn type is enabled only if there's a public config on the node. The name type works only if a domain exists in the public config. To make a full-fledged gateway node, these DNS records are required:
```
gatewaydomain.com                    A      ip.of.the.gateway
*.gatewaydomain.com                  CNAME  gatewaydomain.com
__acme-challenge.gatewaydomain.com   NS     gatewaydomain.com
```
### zinit unit
```yaml
exec: gateway --broker unix:///var/run/redis.sock --root /var/cache/modules/gateway
after:
- boot
```
## Implementation details
Traefik is used as the reverse proxy forwarding traffic to upstream servers. All workloads deployed on the node are associated with a domain that resolves to the node IP. In the name workload case, it's a subdomain of the gateway main domain. In the FQDN case, the user must create a DNS A record pointing it to the node IP. The node by default redirects all http traffic to https.
When an https request reaches the node, it looks at the domain and determines the correct service that should handle the request. The service definitions are in `/var/cache/modules/gateway/proxy/` and are hot-reloaded by traefik every time a service is added to or removed from it. ZOS currently supports enabling `tls_passthrough`, in which case the https request is passed as-is to the backend (at the TCP level). The default is `tls_passthrough` false, which means the node terminates the TLS traffic and then forwards the request as http to the backend.
Example of a FQDN service definition with tls_passthrough enabled:
```yaml
tcp:
routers:
37-2039-testname-route:
rule: HostSNI(`remote.omar.grid.tf`)
service: 37-2039-testname
tls:
passthrough: "true"
services:
37-2039-testname:
loadbalancer:
servers:
- address: 137.184.106.152:443
```
Example of a "name" service definition with tls_passthrough disabled:
```yaml
http:
routers:
37-1976-workloadname-route:
rule: Host(`workloadname.gent01.dev.grid.tf`)
service: 40-1976-workloadname
tls:
certResolver: dnsresolver
domains:
- sans:
- '*.gent01.dev.grid.tf'
services:
40-1976-workloadname:
loadbalancer:
servers:
- url: http://[backendip]:9000
```
The `certResolver` option has two valid values, `resolver` and `dnsresolver`. The `resolver` is an http resolver and is used in FQDN services with `tls_passthrough` disabled. It uses the http challenge to generate a single-domain certificate. The `dnsresolver` is used for name services with `tls_passthrough` disabled. The `dnsresolver` is responsible for generating a wildcard certificate to be used for all subdomains of the gateway domain. Its flow is described below.
The CNAME record is used to make all subdomains (reserved or not) resolve to the ip of the gateway. Generating a wildcard certificate requires adding a TXT record at `__acme-challenge.gatewaydomain.com`. The NS record is used to delegate this specific subdomain to the node. So if someone did `dig TXT __acme-challenge.gatewaydomain.com`, the query is served by the node, not the DNS provider used for the gateway domain.
Traefik has, as a config parameter, multiple dns [providers](https://doc.traefik.io/traefik/https/acme/#providers) to communicate with when it wants to add the required TXT record. For non-supported providers, a bash script can be provided to do the record generation and clean up (i.e. External program). The bash [script](https://github.com/threefoldtech/zos/blob/main/pkg/gateway/static/cert.sh) starts dnsmasq managing a dns zone for the `__acme-challenge` subdomain with the given TXT record. It then kills the dnsmasq process and removes the config file during cleanup.
## Interface
```go
type Backend string
// GatewayFQDNProxy definition. this will proxy name.<zos.domain> to backends
type GatewayFQDNProxy struct {
// FQDN the fully qualified domain name to use (cannot be present with Name)
FQDN string `json:"fqdn"`
// Passthroug whether to pass tls traffic or not
TLSPassthrough bool `json:"tls_passthrough"`
// Backends are list of backend ips
Backends []Backend `json:"backends"`
}
// GatewayNameProxy definition. this will proxy name.<zos.domain> to backends
type GatewayNameProxy struct {
// Name the fully qualified domain name to use (cannot be present with Name)
Name string `json:"name"`
// Passthroug whether to pass tls traffic or not
TLSPassthrough bool `json:"tls_passthrough"`
// Backends are list of backend ips
Backends []Backend `json:"backends"`
}
type Gateway interface {
SetNamedProxy(wlID string, prefix string, backends []string, TLSPassthrough bool) (string, error)
SetFQDNProxy(wlID string, fqdn string, backends []string, TLSPassthrough bool) error
DeleteNamedProxy(wlID string) error
Metrics() (GatewayMetrics, error)
}
```

View File

@ -1,99 +0,0 @@
# 0-OS, a bit of history and introduction to Version 2
## Once upon a time
----
A few years ago, we were trying to come up with some solutions to the problem of self-healing IT.
We boldly stated that the current model of cloud computing in huge data centers is not going to be able to scale to fit the demand in IT capacity.
The approach we took to solve this problem was to enable localized compute and storage units at the edge of the network, close to where it is needed.
That basically meant that if we were to deploy physical hardware to the edges, near the users, we would have to allow information providers to deploy their solutions on that edge network and hardware. That also means sharing hardware resources between users, where we would have to make damn sure no one can peek around in things that are not theirs.
When we talk about sharing capacity in a secure environment, virtualization comes to mind. It's not a new technology and it has been around for quite some time. This solution comes with a cost though. Virtual machines, emulating a full hardware platform on real hardware, are costly in terms of used resources and eat away at the already scarce resources we want to provide for our users.
Containerizing technologies were starting to get some hype at the time. Containers provide for basically the same level of isolation as Full Virtualisation, but are a lot less expensive in terms of resource utilization.
With that in mind, we started designing the first version of 0-OS. The required features were:
- be able to be fully in control of the hardware
- give the possibility to different users to share the same hardware
- deploy this capacity at the edge, close to where it is needed
- the system needs to self-heal. Because of their location and sheer scale, manual maintenance was not an option. Self-healing is a broad topic and will require a lot of experience and fine-tuning, but it was meant to culminate at some point in most of the actions that sysadmins execute being automated
- have as small an attack surface as possible, both against remote types of attack and to protect users from each other
That thought process resulted in 0-OS v1: a Linux kernel with the minimal components on top needed to provide these features.
In the first incarnation of 0-OS, the core framework was a single big binary that got started as the first process of the system (PID 1). All the management features were exposed through an API that was only accessible locally.
The idea was to have an orchestration system running on top that would be responsible for deploying Virtual Machines and Containers on the system using that API.
This API exposes 3 main primitives:
- networking: zerotier, vlan, macvlan, bridge, openvswitch...
- storage: plain disk, 0-db, ...
- compute: VM, containers
That was all great and it allowed us to learn a lot, but some limitations started to appear. Here is a non-exhaustive list of the limitations we had to face after a couple of years of utilization:
- Difficulty pushing new versions and fixes to the nodes. The fact that 0-OS was a single process running as PID 1 forced us to completely reboot the node every time we wanted to push an update.
- The API, while powerful, still required some logic on top to actually deploy usable solutions.
- We noticed that some features we implemented were never or extremely rarely used. This was just increasing the possible attack surface for no real benefit.
- The main networking solution we chose at the time, zerotier, was not scaling as well as we hoped.
- We wrote a lot of code ourselves instead of relying on existing open source libraries that would have made the task a lot easier. Such libraries are also a lot more mature and have had a lot more exposure for ironing out possible bugs and vulnerabilities than anything we could have created and tested ourselves with the little resources we have at hand.
## Now what ?
With the knowledge and lessons gathered during these first years of usage, we
concluded that trying to fix the already existing codebase would be cumbersome
and we also wanted to avoid any technical debt that could haunt us for years
after. So we decided for a complete rewrite of that stack, taking a new and
fully modular approach, where every component could be easily replaced and
upgraded without the need for a reboot.
Hence Version 2 saw the light of day.
Instead of trial and error, and muddling along trying to fit new features in
that big monolithic codebase, we wanted to be sure that the components were
reduced to a more manageable size, having a clearly cut Domain Separation.
Instead of creating solutions waiting for a problem, we started looking at things the other way around. Which is logical, as by now, we learned what the real puzzles to solve were, albeit sometimes by painful experience.
## Tadaa!
----
The [first commit](https://github.com/threefoldtech/zosv2/commit/7b783c888673d1e9bc400e4abbb17272e995f5a4) of the v2 repository took place on the 11th of February 2019.
We are now 6 months in, and about to bake the first release of 0-OS v2.
Clocking in at almost 27 KLoc, it was a very busy half-year (admittedly, the specs and docs are in that count too ;-) ).
Let's go over the main design decisions that were made and explain briefly each component.
While this is just an introduction, we'll add more articles digging deeper in the technicalities and approaches of each component.
## Solutions to puzzles (there are no problems)
----
**UPDATES**
One of the first puzzles we wanted to solve was the difficulty to push upgrades.
In order to solve that, we designed 0-OS components as completely stand-alone modules. Each subsystem, be it storage, networking or containers/VMs, is managed by its own component (mostly a daemon), and they communicate with each other through a local bus. And as we said, each component can then be upgraded separately, together with any necessary data migrations that could be required.
**WHAT API?**
The second big change is our approach to the API, or better, lack thereof.
In V2 we dropped the idea to expose the primitives of the Node over an API.
Instead, all the required knowledge to deploy workloads is directly embedded in 0-OS.
So in order to have the node deploy a workload, we created a blueprint-like system where the user describes their requirements in terms of compute power, storage and networking, and the node applies that blueprint to make it reality.
That approach has a few advantages:
- It greatly reduces the attack surface of the node because there is no more direct interaction between a user and a node.
- It also allows us to have greater control over how things are organized in the node itself. The node, being its own boss, can decide to re-organize itself whenever needed to optimize the capacity it can provide.
- Having a blueprint with requirements gives the grid the possibility to verify that blueprint on multiple levels before applying it. That is, a blueprint can be verified for validity and signatures both at the top level and at the node level before any other action is executed.
**PING**
The last major change is how we want to handle networking.
The solution used during the lifetime of V1 exposed its limitations when we started scaling our networks to hundreds of nodes.
So here again we started from scratch and created our own overlay network solution.
That solution is based on the 'new kid on the block' in terms of VPN: [Wireguard](https://wireguard.io). Its approach and usage will be fully explained in the next 0-OS article.
For the eager ones of you, there are some specifications and also some documentation [here](https://github.com/threefoldtech/zosv2/tree/master/docs/network) and [there](https://github.com/threefoldtech/zosv2/tree/master/specs/network).
## That's All, Folks (for now)
So much for this little article as an intro to the brave new world of 0-OS.
The Zero-OS team commits to keeping you regularly updated on its progress and the new features that will surely be added, and, for the so inclined, to adding a lot more content for techies on how to actually use this novel beast.
[Till next time](https://youtu.be/b9434BoGkNQ)

View File

@ -1,143 +0,0 @@
<h1> Node ID Generation</h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [ZBus](#zbus)
- [Home Directory](#home-directory)
- [Introduction](#introduction-1)
- [On Node Booting](#on-node-booting)
- [ID generation](#id-generation)
- [Cryptography](#cryptography)
- [zinit unit](#zinit-unit)
- [Interface](#interface)
***
## Introduction
We explain the node ID generation process.
## ZBus
Identity module is available on zbus over the following channel
| module | object | version |
|--------|--------|---------|
| identity|[manager](#interface)| 0.0.1|
## Home Directory
identity keeps some data in the following locations
| directory | path|
|----|---|
| root| `/var/cache/modules/identity`|
## Introduction
The identity manager is responsible for maintaining the node identity (public key). The manager makes sure the node has one valid ID during the entire lifetime of the node. It also provides services to sign, encrypt and decrypt data using the node identity.
On first boot, the identity manager will generate an ID and then persist this ID for life.
Since the identity daemon is the only one that can access the node private key, it provides an interface to sign, verify and encrypt data. These methods are available for other modules on the local node to use.
## On Node Booting
- Check if node already has a seed generated
- If yes, load the node identity
- If not, generate a new ID
- Start the zbus daemon.
## ID generation
At this stage of development, the ID generated by identityd is the base58-encoded public key of an ed25519 key pair.
The key pair itself is generated from a random seed of 32 bytes. It is this seed that is actually saved on the node, and during boot the key pair is re-generated from this seed if it exists.
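As an illustration of that derivation (a sketch, not identityd's actual code), assuming the `github.com/mr-tron/base58` package for the encoding:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"

	"github.com/mr-tron/base58"
)

func main() {
	// the 32-byte seed is what gets persisted on the node
	seed := make([]byte, ed25519.SeedSize)
	if _, err := rand.Read(seed); err != nil {
		panic(err)
	}

	// on every boot the key pair is re-generated from the saved seed
	private := ed25519.NewKeyFromSeed(seed)
	public := private.Public().(ed25519.PublicKey)

	// the node ID is the base58 encoding of the public key
	nodeID := base58.Encode(public)
	fmt.Println("node id:", nodeID)
}
```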
## Cryptography
The signing and encryption capabilities of the identity module rely on this ed25519 key pair.
For signing, it directly uses the key pair.
For public key encryption, the ed25519 key pair is converted to its curve25519 equivalent and then used to encrypt the data.
### zinit unit
The zinit unit file of the module specifies the command line, the test command, and the order in which the services need to be booted.
`identityd` requires `storaged` to make sure the seed is persisted across reboots, so that the node keeps the same ID for its full lifetime.
The identityd daemon is only considered running if the seed file exists.
```yaml
exec: /bin/identityd
test: test -e /var/cache/modules/identity/seed.txt
after:
- storaged
```
## Interface
For an up-to-date interface, please check the code [here](https://github.com/threefoldtech/zos/blob/main/pkg/identity.go)
```go
package pkg
// Identifier is the interface that defines
// how an object can be used as an identity
type Identifier interface {
Identity() string
}
// StrIdentifier is a helper type that implement the Identifier interface
// on top of simple string
type StrIdentifier string
// Identity implements the Identifier interface
func (s StrIdentifier) Identity() string {
return string(s)
}
// IdentityManager interface.
type IdentityManager interface {
// NodeID returns the node id (public key)
NodeID() StrIdentifier
// NodeIDNumeric returns the node registered ID.
NodeIDNumeric() (uint32, error)
// FarmID return the farm id this node is part of. this is usually a configuration
// that the node is booted with. An error is returned if the farmer id is not configured
FarmID() (FarmID, error)
// Farm returns name of the farm. Or error
Farm() (string, error)
//FarmSecret get the farm secret as defined in the boot params
FarmSecret() (string, error)
// Sign signs the message with privateKey and returns a signature.
Sign(message []byte) ([]byte, error)
// Verify reports whether sig is a valid signature of message by publicKey.
Verify(message, sig []byte) error
// Encrypt encrypts message with the public key of the node
Encrypt(message []byte) ([]byte, error)
// Decrypt decrypts message with the private of the node
Decrypt(message []byte) ([]byte, error)
// EncryptECDH aes encrypt msg using a shared key derived from private key of the node and public key of the other party using Elliptic curve Diffie Helman algorithm
// the nonce if prepended to the encrypted message
EncryptECDH(msg []byte, publicKey []byte) ([]byte, error)
// DecryptECDH decrypt aes encrypted msg using a shared key derived from private key of the node and public key of the other party using Elliptic curve Diffie Helman algorithm
DecryptECDH(msg []byte, publicKey []byte) ([]byte, error)
// PrivateKey sends the keypair
PrivateKey() []byte
}
// FarmID is the identification of a farm
type FarmID uint32
```
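To make the interface a bit more concrete, here is a small illustrative sketch (not part of zos) of how another module could consume an `IdentityManager` to sign a payload and verify the signature. In practice the manager would be a zbus stub obtained at runtime, which is glossed over here.
```go
package pkg

import "fmt"

// SignAndCheck is an illustrative helper (not part of zos) that signs a
// message with the node identity and immediately verifies the signature.
func SignAndCheck(mgr IdentityManager, message []byte) error {
	sig, err := mgr.Sign(message)
	if err != nil {
		return fmt.Errorf("sign: %w", err)
	}
	if err := mgr.Verify(message, sig); err != nil {
		return fmt.Errorf("verify: %w", err)
	}
	fmt.Println("message signed by node", mgr.NodeID())
	return nil
}
```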

View File

@ -1,8 +0,0 @@
<h1> Identity Module </h1>
The identity daemon is responsible for two major operations that are crucial for node operation.
<h2> Table of Contents </h2>
- [Node ID Generation](identity.md)
- [Node Live Software Update](upgrade.md)

View File

@ -1,98 +0,0 @@
<h1> Node Upgrade</h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Philosophy](#philosophy)
- [Booting a new node](#booting-a-new-node)
- [Runtime upgrade of a node](#runtime-upgrade-of-a-node)
- [Technical](#technical)
- [Flist layout](#flist-layout)
***
## Introduction
We provide information concerning node upgrade with ZOS. We also explain the philosophy behind ZOS.
## Philosophy
0-OS is meant to be a black box no one can access. While this provides some nice security features, it also makes the node harder to manage, especially when it comes to updates and upgrades.
Hence, zos only trusts a few sources for upgrade packages. When the node boots up, it checks these sources for the latest release and makes sure all the local binaries are up-to-date before continuing the boot. The flist source must be rock-solid secure; that is another topic for separate documentation.
The run mode defines which flist the node is going to use to boot. The run mode can be specified by passing `runmode=<mode>` in the kernel boot params. Currently we have the following run modes (a small parsing sketch follows the list):
- dev: An ephemeral network only set up to develop and test new features. Can be created and reset at any time.
- test: Mostly stable features that need to be tested at scale, allowing preview and testing of new features. Always the latest and greatest. This network can be reset sometimes, but should be relatively stable.
- prod: Releases of the stable version. Used to run the real grid with real money. Cannot ever be reset. Only stable and battle-tested features reach this level.
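Purely as an illustration of how such a flag can be read (not necessarily how zos itself parses it), a minimal Go sketch that extracts `runmode=` from the kernel command line could look like this:
```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// runMode returns the value of the runmode= kernel parameter,
// falling back to "prod" when the flag is absent.
func runMode(cmdline string) string {
	for _, field := range strings.Fields(cmdline) {
		if value, ok := strings.CutPrefix(field, "runmode="); ok {
			return value
		}
	}
	return "prod" // assumption: default chosen here for illustration only
}

func main() {
	raw, err := os.ReadFile("/proc/cmdline")
	if err != nil {
		panic(err)
	}
	fmt.Println("run mode:", runMode(string(raw)))
}
```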
## Booting a new node
The base image for zos contains a very small subset of tools, plus the boot program. Standing alone, the image is not really useful. On boot and
after initial start of the system, the boot program kicks in and it does the following:
- Detect the boot flist that the node must use to fully start. The default is hard-coded into zos, but this can be overridden by the `flist=` kernel param. The `flist=` kernel param can get deprecated without a warning, since it's a development flag.
- The bootstrap will then mount this flist using 0-fs; this of course requires a working connection to the internet. Hence bootstrap is configured to wait for the `internet` service.
- The flist information (name, and version) is saved under `/tmp/flist.name` and `/tmp/flist.info`.
- The bootstrap makes sure to copy all files in the flist to the proper locations under the system rootfs; this includes `zinit` config files.
- Then zinit is asked to monitor the newly installed services; zinit takes care of those services and makes sure they are properly working at all times.
- Bootstrap unmounts the flist and cleans up before it exits.
- Boot process continues.
## Runtime upgrade of a node
Once the node is up and running, identityd takes over and it does the following:
- It loads the boot info files `/tmp/flist.name` and `/tmp/flist.info`
- If the `flist.name` file does **not** exist, `identityd` will assume the node was booted by means other than an flist (for example an overlay). In that case, identityd will log this and disable live upgrades of the node.
- If the `flist.name` file exists, the flist is monitored on `https://hub.grid.tf` for changes. Any change in the version will initiate a live upgrade routine.
- Once a flist change is detected, identityd will mount the flist and make sure identityd itself is running the latest version. If not, identityd will update itself first before continuing.
- Services that need an update will be gracefully stopped.
- `identityd` will then update all service binaries and config files from the flist, and restart the services properly.
- Services are started again after all binaries have been copied.
## Technical
0-OS is designed to provide maximum uptime for its workloads; rebooting a node should never be required to upgrade any of its components (except when we push a kernel upgrade).
![flow](../../assets/0-OS-upgrade.png)
### Flist layout
The files in the upgrade flist need to be located in the filesystem tree at the same destination they would have in 0-OS. This allows the upgrade code to stay simple and only do a copy from the flist to the root filesystem of the node.
Booting a new node, or updating a node, uses the same flist. Hence, a boot flist must contain all required services for node operation.
Example:
0-OS filesystem:
```
/etc/zinit/identityd.yaml
/etc/zinit/networkd.yaml
/etc/zinit/contd.yaml
/etc/zinit/init/node-ready.sh
/etc/zinit/init
/etc/zinit/redis.yaml
/etc/zinit/storaged.yaml
/etc/zinit/flistd.yaml
/etc/zinit/readme.md
/etc/zinit/internet.yaml
/etc/zinit/containerd.yaml
/etc/zinit/boot.yaml
/etc/zinit/provisiond.yaml
/etc/zinit/node-ready.yaml
/etc/zinit
/etc
/bin/zlf
/bin/provisiond
/bin/flistd
/bin/identityd
/bin/contd
/bin/capacityd
/bin/storaged
/bin/networkd
/bin/internet
/bin
```
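To give an idea of how simple that copy step can be, here is a rough Go sketch that mirrors a mounted flist tree onto the root filesystem. Paths and error handling are simplified and this is not the actual upgrade code; the mount point used below is just an assumption for the example.
```go
package main

import (
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// copyTree copies every regular file under src to the same relative
// location under dst, creating directories as needed.
func copyTree(src, dst string) error {
	return filepath.WalkDir(src, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		target := filepath.Join(dst, rel)
		if d.IsDir() {
			return os.MkdirAll(target, 0755)
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		// Preserve the file mode so binaries stay executable.
		out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, info.Mode())
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	// assumption: the upgrade flist is already mounted at /tmp/upgrade-flist
	if err := copyTree("/tmp/upgrade-flist", "/"); err != nil {
		panic(err)
	}
}
```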

View File

@ -1,88 +0,0 @@
<h1> Internal Modules</h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Booting](#booting)
- [Bootstrap](#bootstrap)
- [Zinit](#zinit)
- [Architecture](#architecture)
- [IPC](#ipc)
- [ZOS Processes (modules)](#zos-processes-modules)
- [Capacity](#capacity)
***
## Introduction
This document explains in a nutshell the internals of ZOS. This includes the boot process, architecture, the internal modules (and their responsibilities), and the inter-process communication.
## Booting
ZOS is a Linux based operating system in the sense that we use the mainstream Linux kernel with no code modifications (but a heavily customized configuration). The base image of ZOS includes Linux, busybox, [zinit](https://github.com/threefoldtech/zinit) and other required tools that are needed during the boot process. The base image also ships with a bootstrap utility that is self-updating on boot and which kick-starts everything.
For more details about the ZOS base image please check [0-initramfs](https://github.com/threefoldtech/0-initramfs).
`ZOS` uses zinit as its `init` or `PID 1` process. `zinit` acts as a process manager and takes care of starting all required services in the right order, using simple configuration files available under `/etc/zinit`.
The base `ZOS` image has a zinit config to start the basic services that are required for booting. These include (mainly) but are not limited to:
- internet: A very basic service that tries to connect zos to the internet as fast (and as simply) as possible (over ethernet) using dhcp. This is needed so the system can continue the boot process. Once this one succeeds, it exits and leaves node network management to the more sophisticated ZOS module `networkd`, which is yet to be downloaded and started by bootstrap.
- redis: This is required by all zos modules for their IPC (inter-process communication).
- bootstrap: The bootstrap process which takes care of downloading all required zos binaries and modules. This one requires the `internet` service to actually succeed.
## Bootstrap
`bootstrap` is a utility that resides on the base image. It takes care of downloading and configuring all zos main services by doing the following:
- It checks if there is a more recent version of itself available. If it exists, the process first updates itself before proceeding.
- It checks zos boot parameters (for example, which network you are booting into) as set by <https://bootstrap.grid.tf/>.
- Once the network is known, let's call it `${network}`. This can either be `production`, `testing`, or `development`. The proper release is downloaded as follows:
- All flists are downloaded from one of the [hub](https://hub.grid.tf/) repos: `tf-zos-v3-bins.dev`, `tf-zos-v3-bins.test`, or `tf-zos-v3-bins`. Based on the network, only one of those repos is used to download all the support tools and binaries. Those are not included in the base image because they can be updated, added, or removed.
- The flist `https://hub.grid.tf/tf-zos/zos:${network}-3:latest.flist.md` is downloaded (note that ${network} is replaced with the actual value). This flist includes all zos services from this repository. More information about the zos modules is given later.
- Once all binaries are downloaded, `bootstrap` finishes by asking zinit to start monitoring the newly installed services. The bootstrap exits and will never be started again as long as zos is running.
- If zos is restarted the entire bootstrap process happens again including downloading the binaries because ZOS is completely stateless (except for some cached runtime data that is preserved across reboots on a cache disk).
## Zinit
As mentioned earlier, `zinit` is the process manager of zos. Bootstrap makes sure it registers all zos services for zinit to monitor. This means that zinit will take care that those services are always running, and restart them if they have crashed for any reason.
## Architecture
For `ZOS` to be able to run workloads of different types, its functionality is split into smaller modules, where each module is responsible for providing a single function. For example, `storaged` manages the machine's storage and can therefore provide low-level storage capacity to other services that need it.
As an example, imagine that you want to start a `virtual machine`. For a `virtual machine` to run, it requires a `rootfs` image or the image of the VM itself, normally provided via an `flist` (managed by `flistd`); actual persistent storage (managed by `storaged`); a virtual NIC (managed by `networkd`); and another service that puts everything together in the form of a VM (`vmd`). Finally, there is a service that orchestrates all of this and translates the user request into an actual workload: `provisiond`. You get the picture.
### IPC
All modules running in zos need to be able to interact with each other, as the previous example shows: the `provision` daemon needs to be able to ask the `storage` daemon to prepare a virtual disk. A new inter-process communication protocol and library was developed to enable this, with these extra features:
- Modules do not need to know where other modules live; there are no ports or URLs that have to be known by all services.
- A single module can run multiple versions of an API.
- Ease of development.
- Auto generated clients.
For more details about the message bus please check [zbus](https://github.com/threefoldtech/zbus)
`zbus` uses redis as a message bus, hence redis is started in the early stages of zos booting.
`zbus` allows auto generation of `stubs`, which are generated clients against a certain module interface. Hence a module X can interact with a module Y by importing the generated clients and then making function calls.
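To give a feel for the pattern only (this is **not** the actual zbus API or wire format), here is a toy Go example of a request being queued for another module over a redis list, which is conceptually what happens when one module calls another through its generated stub; the queue name is made up for the example.
```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// "Client" side: push a request onto the queue of the target module.
	if err := rdb.LPush(ctx, "storaged.requests", "disk.allocate 10GiB").Err(); err != nil {
		panic(err)
	}

	// "Server" side: the target module pops the request and handles it.
	req, err := rdb.BRPop(ctx, 0, "storaged.requests").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("storaged received:", req[1]) // req[0] is the queue name
}
```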
## ZOS Processes (modules)
Modules of zos are completely internal. There is no way for an external user to talk to them directly. The idea is that the node exposes a public API over rmb, while internally this API talks to the internal modules over `zbus`.
Here is a list of the major ZOS modules.
- [Identity](identity/index.md)
- [Node](node/index.md)
- [Storage](storage/index.md)
- [Network](network/index.md)
- [Flist](flist/index.md)
- [Container](container/index.md)
- [VM](vmd/index.md)
- [Provision](provision/index.md)
## Capacity
In [this document](./capacity.md), you can find a detailed description of how ZOS does capacity planning.

View File

@ -1,57 +0,0 @@
> Note: This is unmaintained, try on your own responsibility
# MacOS Developer
0-OS (v2) uses a Linux kernel and is really built with a Linux environment in mind.
As a developer working from a MacOS environment you will have trouble running the 0-OS code.
Using [Docker][docker] you can work from a Linux development environment, hosted from your MacOS Host machine.
In this README we'll do exactly that using the standard Ubuntu [Docker][docker] container as our base.
## Setup
0. Make sure to have Docker installed, and configured (also make sure you have your code folder path shared in your Docker preferences).
1. Start an _Ubuntu_ Docker container with your shared code directory mounted as a volume:
```bash
docker run -ti -v "$HOME/oss":/oss ubuntu /bin/bash
```
2. Make sure your environment is updated and upgraded using `apt-get`.
3. Install Go (`1.13`) from the official binary tarball using the following link, or the one you find on [the downloads page](https://golang.org/dl/):
```bash
wget https://dl.google.com/go/go1.13.3.linux-amd64.tar.gz
sudo tar -xvf go1.13.3.linux-amd64.tar.gz
sudo mv go /usr/local
```
4. Add the following to your `$HOME/.bashrc` and `source` it:
```vim
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
```
5. Confirm you have Go installed correctly:
```
go version && go env
```
6. Go to your `zos` code `pkg` directory hosted from your MacOS development machine within your docker `/bin/bash`:
```bash
cd /oss/github.com/threefoldtech/zos/pkg
```
7. Install the dependencies for testing:
```bash
make getdeps
```
8. Run tests and verify all works as expected:
```bash
make test
```
9. Build `zos`:
```bash
make build
```
If you can successfully do steps (8) and (9), you
can now contribute to `zos` as a MacOS developer.
Testing and compiling you'll do from within your container's shell;
coding you can do from your beloved IDE in your MacOS development environment.
[docker]: https://www.docker.com

View File

@ -1,66 +0,0 @@
## Farmers providing transit for Tenant Networks (TN or Network)
For networks of a user to be reachable, these networks need penultimate Network resources that act as exit nodes for the WireGuard mesh.
For that, users need to solicit a routable network from farmers that provide such a service.
### Global registry for network resources. (`GRNR`?)
ThreeFold, through BCDB, should keep a store where farmers can also register a network service for Tenant Network (TN) reachability.
In a network transaction, the first thing asked should be where a user wants to purchase their transit. That can be with a nearby (latency or geolocation) Exit Provider (which can e.g. be a farmer), or with an Exit Provider outside of that geolocation for easier routing towards the primary entry point (VPN-like services come to mind).
With this, we could envision in a later stage having the Network Resources be IPv6 multihomed with policy-based routing. That adds the possibility of having multiple exit nodes for the same Network, with different IPv6 routes to them.
### Datastructure
A registered Farmer can also register his (dc-located?) network to be sold as transit space. For that he registers:
- the IPv4 addresses that can be allocated to exit nodes.
- the IPv6 prefix he obtained to be used in the Grid
- the nodes that will serve as exit nodes.
These nodes need to have IPv[46] access to routable address space through:
- Physical access in an interface of the node
- Access on a public `vlan` or via `vxlan / mpls / gre`
Together with the registered nodes that will be part of that Public segment, the TNoDB (BCDB) can verify a Network Object containing an ExitPoint for a Network and add it to the queue for ExitNodes to fetch and apply.
Physically, nodes can be connected in several ways:
- living directly on the Internet (with a routable IPv4 and/or IPv6 Address) without Provider-enforced firewalling (outgoing traffic only)
- having an IPv4 allocation --and-- an IPv6 allocation
- having a single IPv4 address --and-- a single IPv6 allocation (/64) or even (Oh God Why) a single IPv6 addr.
- living in a Farm that has Nodes only reachable through NAT for IPv4 and no IPv6
- living in a Farm that has NAT IPv4 and routable IPv6 with an allocation
- living in a single-segment having IPv4 RFC1918 and only one IPv6 /64 prefix (home Nodes mostly)
#### A Network resource allocation.
We define a Network Resource (NR) as a routable IPv6 `/64` prefix. So every time a new TNo is generated and validated, containing a new serial number and an added/removed NR, there has been a request to obtain a valid IPv6 prefix (/64) to be added to the TNo.
Basically it's just a list of allocations in that prefix, that are in use. Any free Prefix will do, as we do routing in the exit nodes with a `/64` granularity.
The TNoDB (BCDB) then validates/updates the Tenant Network object with that new Network Resource and places it on a queue to be fetched by the interested Nodes.
#### The Nodes responsible for ExitPoints
A node responsible for ExitPoints as well as a public endpoint will know so because of how it is registered in the TNoDB (BCDB). That is:
- it is defined as an exit node
- the TNoDB hands out an Object that describes its public connectivity, i.e.:
- the public IPv4 address(es) it can use
- the IPv6 Prefix in the network segment that contains the penultimate default route
- an eventual Private BGP AS number for announcing the `/64` Prefixes of a Tenant Network, and the BGP peer(s).
With that information, a Node can then build the Network Namespace from which it builds the Wireguard interfaces, prior to sending them into the ExitPoint Namespace.
So the TNoDB (BCDB) hands out
- Tenant Network Objects
- Public Interface Objects
They are related :
- A Node can have Network Resources
- A Network Resource can have (1) Public Interface
- Both are part of a Tenant Network
A TNo defines a Network where ONLY the ExitPoint is flagged as being one. No more.
When the Node (networkd) needs to set up a Public node, it will need to act differently.
- Verify if the Node is **really** public, if so use standard WG interface setup
- If not, verify whether there is already a Public Exit Namespace defined, and create the WG interface there.
- If there is no Public Exit Namespace yet, request one, and set it up first.

View File

@ -1,264 +0,0 @@
# Network
- [How does a farmer configure a node as exit node](#How-does-a-farmer-configure-a-node-as-exit-node)
- [How to create a user private network](#How-to-create-a-user-private-network)
## How does a farmer configure a node as exit node
For the network of the grid to work properly, some of the nodes in the grid need to be configured as "exit nodes". An "exit node" is a node that has a publicly accessible IP address and that is responsible for routing IPv6 traffic, or proxying IPv4 traffic.
A farmer that wants to configure one of his nodes as "exit node", needs to register it in the TNODB. The node will then automatically detect it has been configured to be an exit node and do the necessary network configuration to start acting as one.
At the current state of the development, we have a [TNODB mock](../../tools/tnodb_mock) server and a [tffarmer CLI](../../tools/tffarm) tool that can be used to do these configurations.
Here is an example of how a farmer could register one of his node as "exit node":
1. Farmer needs to create its farm identity
```bash
tffarmer register --seed myfarm.seed "mytestfarm"
Farm registered successfully
Name: mytestfarm
Identity: ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=
```
2. Boot your nodes with your farm identity specified in the kernel parameters.
Take the farm identity created at step 1 and boot your node with the kernel parameter `farmer_id=<identity>`.
For your test farm that would be `farmer_id=ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=`
Once the node is booted, it will automatically register itself as being part of your farm into the [TNODB](../../tools/tnodb_mock) server.
You can verify that your node registered itself properly by listing all the nodes from the TNODB with a GET request on the `/nodes` endpoint:
```bash
curl http://tnodb_addr/nodes
[{"node_id":"kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=","farm_id":"ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=","Ifaces":[]}]
```
3. Farmer needs to specify its public allocation range to the TNODB
```bash
tffarmer give-alloc 2a02:2788:0000::/32 --seed myfarm.seed
prefix registered successfully
```
4. Configure the public interface of the exit node if needed
In this step the farmer will tell his node how it needs to connect to the public internet. This configuration depends on the farm network setup, which is why it is up to the farmer to provide the details on how the node needs to configure itself.
In a first phase, we create the internet access in 2 ways:
- the node is fully public: you don't need to configure a public interface, you can skip this step
- the node has a management interface and a NIC for public traffic:
then `configure-public` is required, and the farmer has the public interface connected to a specific public segment with a router to the internet in front.
```bash
tffarmer configure-public --ip 172.20.0.2/24 --gw 172.20.0.1 --iface eth1 kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=
#public interface configured on node kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=
```
We still need to figure out a way to get the routes properly installed; for the demo we'll use static routes on the top-level router for now.
The node is now configured to be used as an exit node.
5. Mark a node as being an exit node
The farmer then needs to select which node he agrees to use as an exit node for the grid
```bash
tffarmer select-exit kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=
#Node kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA= marked as exit node
```
## How to create a user private network
1. Choose an exit node
2. Request a new allocation from the farm of the exit node
- a GET request on the tnodb_mock at `/allocations/{farm_id}` will give you a new allocation
3. Create the network schema
Steps 1 and 2 are easy enough to be done even manually, but step 3 requires deep knowledge of how networking works
as well as the specific requirements of the 0-OS network system.
This is why we provide a tool that simplifies this process for you, [tfuser](../../tools/tfuser).
Using tfuser, creating a network becomes trivial:
```bash
# creates a new network with node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk as exit node
# and output the result into network.json
tfuser generate --schema network.json network create --node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk
```
network.json will now contain something like:
```json
{
"id": "",
"tenant": "",
"reply-to": "",
"type": "network",
"data": {
"network_id": "J1UHHAizuCU6s9jPax1i1TUhUEQzWkKiPhBA452RagEp",
"resources": [
{
"node_id": {
"id": "DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk",
"farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi",
"reachability_v4": "public",
"reachability_v6": "public"
},
"prefix": "2001:b:a:8ac6::/64",
"link_local": "fe80::8ac6/64",
"peers": [
{
"type": "wireguard",
"prefix": "2001:b:a:8ac6::/64",
"Connection": {
"ip": "2a02:1802:5e::223",
"port": 1600,
"key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=",
"private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662"
}
}
],
"exit_point": true
}
],
"prefix_zero": "2001:b:a::/64",
"exit_point": {
"ipv4_conf": null,
"ipv4_dnat": null,
"ipv6_conf": {
"addr": "fe80::8ac6/64",
"gateway": "fe80::1",
"metric": 0,
"iface": "public"
},
"ipv6_allow": []
},
"allocation_nr": 0,
"version": 0
}
}
```
This is a valid network schema. It only contains a single exit node though, so it is not really useful yet.
Let's add another node to the network:
```bash
tfuser generate --schema network.json network add-node --node 4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4
```
The result looks like:
```json
{
"id": "",
"tenant": "",
"reply-to": "",
"type": "network",
"data": {
"network_id": "J1UHHAizuCU6s9jPax1i1TUhUEQzWkKiPhBA452RagEp",
"resources": [
{
"node_id": {
"id": "DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk",
"farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi",
"reachability_v4": "public",
"reachability_v6": "public"
},
"prefix": "2001:b:a:8ac6::/64",
"link_local": "fe80::8ac6/64",
"peers": [
{
"type": "wireguard",
"prefix": "2001:b:a:8ac6::/64",
"Connection": {
"ip": "2a02:1802:5e::223",
"port": 1600,
"key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=",
"private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662"
}
},
{
"type": "wireguard",
"prefix": "2001:b:a:b744::/64",
"Connection": {
"ip": "<nil>",
"port": 0,
"key": "3auHJw3XHFBiaI34C9pB/rmbomW3yQlItLD4YSzRvwc=",
"private_key": "96dc64ff11d05e8860272b91bf09d52d306b8ad71e5c010c0ccbcc8d8d8f602c57a30e786d0299731b86908382e4ea5a82f15b41ebe6ce09a61cfb8373d2024c55786be3ecad21fe0ee100339b5fa904961fbbbd25699198c1da86c5"
}
}
],
"exit_point": true
},
{
"node_id": {
"id": "4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4",
"farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi",
"reachability_v4": "hidden",
"reachability_v6": "hidden"
},
"prefix": "2001:b:a:b744::/64",
"link_local": "fe80::b744/64",
"peers": [
{
"type": "wireguard",
"prefix": "2001:b:a:8ac6::/64",
"Connection": {
"ip": "2a02:1802:5e::223",
"port": 1600,
"key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=",
"private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662"
}
},
{
"type": "wireguard",
"prefix": "2001:b:a:b744::/64",
"Connection": {
"ip": "<nil>",
"port": 0,
"key": "3auHJw3XHFBiaI34C9pB/rmbomW3yQlItLD4YSzRvwc=",
"private_key": "96dc64ff11d05e8860272b91bf09d52d306b8ad71e5c010c0ccbcc8d8d8f602c57a30e786d0299731b86908382e4ea5a82f15b41ebe6ce09a61cfb8373d2024c55786be3ecad21fe0ee100339b5fa904961fbbbd25699198c1da86c5"
}
}
],
"exit_point": false
}
],
"prefix_zero": "2001:b:a::/64",
"exit_point": {
"ipv4_conf": null,
"ipv4_dnat": null,
"ipv6_conf": {
"addr": "fe80::8ac6/64",
"gateway": "fe80::1",
"metric": 0,
"iface": "public"
},
"ipv6_allow": []
},
"allocation_nr": 0,
"version": 1
}
}
```
Our network schema is now ready, but before we can provision it onto a node, we need to sign it and send it to the bcdb.
To be able to sign it we need a key pair. You can use the `tfuser id` command to create an identity:
```bash
tfuser id --output user.seed
```
We can now provision the network on both nodes:
```bash
tfuser provision --schema network.json \
--node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk \
--node 4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4 \
--seed user.seed
```

View File

@ -1,54 +0,0 @@
#!/usr/bin/bash
mgmtnic=(
0c:c4:7a:51:e3:6a
0c:c4:7a:51:e9:e6
0c:c4:7a:51:ea:18
0c:c4:7a:51:e3:78
0c:c4:7a:51:e7:f8
0c:c4:7a:51:e8:ba
0c:c4:7a:51:e8:0c
0c:c4:7a:51:e7:fa
)
ipminic=(
0c:c4:7a:4c:f3:b6
0c:c4:7a:4d:02:8c
0c:c4:7a:4d:02:91
0c:c4:7a:4d:02:62
0c:c4:7a:4c:f3:7e
0c:c4:7a:4d:02:98
0c:c4:7a:4d:02:19
0c:c4:7a:4c:f2:e0
)
cnt=1
for i in ${mgmtnic[*]} ; do
cat << EOF
config host
option name 'zosv2tst-${cnt}'
option dns '1'
option mac '${i}'
option ip '10.5.0.$((${cnt} + 10))'
EOF
let cnt++
done
cnt=1
for i in ${ipminic[*]} ; do
cat << EOF
config host
option name 'ipmiv2tst-${cnt}'
option dns '1'
option mac '${i}'
option ip '10.5.0.$((${cnt} + 100))'
EOF
let cnt++
done
for i in ${mgmtnic[*]} ; do
echo ln -s zoststconf 01-$(echo $i | sed s/:/-/g)
done

View File

@ -1,35 +0,0 @@
<h1> Definitions</h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Node](#node)
- [TNo : Tenant Network Object](#tno--tenant-network-object)
- [NR: Network Resource](#nr-network-resource)
***
## Introduction
We present definitions of words used through the documentation.
## Node
TL;DR: Computer.
A Node is a computer with CPU, Memory, Disks (or SSD's, NVMe) connected to _A_ network that has Internet access. (i.e. it can reach www.google.com, just like you on your phone, at home)
That Node will, once it has received an IP address (IPv4 or IPv6), register itself when it is new, or confirm its identity and its online-ness (for lack of a better word).
## TNo : Tenant Network Object
TL;DR: The Network Description.
We named it so, because it is a data structure that describes the __whole__ network a user can request (or setup).
That network is a virtualized overlay network.
Basically that means that transfer of data in that network *always* is encrypted, protected from prying eyes, and __resources in that network can only communicate with each other__ **unless** there is a special rule that allows access. Be it by allowing access through firewall rules, *and/or* through a proxy (a service that forwards requests on behalf of, and ships replies back to the client).
## NR: Network Resource
TL;DR: the Node-local part of a TNo.
The main building block of a TNo; i.e. each service of a user in a Node lives in an NR.
Each Node hosts User services, whatever type of service that is. Every service in that specific node will always be solely part of the Tenant's Network. (read that twice).
So: A Network Resource is the thing that interconnects all other network resources of the TN (Tenant Network), and provides routing/firewalling for these interconnects, including the default route to the BBI (Big Bad Internet), aka ExitPoint.
All User services that run in a Node are in some way or another connected to the Network Resource (NR), which will provide ip packet forwarding and firewalling to all other network resources (including the Exitpoint) of the TN (Tenant Network) of the user. (read that three times, and the last time, read it slowly and out loud)

View File

@ -1,74 +0,0 @@
# 0-OS v2 and its network setup
## Introduction
0-OS nodes participating in the ThreeFold grid need connectivity, of course. They need to be able to communicate over
the Internet with each other in order to do various things:
- download its OS modules
- perform OS module upgrades
- register itself to the grid, and send regular updates about its status
- query the grid for tasks to execute
- build and run the Overlay Network
- download flists and the effective files to cache
The nodes themselves can have connectivity in a few different ways:
- Only have RFC1918 private addresses, connected to the Internet through NAT, NO IPv6
Mostly, these are single-NIC (network card) machines that can host some workloads through the Overlay Network, but
can't expose services directly. These are HIDDEN nodes, and are mostly booted with a USB stick from
bootstrap.grid.tf.
- Dual-stacked: having RFC1918 private IPv4 and public IPv6 , where the IPv6 addresses are received from a home router,
but firewalled for outgoing traffic only. These nodes are effectively also HIDDEN
- Nodes with 2 NICs, one that has effectively a NIC connected to a segment that has real public
addresses (IPv4 and/or IPv6) and one NIC that is used for booting and local
management. (OOB) (like in the drawing for farmer setup)
For Farmers, we need to have Nodes to be reachable over IPv6, so that the nodes can:
- expose services to be proxied into containers/vms
- act as aggregating nodes for Overlay Networks for HIDDEN Nodes
Some Nodes in Farms should also have a publicly reachable IPv4, to make sure that clients that only have IPv4 can
effectively reach exposed services.
But we need to stress the importance of IPv6 availability when you're running a multi-node farm in a datacentre: as the
grid is boldly claiming to be a new Internet, we should make sure we adhere to the new protocols that are future-proof.
Hence: IPv6 is the base, and IPv4 is just there to accommodate the transition.
Nowadays, RIPE can't even hand out consecutive /22 IPv4 blocks any more for new LIRs, so you'll be bound to the market to
get IPv4, mostly at rates of 10-15 Euro per IP. Things tend to get costly that way.
So anyway, IPv6 is not an afterthought in 0-OS, we're starting with it.
## Network setup for farmers
This is a quick manual on what is needed to connect a node running Zero-OS v2.0.
### Step 1. Testing for IPv6 availability in your location
As described above, the network in which the node is installed has to be IPv6 enabled. This is not an afterthought: as we are building a new internet, it has to be based on the new and forward-looking IP addressing scheme. This is something you have to investigate and negotiate with your connectivity provider. Many (but not all) home connectivity products and certainly most datacenters can provide you with IPv6. There are many sources of information on how to test and check whether your connection is IPv6 enabled; [here is a starting point](http://www.ipv6enabled.org/ipv6_enabled/ipv6_enable.php)
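As a quick sanity check (just one of many possible methods), you can test IPv6 connectivity from a machine in the target network with something like the commands below; depending on your distribution the first command may be `ping6` instead of `ping -6`.
```bash
# Ping a well-known host over IPv6
ping -6 -c 3 google.com

# Or fetch your public IPv6 address; if this returns an address, IPv6 works
curl -6 https://ifconfig.co
```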
### Step 2. Choosing your setup for connecting your nodes
Once you have established that you have IPv6 enabled on the network you are about to deploy on, you have to make sure that there is an IPv6 DHCP facility available. Zero-OS does not work with static IPv6 addresses (at this point in time). So you have to choose and create one of the following setups:
#### 2.1 Home setup
Use your (home) ISP router's IPv6 DHCP capabilities to provide (private) IPv6 addresses. The principle works the same as for IPv4 home connections: everything is enabled by Network Address Translation (just like anything else that uses internet connectivity). This should be relatively straightforward if you have established that your connection has IPv6 enabled.
#### 2.2 Datacenter / Expert setup
In this situation there are many options on how to set up your node. This requires you, as the expert, to make a few decisions on how to connect and what the best setup is that you can support for the operational lifetime of your farm. The same basic principles apply:
- You have to have a block of (public) IPv6 routed to your router, or you have to have your router set up to provide Network Address Translation (NAT)
- You have to have a DHCP server in your network that manages and controls IPv6 address leases. Depending on your specific setup, you either have this DHCP server manage a public IPv6 range, which makes all nodes directly connected to the public internet, or you have this DHCP server manage a private block of IPv6 addresses, which makes all your nodes connect to the internet through NAT.
As a farmer you are in charge of selecting and creating the appropriate network setup for your farm.
## General notes
The above setup will allow your node(s) to appear in the explorer on the TF Grid and will allow you to earn farming tokens. As stated in the introduction, ThreeFold is creating next-generation internet capacity and therefore has IPv6 as its base building block. Connecting to the current (dominant) IPv4 network happens for IT workloads through so-called web gateways. As the word says, these are gateways that provide connectivity between the currently leading IPv4 addressing scheme and IPv6.
We have started a forum where people share their experiences and configurations. This will be work in progress and forever growing.
**IMPORTANT**: You as a farmer do not need access to IPv4 to be able to rent out capacity for IT workloads that need to be visible on IPv4; this is something that can happen elsewhere on the TF Grid.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 61 KiB

View File

@ -1,87 +0,0 @@
<h1> Introduction to Networkd</h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Boot and initial setup](#boot-and-initial-setup)
- [Networkd functionality](#networkd-functionality)
- [Techie talk](#techie-talk)
- [Wireguard explanations](#wireguard-explanations)
- [Caveats](#caveats)
***
## Introduction
We provide an introduction to Networkd, the network manager of 0-OS.
## Boot and initial setup
At boot, be it from a USB stick or PXE, ZOS starts up the kernel with a few necessary parameters like the farm ID and/or possible network parameters. Basically, once the kernel has started, [zinit](https://github.com/threefoldtech/zinit), among other things, starts the network initializer.
In short, that process loops over the available network interfaces and tries to obtain an IP address that also provides a default gateway. That means: it tries to get Internet connectivity. Without it, ZOS stops there: unable to register itself or start other processes, there wouldn't be any use in starting anyway.
Once it has obtained Internet connectivity, ZOS can then proceed to make itself known to the Grid and acknowledge its existence. It will then regularly poll the Grid for tasks.
Once initialized, with the network daemon running (a process that will handle all things related to networking), ZOS will set up some basic services so that workloads can themselves use that network.
## Networkd functionality
The network daemon is in itself responsible for a few tasks, and working together with the [provision daemon](../provision) it mainly sets up the local infrastructure to get the user network resources, together with the wireguard configurations for the user's mesh network.
The Wireguard mesh is an overlay network. That means that traffic of that network is encrypted and encapsulated in a new traffic frame that then gets transferred over the underlay network, here in essence the network that has been set up during boot of the node.
For users or workloads that run on top of the mesh, the mesh network looks and behaves like any other directly connected network, and as such a workload can reach other workloads or services in that mesh, with the added advantage that the traffic is encrypted, protecting services and communications over that mesh from too curious eyes.
That also means that traffic between workloads on nodes in a farmer's local network is even protected from the farmer themselves, in essence protecting the user from a farmer who might become too curious.
As the nodes do not have any way to be accessed, be it over the underlaying network or even the local console of the node, a user can be sure that his workload cannot be snooped upon.
## Techie talk
- **boot and initial setup**
For ZOS to work at all (the network is the computer), it needs an internet connection. That is: it needs to be able to communicate with the BCDB over the internet.
So ZOS starts with that: the `internet` process, which tries to get the node to receive an IP address. That process will have set up a bridge (`zos`), connected to an interface that is on an Internet-capable network. That bridge will have an IP address that has Internet access.
Also, that bridge is there for future public interfaces into workloads.
Once ZOS can reach the Internet, the rest of the system can be started, where ultimately, the `networkd` daemon is started.
- **networkd initial setup**
`networkd` starts by taking inventory of the available network interfaces and registers them in the BCDB (grid database), so that farmers can specify non-standard configs, e.g. for multi-NIC machines. Once that is done, `networkd` registers itself on the zbus, so it can receive tasks to execute from the provisioning daemon (`provisiond`).
These tasks are mostly setting up network resources for users, where a network resource is a subnet in the user's wireguard mesh.
- **multi-nic setups**
For a farmer operating nodes in a datacentre where the nodes have multiple NICs, it is advisable (though not necessary) to separate OOB traffic (like the initial boot setup) from user traffic (both the overlay network and the outgoing IPv4 NAT for nodes) onto a different NIC. With such a setup, the farmer will have to make sure their switches are properly configured; more on that in later docs.
- **registering and configurations**
Once a node has booted and properly initialized, registering and configuring the node to be able to accept workloads and their associated network configs, is a two-step process.
First, the node registers its live network setup to the BCDB. That is: all NICs with their associated IP addresses and routes are registered, so that a farm admin can in a second phase configure any separate NICs to handle different kinds of workloads.
In that second phase, a farm admin can then set up the NICs and their associated IPs manually, so that workloads can start using them.
## Wireguard explanations
- **wireguard as pointopoint links and what that means**
Wireguard is a special type of VPN, where every instance is both a server for multiple peers and a client towards multiple peers. That way you can create fanning-out connections as well as receive connections from multiple peers, effectively creating a mesh of connections, like this: ![like so](HIDDEN-PUBLIC.png)
- **wireguard port management**
Every wireguard point (a network resource point) needs a destination/port combo when it's publicly reachable. The destination is a public ip, but the port is the differentiator. So we need to make sure every network wireguard listening port is unique in the node where it runs, and can be reapplied in case of a node's reboot.
ZOS registers the ports **already in use** in the BCDB, so a user can then pick a port that is not yet used (see the small sketch at the end of this section).
- **wireguard and hidden nodes**
Hidden nodes are nodes that are in essence hidden behind a firewall on an internal network and unreachable from the Internet, be it as an IPv4 NATed host or an IPv6 host that is firewalled in any way, where it is impossible to have connection initiations from the Internet to the node.
As such, these nodes can only partake in a network as client-only towards publicly reachable peers, and can only initiate the connections themselves. (ref previous drawing).
To make sure connectivity stays up, the clients (all) have a keepalive towards all their peers so that communications towards network resources in hidden nodes can be established.
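As a toy illustration of that port bookkeeping (not the algorithm zos actually uses), picking a free listening port given the set of ports already registered could look like this:
```go
package main

import (
	"errors"
	"fmt"
)

// freePort returns the first port in [start, end) that is not
// already registered as in use.
func freePort(used map[int]bool, start, end int) (int, error) {
	for p := start; p < end; p++ {
		if !used[p] {
			return p, nil
		}
	}
	return 0, errors.New("no free wireguard port in range")
}

func main() {
	// assumption: this set would come from the ports registered in BCDB
	used := map[int]bool{51820: true, 51821: true}
	port, err := freePort(used, 51820, 52000)
	if err != nil {
		panic(err)
	}
	fmt.Println("use wireguard listen port:", port)
}
```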
## Caveats
- **hidden nodes**
Hidden nodes live (mostly) behind firewalls that keep state about connections, and these states have a lifetime. We try our best to keep these communications going, but depending on the firewall, your mileage may vary (YMMV ;-))
- **local underlay network reachability**
When multiple nodes live in the same hidden network, at the moment we don't try to have the nodes establish connectivity between themselves, so all nodes in that hidden network can only reach each other through the intermediary of a node that is publicly reachable. So to get some performance, a farmer will have to have really routable nodes available in the vicinity.
For now, a farmer is better off having their nodes really reachable over a public network.
- **IPv6 and IPv4 considerations**
While the mesh can work over IPv4 __and__ IPv6 at the same time, each peer can only be reached through one protocol at a time. That is, a peer is IPv4 __or__ IPv6, not both. Hence if a peer is reachable over IPv4, the client towards that peer needs to reach it over IPv4 too and thus needs an IPv4 address.
We strongly advise having all nodes properly set up on a routable, unfirewalled IPv6 network, so that these problems have no reason to exist.

View File

@ -1,134 +0,0 @@
<h1> Zero-Mesh</h1>
<h2> Table of Contents </h2>
- [What It Is](#what-it-is)
- [Overlay Network](#overlay-network)
- [ZOS networkd](#zos-networkd)
- [Internet reachability per Network Resource](#internet-reachability-per-network-resource)
- [Interworkings](#interworkings)
- [Network Resource Internals](#network-resource-internals)
***
## What It Is
When a user wants to deploy a workload, whatever that may be, that workload needs connectivity.
If there is just one service to be run, things can be simple, but in general there is more than one service that needs to interact with others to provide a full stack. Sometimes these services can live on one node, but mostly these services will be deployed over multiple nodes, in different containers.
The Mesh is created for that, where containers can communicate over an encrypted path, and that network can be specified in terms of IP addresses by the user.
## Overlay Network
Zero-Mesh is an overlay network. That requires that nodes have a properly working network with existing access to the Internet in the first place, be it full-blown public access or behind a firewall/home router that provides private-IP NAT to the internet.
Right now Zero-Mesh has support for both, where nodes behind a firewall are HIDDEN nodes, and nodes that are directly connected, be it over IPv6 or IPv4, are 'normal' nodes.
Hidden nodes can thus only be participating as client nodes for a specific user Mesh, and all publicly reachable nodes can act as aggregators for hidden clients in that user Mesh.
Also, a Mesh is static: once it is configured, and thus during the lifetime of the network, there is one node containing the aggregator for Mesh clients that live on hidden nodes. So if an aggregator node dies or is not reachable any more, the mesh needs to be reapplied, with __some__ other publicly reachable node as aggregator node.
So it goes a bit like ![this](HIDDEN-PUBLIC.png)
The NR labeled Exit in that graph is the point that Network Resources in hidden nodes connect to. These Exit NRs are then the transfer points between hidden NRs.
## ZOS networkd
The networkd daemon receives tasks from the provisioning daemon, so that it can create the necessary resources for a Mesh participator in the User Network (A network Resource - NR).
A network is defined as a whole by the User, using the tools in the 3bot to generate a proper configuration that can be used by the network daemon.
What networkd takes care of, is the establishment of the mesh itself, in accordance with the configuration a farmer has given to his nodes. What is configured on top of the Mesh is user defined, and applied as such by the networkd.
## Internet reachability per Network Resource
Every node that participates in a User mesh will also provide Internet access for every network resource.
That means that every NR has the same Internet access as the node itself. Which also means, in terms of security, that a firewall in the node takes care of blocking all types of entry to the NR, effectively being an Internet access diode, for outgoing and related traffic only.
In a later phase a user will be able to define some network resource as __sole__ outgoing Internet Access point, but for now that is not yet defined.
## Interworkings
So how is that set up?
Every node participating in a User Network, sets up a Network Resource.
Basically, it's a Linux Network Namespace (sort of a network virtual machine), that contains a wireguard interface that has a list of other Network resources it needs to route encrypted packets toward.
A User Network typically has a range of a `/16` (like `10.1.0.0/16`) that is user defined. The user then picks a subnet from that range (e.g. `10.1.1.0/24`) to assign to every new NR they want to participate in that Network.
Workloads that are then provisioned are started in a newly created Container, and that container gets a User assigned IP __in__ that subnet of the Network Resource.
The Network resource itself then handles the routing and firewalling for the containers that are connected to it. Also, the Network Resource takes care of internet connectivity, so that the container can reach out to other services on the Internet.
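As a minimal sketch (not part of the zos tooling), deriving the n-th `/24` network resource subnet out of a user's `/16` range could be done like this:
```go
package main

import (
	"fmt"
	"net"
)

// nrSubnet returns the nth /24 subnet inside a /16 user network range.
// n is assumed to be in the range 0..255 for this illustration.
func nrSubnet(userRange string, n int) (*net.IPNet, error) {
	_, ipnet, err := net.ParseCIDR(userRange)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("expected an IPv4 range, got %s", userRange)
	}
	// Bump the third octet to select the nth /24 inside the /16.
	sub := net.IPv4(ip[0], ip[1], byte(n), 0)
	return &net.IPNet{IP: sub, Mask: net.CIDRMask(24, 32)}, nil
}

func main() {
	subnet, err := nrSubnet("10.1.0.0/16", 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(subnet) // prints 10.1.1.0/24
}
```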
![like this](NR_layout.png)
Also in a later phase, a User will be able to add IPv6 prefixes to his Network Resources, so that containers are reachable over IPv6.
Fully-routed IPv6 will then be available, where an Exit NR will be the entrypoint towards that network.
## Network Resource Internals
Each NR is basically a router for the User Network, but to allow NRs to access the Internet through the Node's local connection, there are some other internal routers to be added.
Internally it looks like this :
```text
+------------------------------------------------------------------------------+
| |wg mesh |
| +-------------+ +-----+-------+ |
| | | | NR cust1 | 100.64.0.123/16 |
| | container +----------+ 10.3.1.0/24 +----------------------+ |
| | cust1 | veth| | public | |
| +-------------+ +-------------+ | |
| | |
| +-------------+ +-------------+ | |
| | | | NR cust200 | 100.64.4.200/16 | |
| | container +----------+ 10.3.1.0/24 +----------------------+ |
| | cust200 | veth| | public | |
| +-------------+ +------+------+ | |
| |wg mesh | |
| 10.101.123.34/16 | |
| +------------+ |tonrs |
| | | +------------------+ |
| | zos +------+ | 100.64.0.1/16 | |
| | | | 10.101.12.231/16| ndmz | |
| +---+--------+ NIC +-----------------------------+ | |
| | | public +------------------+ |
| +--------+------+ |
| | |
| | |
+------------------------------------------------------------------------------+
|
|
|
| 10.101.0.0/16 10.101.0.1
+------------------+------------------------------------------------------------
NAT
--------
rules NR custA
nft add rule inet nat postrouting oifname public masquerade
nft add rule inet filter input iifname public ct state { established, related } accept
nft add rule inet filter input iifname public drop
rules NR custB
nft add rule inet nat postrouting oifname public masquerade
nft add rule inet filter input iifname public ct state { established, related } accept
nft add rule inet filter input iifname public drop
rules ndmz
nft add rule inet nat postrouting oifname public masquerade
nft add rule inet filter input iifname public ct state { established, related } accept
nft add rule inet filter input iifname public drop
Routing
if NR only needs to get out:
ip route add default via 100.64.0.1 dev public
if an NR wants to use another NR as exitpoint
ip route add default via destnr
with for AllowedIPs 0.0.0.0/0 on that wg peer
```
During startup of the node, the ndmz is put in place, following the configuration: either the node has a single internet connection, or, with a dual-NIC setup, a separate NIC is used for internet access.
The ndmz network has the carrier-grade NAT allocation assigned, so we don't interfere with the RFC1918 private IPv4 address space; users can use any of those ranges (but not any of `100.64.0.0/10`, of course).

View File

@ -1,315 +0,0 @@
# 0-OS v2 and its network
## Introduction
0-OS nodes participating in the ThreeFold grid need connectivity, of course. They need to be able to communicate over
the Internet with each other in order to do various things:
- download its OS modules
- perform OS module upgrades
- register itself to the grid, and send regular updates about its status
- query the grid for tasks to execute
- build and run the Overlay Network
- download flists and the effective files to cache
The nodes themselves can have connectivity in a few different ways:
- Only have RFC1918 private addresses, connected to the Internet through NAT, NO IPv6
Mostly, these are single-NIC (network card) machines that can host some workloads through the Overlay Network, but
can't expose services directly. These are HIDDEN nodes, and are mostly booted with a USB stick from
bootstrap.grid.tf.
- Dual-stacked: having RFC1918 private IPv4 and public IPv6 , where the IPv6 addresses are received from a home router,
but firewalled for outgoing traffic only. These nodes are effectively also HIDDEN
- Nodes with 2 NICs, one that has effectively a NIC connected to a segment that has real public
addresses (IPv4 and/or IPv6) and one NIC that is used for booting and local
management. (OOB) (like in the drawing for farmer setup)
For Farmers, we need to have Nodes to be reachable over IPv6, so that the nodes can:
- expose services to be proxied into containers/vms
- act as aggregating nodes for Overlay Networks for HIDDEN Nodes
Some Nodes in Farms should also have a publicly reachable IPv4, to make sure that clients that only have IPv4 can
effectively reach exposed services.
But we need to stress the importance of IPv6 availability when you're running a multi-node farm in a datacentre: as the
grid is boldly claiming to be a new Internet, we should make sure we adhere to the new protocols that are future-proof.
Hence: IPv6 is the base, and IPv4 is just there to accommodate the transition.
Nowadays, RIPE can't even hand out consecutive /22 IPv4 blocks any more for new LIRs, so you'll be bound to the market to
get IPv4, mostly at rates of 10-15 Euro per IP. Things tend to get costly that way.
So anyway, IPv6 is not an afterthought in 0-OS, we're starting with it.
## Physical setup for farmers
```text
XXXXX XXX
XX XXX XXXXX XXX
X X XXX
X X
X INTERNET X
XXX X X
XXXXX XX XX XXXX
+X XXXX XX XXXXX
|
|
|
|
|
+------+--------+
| FIREWALL/ |
| ROUTER |
+--+----------+-+
| |
+-----------+----+ +-+--------------+
| switch/ | | switch/ |
| vlan segment | | vlan segment |
+-+---------+----+ +---+------------+
| | |
+-------+-------+ |OOB | PUBLIC
| PXE / dhcp | | |
| Ser^er | | |
+---------------+ | |
| |
+-----+------------+----------+
| |
| +--+
| | |
| NODES | +--+
+--+--------------------------+ | |
| | |
+--+--------------------------+ |
| |
+-----------------------------+
```
The PXE/dhcp can also be done by the firewall, your mileage may vary.
## Switch and firewall configs
Single switch, multiple switch, it all boils down to the same:
- one port is an access port on an OOB vlan/segment
- one port is connected to a public vlan/segment
The farmer makes sure that every node properly receives an IPv4 address in the OOB segment by means of dhcp, so
that with a PXE config or USB, a node can effectively start its boot process:
- Download kernel and initrd
- Download and mount the system flists so that the 0-OS daemons can start
- Register itself on the grid
- Query the grid for tasks to execute
For the PUBLIC side of the Nodes, there are a few things to consider:
- It's the farmer's job to inform the grid which node gets an IP address, be it IPv4 or IPv6.
- Nodes that don't receive an IPv4 address will connect to the IPv4 net through the NATed OOB network.
- A farmer is responsible for providing an IPv6 prefix on at least one segment, and for having a Router Advertisement daemon
running to provide SLAAC addressing on that segment.
- That IPv6 Prefix on the public segment should not be firewalled, as it's impossible to know in your firewall what
ports will get exposed for the proxies.
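To illustrate, a Router Advertisement daemon such as radvd could be configured roughly as follows. This is a minimal sketch only: the interface name (`eth1`) and the prefix are placeholders for whatever your allocation and cabling dictate.

```text
# /etc/radvd.conf : minimal stateless SLAAC sketch (placeholder values)
interface eth1
{
    AdvSendAdvert on;       # emit Router Advertisements on this segment
    AdvManagedFlag off;     # stateless: no DHCPv6 for addresses
    AdvOtherConfigFlag off; # no DHCPv6 for other options either
    prefix 2a02:1807:1100::/64
    {
        AdvOnLink on;
        AdvAutonomous on;   # nodes autoconfigure addresses via SLAAC
    };
};
```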
The Nodes themselves have nothing listening that points into the host OS itself, and are by themselves also firewalled.
In dev mode, there is an ssh server with a key-only login, accessible by a select few ;-)
## DHCP/Radvd/RA/DHCP6
For home networks, there is not much to do: a Node will get a private (RFC1918) IPv4 address, and most probably an
IPv6 address in a /64 prefix, but it is not reachable over IPv6 unless the firewall is disabled for IPv6. As we can't
rely on that being possible, we assume these nodes to be HIDDEN.
A normal self-respecting firewall or IP-capable switch can hand out IP[46] addresses, and some can
even do bootp/tftp to get nodes booted over the network.
We are (full of hope) assuming that you have such a beast to configure and can splice your network
into multiple segments.
A segment is a physical network separation. That can be port-based VLANs, or even separate switches, whatever rocks your
boat; the keyword here is **separate**.
On both segments you will need a way to hand out IPv4 addresses based on the MAC addresses of the nodes. Yes, there is some
administration to do, but it's a one-off, and really necessary, because you really need to know which physical machine
has which IP. For lights-out management and locating machines, that is a must.
So you'll need a list of MAC addresses to add to your DHCP server for IPv4, to make sure you know which machine has
received which IPv4 address (see the reservation sketch after the list below).
That is necessary for 2 things:
- locate the node if something is amiss, like being able to pinpoint a node's disk in case it broke (which it will)
- have the node be reachable all the time, without the need to update the grid and network configs every time the node
boots.
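As a sketch only (assuming ISC dhcpd; the subnet, MAC addresses and hostnames are made-up examples), such static reservations could look like this:

```text
# dhcpd.conf excerpt: static MAC-to-IP reservations (placeholder values)
subnet 172.16.1.0 netmask 255.255.255.0 {
    option routers 172.16.1.1;
    range 172.16.1.200 172.16.1.250;  # pool for anything not yet listed
}

host node-01 { hardware ethernet 52:54:00:aa:bb:01; fixed-address 172.16.1.11; }
host node-02 { hardware ethernet 52:54:00:aa:bb:02; fixed-address 172.16.1.12; }
```

The same list of MAC addresses then doubles as your inventory for lights-out management.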
## What happens under the hood (farmer)
While we did our utmost best to keep IPv4 address needs to a strict minimum, at least one Node will need an IPv4 address for handling everything related to Overlay Networks.
For Containers to reach the Internet, any type of connectivity will do, be it NAT or through an internal DMZ that has a
routable IPv4 address.
Internally, a lot of things are set up to have a node properly participate in the grid, as well as to be prepared to partake in the Users' Overlay Networks.
A node connects itself to 'the Internet' depending on a few states.
1. It lives in a fully private network (like it would be connected directly to a port on a home router)
```
XX XXX
XXX XXXXXX
X Internet X
XXXXXXX XXXXX
XX XXX
XX X
X+X
|
|
+--------+-----------+
| HOME / |
| SOHO router |
| |
+--------+-----------+
|
| Private space IPv4
| (192.168.1.0/24)
|
+---------+------------+
| |
| NODE |
| |
| |
| |
| |
| |
+----------------------+
```
1. It lives in a fully public network (like it is connected directly to an uplink and has a public ipv4 address)
```
XX XXX
XXX XXXXXX
X Internet X
XXXXXXX XXXXX
XX XXX
XX X
X+X
|
| fully public space ipv4/6
| 185.69.166.0/24
| 2a02:1802:5e:0:1000::abcd/64
|
+---------+------------+
| |
| NODE |
| |
+----------------------+
```
The node is fully reachable.
1. It lives in a datacentre, where a farmer manages the network.
A little drawing:
```text
+----------------------------------------------------+
| switch |
| |
| |
+----------+-------------------------------------+---+
| |
access | |
mgmt | +---------------+
vlan | | access
| | public
| | vlan
| |
+-------+---------------------+------+
| |
| nic1 nic2 |
| |
| |
| |
| NODE |
| |
| |
| |
+------------------------------------+
```
Or see the more elaborate drawing at the top, which should be sufficient for a sysadmin to comprehend.
A few caveats, though:
- we don't (yet) support NIC bonding (next release)
- we don't (yet) support VLAN tagging on the node, so the switch/router ports facing the nodes need to be access ports into the VLANs on your router/firewall
## Yeah yeah, but really... what now?
Ok, what are the constraints?
A little foreword:
ZosV2 uses IPv6 as its base for networking, where the oldie IPv4 is merely an afterthought. So for it to work properly in its current incarnation (we are working to get it to do IPv4-only too), for now, we need the node to live in a space that provides IPv6 __too__.
IPv4 and IPv6 are very different beasts, so any machine connected to the Internet will do both on the same network. So basically your computer talks 2 different languages when it comes to communicating. That is the same for ZOS, where right now its mother tongue is IPv6.
So your ZOS V2 node can start in a few different settings:
1) You are a farmer and your ISP can provide you with IPv6.
   You're all set: aside from a public IPv4 DHCP, you need to run a stateless-only SLAAC Router Advertiser (ZOS does NOT do DHCPv6).
1) You are a farmer and your ISP asks you what the hell IPv6 is.
   That is problematic right now; wait for the next release of ZosV2.
1) You are a farmer with only one node at home, and https://ipv6.net tells you that you have IPv6 on your PC.
   That means your home router received an IPv6 allocation from the ISP.
   You're all set: your node will boot and register to the grid. If you know what you're doing, you can configure your router to allow all IPv6 traffic in forwarding mode to the specific MAC address of your node. (We'll explain later.)
1) You are a farmer with a few nodes somewhere that are registered on the grid in V1, but you have no clue whether IPv6 is supported where these nodes live.
1) You have a ThreefoldToken node at home, and still do not have a clue.
Basically, it also boils down to a few other cases:
1) the physical network where a node lives has: IPv6 and private-space IPv4
1) the physical network where a node lives has: IPv6 and public IPv4
1) the physical network where a node lives has: only IPv4
But it boils down to this: call your ISP, ask for IPv6. It's the future; for your ISP, it's time. There is no way to circumvent it. No way.
OK then, now what?
1) You're a farmer with a bunch of nodes somewhere in a DC.
   - Your nodes are connected once (with one NIC) to a switch/router. Then your router will have:
     - a segment that carries IPv4 __and__ IPv6:
       - For IPv4, there are 2 possibilities:
         - It's RFC1918 (private space) -> you NAT that subnet (e.g. 192.168.1.0/24) towards the public Internet.
           - You __will__ have difficulty designating a public IPv4 entrypoint into your farm.
           - Your workloads will only be reachable through the overlay.
           - Your storage will not be reachable.
         - You received an IPv4 range you can utilise (small, because of the scarcity of IPv4 addresses: your ISP will give you only limited and pricey IPv4 addresses).
           - Things are better: the nodes can live in public IPv4 space, where they can be used as entrypoints.
           - This is a standard configuration that works.
       - For IPv6, your router is a Router Advertiser that provides SLAAC (stateless, unmanaged) for that segment, working with a /64 prefix.
         - The nodes will be reachable over IPv6.
         - The storage backend will be available for the full grid.
         - Everything will just work.
     The best solution for a single NIC is thus:
     - an IPv6 prefix
     - an IPv4 subnet (however small)
   - Your nodes have 2 connections, and you want to separate management from user traffic.
     - The same applies as above, where the best outcome is obtained with a real IPv6 prefix allocation and a small public subnet that is routable.
     - The second NIC (typically 10GBit) will then carry everything public, and the first NIC will just be there for management, living in private space for IPv4, mostly without IPv6.
     - Your switch needs to be configured with port-based VLANs, so the segments are properly separated, and your router needs to reflect that VLAN config so that separation is handled by the firewall in the router (iptables, pf, ACLs, ...); a switch-port sketch follows this list.
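For the port-based VLAN separation mentioned in that last bullet, a Cisco-IOS-style switch sketch might look like this; the port names and VLAN IDs are placeholders, and other vendors use different syntax:

```text
! switch sketch: one OOB access port and one public access port per node (placeholder values)
vlan 10
 name oob-mgmt
vlan 20
 name public
!
interface GigabitEthernet1/0/1
 description node-01 OOB
 switchport mode access
 switchport access vlan 10
!
interface GigabitEthernet1/0/2
 description node-01 public
 switchport mode access
 switchport access vlan 20
```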

View File

@ -1,8 +0,0 @@
<h1> Zero-OS Networking </h1>
<h2> Table of Contents </h2>
- [Introduction to networkd](./introduction.md)
- [Vocabulary Definitions](./definitions.md)
- [Wireguard Mesh Details](./mesh.md)
- [Farm Network Setup](./setup_farm_network.md)

View File

@ -1,123 +0,0 @@
<h1>Setup</h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Running ZOS (v2) at home](#running-zos-v2-at-home)
- [Running ZOS (v2) in a multi-node farm in a DC](#running-zos-v2-in-a-multi-node-farm-in-a-dc)
- [Necessities](#necessities)
- [IPv6](#ipv6)
- [Routing/firewalling](#routingfirewalling)
- [Multi-NIC Nodes](#multi-nic-nodes)
- [Farmers and the grid](#farmers-and-the-grid)
***
## Introduction
We present ZOSv2 network considerations.
Running ZOS on a node is just a matter of booting it with a USB stick, or with a dhcp/bootp/tftp server with the right configuration so that the node can start the OS.
Once it starts booting, the OS detects the NICs and starts the network configuration. A Node can only continue its boot process to the end when it has effectively received an IP address and a route to the Internet. Without that, the Node will retry indefinitely to obtain Internet access and will not finish its startup.
So a Node needs to be connected to a __wired__ network that provides a DHCP server and a default gateway to the Internet, be it NATed or plainly on the public network; any route to the Internet, be it IPv4, IPv6 or both, is sufficient.
For a node to have that ability to host user networks, we **strongly** advise to have a working IPv6 setup, as that is the primary IP stack we're using for the User Network's Mesh to function.
## Running ZOS (v2) at home
Running a ZOS Node at home is plain simple: connect it to your router, plug it into the network, insert the preconfigured USB stick containing the bootloader and the `farmer_id`, and power it on.
You will then see it appear in the Cockpit (`https://cockpit.testnet.grid.tf/capacity`), under your farm.
## Running ZOS (v2) in a multi-node farm in a DC
Multi-Node Farms, where a farmer wants to host the nodes in a data centre, have basically the same simplicity, but the nodes can boot from a boot server that provides for DHCP, and also delivers the iPXE image to load, without the need for a USB stick in every Node.
A boot server is not really necessary, but it helps ;-). That server has a list of the MAC addresses of the nodes, and delivers the bootloader over PXE. The farmer is responsible for setting up the network and configuring the boot server.
### Necessities
The Farmer needs to:
- Obtain an IPv6 prefix allocation from the provider. A publicly reachable `/64` will do, but a `/48` is advisable if the farmer wants to provide IPv6 transit for User Networks.
- If IPv6 is not an option, obtain an IPv4 subnet from the provider. At least one IPv4 address per node is needed, where all IP addresses are publicly reachable.
- Have the Nodes connected on that public network with a switch so that all Nodes are publicly reachable.
- In case of multiple NICs, also make sure the farm is properly registered in BCDB, so that the Nodes' public IP addresses are registered.
- Properly list the MAC addresses of the Nodes, and configure the DHCP server to provide an IP address per Node, and in case of multiple NICs also provide private IP addresses over DHCP per Node (a boot-server config sketch follows this list).
- Make sure that after first boot, the Nodes are reachable.
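As an illustration only (assuming dnsmasq as the combined DHCP/TFTP boot server; the MAC addresses, IPs and bootloader file name are placeholders), such a boot-server configuration could look roughly like this:

```text
# dnsmasq.conf excerpt: DHCP + PXE boot sketch (placeholder values)
interface=eth0
dhcp-range=172.16.1.100,172.16.1.200,12h

# one reservation per node, keyed on its MAC address
dhcp-host=52:54:00:aa:bb:01,172.16.1.11,node-01
dhcp-host=52:54:00:aa:bb:02,172.16.1.12,node-02

# serve the bootloader over TFTP
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=ipxe.efi
```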
### IPv6
IPv6, although already a real protocol since '98, has seen reluctant adoption over the time it has existed. That is mostly because ISPs and carriers were reluctant to deploy it, not seeing the need since the advent of NAT and private IP space, which gives a false impression of security.
But this month (10/2019), RIPE sent a mail to all its LIRs that the last consecutive /22 in IPv4 has been allocated. Needless to say, that makes the transition to IPv6 in 2019 of the utmost importance and necessity.
Hence, ZOS starts with IPv6, and IPv4 is merely an afterthought ;-)
So in a nutshell: we greatly encourage Farmers to have IPv6 on the Nodes' network.
### Routing/firewalling
Basically, the Nodes are self-protecting, in the sense that they provide no means at all to be accessed through listening processes. No service is active on the node itself, and User Networks function solely on an overlay.
That also means that there is no need for a Farm admin to protect the Nodes from exterior access, although some DDoS protection might be a good idea.
In the first phase we will still allow the Host OS (ZOS) to reply to ICMP ping requests, but that 'feature' might as well be blocked in the future, as once a Node is able to register itself, there is no real need to ever want to reach it.
### Multi-NIC Nodes
Nodes that Farmers deploy are typically multi-NIC Nodes, where one NIC (typically 1GBit) is used for the management network from which a proper DHCP server lets the Nodes boot, and another NIC (1GBit or even 10GBit) is then used for transfers of User Data, so that there is a clean separation and injection of bogus data is not possible.
That means that there would be two networks, either by different physical switches, or by port-based VLANs in the switch (if there is only one).
- Management NICs
The Management NIC will be used by ZOS to boot and to register itself to the Grid. Also, all communication from the Node to the Grid happens from there.
- Public NICs
### Farmers and the grid
A Node, being part of the Grid, has no concept of 'Farmer'. The only relationship between a Node and a Farmer is the fact that it is registered 'somewhere (TM)', and that as such, workloads on a Node will be remunerated with Tokens. For the rest, a Node is a wholly stand-alone thing that participates in the Grid.
```text
172.16.1.0/24
2a02:1807:1100:10::/64
+--------------------------------------+
| +--------------+ | +-----------------------+
| |Node ZOS | +-------+ | |
| | +-------------+1GBit +--------------------+ 1GBit switch |
| | | br-zos +-------+ | |
| | | | | |
| | | | | |
| | | | +------------------+----+
| +--------------+ | | +-----------+
| | OOB Network | | |
| | +----------+ ROUTER |
| | | |
| | | |
| | | |
| +------------+ | +----------+ |
| | Public | | | | |
| | container | | | +-----+-----+
| | | | | |
| | | | | |
| +---+--------+ | +-------------------+--------+ |
| | | | 10GBit Switch | |
| br-pub| +-------+ | | |
| +-----+10GBit +-------------------+ | +---------->
| +-------+ | | Internet
| | | |
| | +----------------------------+
+--------------------------------------+
185.69.167.128/26 Public network
2a02:1807:1100:0::/64
```
Here the underlay part of the wireguard interfaces gets instantiated in the Public container (namespace), and once created these wireguard interfaces get sent into the User Network (Network Resource), where a user can then configure the interface as they see fit.
The farmer's router fulfills 2 roles (a minimal sketch follows this list):
- NAT everything in the OOB network to the outside, so that nodes can start and register themselves, as well as get tasks to execute from the BCDB.
- Route the assigned IPv4 subnet and IPv6 public prefix on the public segment, to which the public container is connected.
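As a sketch only (assuming a Linux box acting as the router; the interface names `uplink0` and `pub0` are placeholders, while the address ranges are the ones from the drawing above), those two roles could translate to something like:

```text
# Linux router sketch (placeholder interface names)
# 1) NAT the OOB network towards the uplink
iptables -t nat -A POSTROUTING -s 172.16.1.0/24 -o uplink0 -j MASQUERADE

# 2) Route the assigned public IPv4 subnet and IPv6 prefix onto the public segment
ip route add 185.69.167.128/26 dev pub0
ip -6 route add 2a02:1807:1100:0::/64 dev pub0
```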
As such, in case the farmer wants to provide public IPv4 access for grid proxies, the node will need at least one (1) IPv4 address. The farmer is free to assign IPv4 addresses to only a part of the Nodes.
On the other hand, it is quite important to have a proper IPv6 setup, because things will work out better.
It's the Farmer's task to set up the router and the switches.
In a simpler setup (a small number of nodes, for instance), the farmer could set up a single switch and make 2 port-based VLANs to separate OOB and Public, or even, with single-NIC nodes, just put them directly on the public segment, but then a DHCP server must be provided on the Public network.

View File

@ -1,68 +0,0 @@
# On boot
> This is set up by the `internet` daemon, which is part of the bootstrap process.
The first basic network setup is done here; the point of this setup is to connect the node to the internet, to be able to continue the rest of the boot process.
- Go over all **PLUGGED and PHYSICAL** interfaces.
- Each matching interface is tested to see whether it can get both IPv4 and IPv6.
- If multiple interfaces receive an IPv4 address from DHCP, we prefer the one with the `smallest` IP among those with a private gateway IP; otherwise, if no private gateway IP is found, we simply take the one with the smallest IP (a small selection sketch follows the figure below).
- Once the interface is found, we do the following (we will call this interface **eth**):
  - Create a bridge named `zos`
  - Disable IPv6 on this bridge, and IPv6 forwarding
  - Run `udhcpc` on the zos bridge
![zos-bridge](png/zos-bridge.png)
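The selection rule above could be sketched as follows. This is illustrative Go, not the actual zos code; the type and field names are made up here:

```go
// Package ifselect sketches the boot-time interface selection described above:
// among candidates that obtained an IPv4 lease, prefer those whose gateway is a
// private (RFC1918) address, then take the numerically smallest IP.
package ifselect

import (
	"bytes"
	"net"
)

// Candidate is a hypothetical record of an interface that got a DHCP lease.
type Candidate struct {
	Name    string
	IP      net.IP
	Gateway net.IP
}

// Pick returns the preferred candidate, or nil if the slice is empty.
func Pick(cands []Candidate) *Candidate {
	var best *Candidate
	better := func(a, b *Candidate) bool {
		if b == nil {
			return true
		}
		// a private gateway wins over a non-private one
		ap, bp := a.Gateway.IsPrivate(), b.Gateway.IsPrivate()
		if ap != bp {
			return ap
		}
		// otherwise the smallest IP wins
		return bytes.Compare(a.IP.To4(), b.IP.To4()) < 0
	}
	for i := range cands {
		if better(&cands[i], best) {
			best = &cands[i]
		}
	}
	return best
}
```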
Once this setup is complete, the node has access to the internet, which allows it to download and run `networkd`, which takes over the network stack and continues the process as follows.
# Network Daemon
- Validate the zos setup created by the `internet` on-boot daemon
- Send information about all local NICs to the explorer (?)
## Setting up `ndmz`
First we need to find the master interface for ndmz; we have the following cases:
- the master of `public_config`, if set. Public Config is an external configuration that is set by the farmer on the node object. That information is retrieved by the node from the public explorer.
- otherwise (if public_config is not set), check if the public namespace is set (this is probably a dead branch, because if this exists (or can exist) it means the master is always set, which means it will always get used)
- otherwise, find the first interface with IPv6
- otherwise, check if zos has a global unicast IPv6 address
- otherwise, it is a hidden node (it still uses zos, but in the hidden-node setup)
### Hidden node ndmz
![ndmz-hidden](png/ndmz-hidden.png)
### Dualstack ndmz
![ndmz-dualstack](png/ndmz-dualstack.png)
## Setting up Public Config
This is an external configuration step that is set by the farmer on the node object. The node then picks this setup up from the explorer.
![public-namespace](png/public-namespace.png)
## Setting up Yggdrasil
- Get a list of all public peers with status `up`.
- If hidden node:
  - Find peers with IPv4 addresses.
- If dual-stack node:
  - Filter out all peers with the same prefix as the node, to avoid connecting only locally.
- Write down the yggdrasil config (a minimal sketch follows this list), and start the yggdrasil daemon via zinit.
  - yggdrasil runs inside the ndmz namespace.
- Add an IPv6 address to npub in the same prefix as yggdrasil. This way, when npub6 is used as a gateway for this prefix, traffic
will be routed through yggdrasil.
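As a loose sketch of what the peer section of that generated config might contain (the peer URIs below are placeholders, and the real file carries more fields whose exact schema comes from yggdrasil itself):

```text
{
  Peers: [
    "tcp://203.0.113.10:9943"
    "tls://[2001:db8::1]:9944"
  ]
}
```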
# Creating a network resource
A network resource (`NR` for short) is a user private network that lives on the node and can span multiple nodes over wireguard. When a network is deployed, the node builds a user namespace as follows:
- A unique network id is generated by taking md5sum(user_id + network_name) and keeping only the first 13 bytes. We will call this `net-id` (a small sketch follows the figure below).
![nr-1](png/nr-step-1.png)
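As a rough illustration of that derivation (assuming the "13 bytes" are the first 13 hex characters of the digest; the exact encoding used by zos may differ):

```go
// Package netid sketches the net-id derivation described above.
package netid

import (
	"crypto/md5"
	"encoding/hex"
)

// NetID returns md5(user id + network name), truncated to 13 hex characters.
func NetID(userID, networkName string) string {
	sum := md5.Sum([]byte(userID + networkName))
	return hex.EncodeToString(sum[:])[:13]
}
```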
## Create the wireguard interface
If the node has `public_config`, so the `public` namespace exists, then the wireguard device is first created inside the `public` namespace and then moved
to the network-resource namespace.
Otherwise, the wireguard device is created in the host namespace and then moved to the network-resource namespace. The final result is:
![nr-2](png/nr-step-2.png)
Finally, the wireguard peer list is applied and configured; routing rules are also configured to route traffic to the wireguard interface.
# Member joining a user network (network resource)
![nr-join](png/nr-join.png)

View File

@ -1,57 +0,0 @@
@startuml
[zos\nbridge] as zos
[br-pub\nbridge] as brpub
[br-ndmz\nbridge] as brndmz
note top of brndmz
disable ipv6
- net.ipv6.conf.br-ndmz.disable_ipv6 = 1
end note
' brpub -left- zos : veth pair\n(tozos)
brpub -down- master
note right of master
master is found as described
in the readme (this can be zos bridge)
in case of a single node machine
end note
package "ndmz namespace" {
[tonrs\nmacvlan] as tonrs
note bottom of tonrs
- net.ipv4.conf.tonrs.proxy_arp = 0
- net.ipv6.conf.tonrs.disable_ipv6 = 0
Addresses:
100.127.0.1/16
fe80::1/64
fd00::1
end note
tonrs - brndmz: macvlan
[npub6\nmacvlan] as npub6
npub6 -down- brpub: macvlan
[npub4\nmacvlan] as npub4
npub4 -down- zos: macvlan
note as MAC
gets static mac address generated
from node id. to make sure it receives
same ip address.
end note
MAC .. npub4
MAC .. npub6
note as setup
- net.ipv6.conf.all.forwarding = 1
end note
[ygg0]
note bottom of ygg0
this will be added by yggdrasil setup
in the next step
end note
}
footer (hidden node) no master with global unicast ipv6 found
@enduml

View File

@ -1,55 +0,0 @@
@startuml
[zos\nbridge] as zos
note left of zos
currently selected master
for hidden ndmz setup
end note
[br-pub\nbridge] as brpub
[br-ndmz\nbridge] as brndmz
note top of brndmz
disable ipv6
- net.ipv6.conf.br-ndmz.disable_ipv6 = 1
end note
brpub -left- zos : veth pair\n(tozos)
package "ndmz namespace" {
[tonrs\nmacvlan] as tonrs
note bottom of tonrs
- net.ipv4.conf.tonrs.proxy_arp = 0
- net.ipv6.conf.tonrs.disable_ipv6 = 0
Addresses:
100.127.0.1/16
fe80::1/64
fd00::1
end note
tonrs - brndmz: macvlan
[npub6\nmacvlan] as npub6
npub6 -right- brpub: macvlan
[npub4\nmacvlan] as npub4
npub4 -down- zos: macvlan
note as MAC
gets static mac address generated
from node id. to make sure it receives
same ip address.
end note
MAC .. npub4
MAC .. npub6
note as setup
- net.ipv6.conf.all.forwarding = 1
end note
[ygg0]
note bottom of ygg0
this will be added by yggdrasil setup
in the next step
end note
}
footer (hidden node) no master with global unicast ipv6 found
@enduml

View File

@ -1,23 +0,0 @@
@startuml
component "br-pub" as public
component "b-<netid>\nbridge" as bridge
package "<reservation-id> namespace" {
component eth0 as eth
note right of eth
set ip as configured in the reservation
it must be in the subnet assigned to n-<netid>
in the user resource above.
- set default route through n-<netid>
end note
eth .. bridge: veth
component [pub\nmacvlan] as pub
pub .. public
note right of pub
only if public ipv6 is requested
also gets a consistent MAC address
end note
}
@enduml

View File

@ -1,31 +0,0 @@
@startuml
component [b-<netid>] as bridge
note left of bridge
- net.ipv6.conf.b-<netid>.disable_ipv6 = 1
end note
package "n-<netid> namespace" {
component [n-<netid>\nmacvlan] as nic
bridge .. nic: macvlan
note bottom of nic
- nic gets the first ip ".1" in the assigned
user subnet.
- an ipv6 driven from ipv4 that is driven from the assigned ipv4
- fe80::1/64
end note
component [public\nmacvlan] as public
note bottom of public
- gets an ipv4 in 100.127.0.9/16 range
- get an ipv6 in the fd00::/64 prefix
- route over 100.127.0.1
- route over fe80::1/64
end note
note as G
- net.ipv6.conf.all.forwarding = 1
end note
}
component [br-ndmz] as brndmz
brndmz .. public: macvlan
@enduml

View File

@ -1,33 +0,0 @@
@startuml
component [b-<netid>] as bridge
note left of bridge
- net.ipv6.conf.b-<netid>.disable_ipv6 = 1
end note
package "n-<netid> namespace" {
component [n-<netid>\nmacvlan] as nic
bridge .. nic: macvlan
note bottom of nic
- nic gets the first ip ".1" in the assigned
user subnet.
- an ipv6 driven from ipv4 that is driven from the assigned ipv4
- fe80::1/64
end note
component [public\nmacvlan] as public
note bottom of public
- gets an ipv4 in 100.127.0.9/16 range
- get an ipv6 in the fd00::/64 prefix
- route over 100.127.0.1
- route over fe80::1/64
end note
note as G
- net.ipv6.conf.all.forwarding = 1
end note
component [w-<netid>\nwireguard]
}
component [br-ndmz] as brndmz
brndmz .. public: macvlan
@enduml

Some files were not shown because too many files have changed in this diff.