manual added knowledge_base

This commit is contained in:
2024-04-15 22:10:30 +00:00
parent b63f091e63
commit 6caab6f95d
448 changed files with 11437 additions and 0 deletions

View File

@@ -0,0 +1,25 @@
## Deterministic Deployment
- flists concept (deduplicated virtual filesystem, no install, ...)
The deduplicated flist filesystem uses FUSE, an interface that allows you to implement a filesystem in user space: it is a virtual filesystem.
Only the metadata is exposed. The system sees the full tree of the image, but the data itself is not there; data is downloaded whenever it is accessed.
There are multiple ways to create an flist:
- Convert an existing Docker image hosted on the Docker Hub
- Push an archive (such as a tgz) to the hub
- A library and CLI tool exist to build an flist from scratch: doing it this way, a directory is populated locally, and the flist is then created from it with the CLI tool.
- A [GitHub action](https://github.com/threefoldtech/publish-flist) allows you to build an flist directly from a GitHub workflow, useful for developers on GitHub
Be aware that the flist system works a bit differently from the usual deployment of containers (Docker): it does not mount volumes from your local disk into the container for configuration.
With flists you need to modify your image to take its configuration from the environment. This is basically how Docker was originally intended to be used.
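The lazy-download behavior described above can be sketched in a few lines. This is a hypothetical illustration only (the `FlistFile` class and the in-memory "hub" store are invented for the example), not the actual flist implementation:

```python
import hashlib

class FlistFile:
    """Sketch of an flist entry: metadata (path, size, hash) is always
    visible, but the content is fetched only on first access."""
    def __init__(self, path, size, content_hash, fetch):
        self.path = path                  # full path visible in the tree
        self.size = size                  # size known from metadata alone
        self.content_hash = content_hash  # fingerprint of the content
        self._fetch = fetch               # callable that downloads the blob
        self._data = None                 # nothing downloaded yet

    def read(self):
        if self._data is None:            # lazy download on first read
            blob = self._fetch(self.content_hash)
            # verify the blob matches the fingerprint before exposing it
            assert hashlib.sha256(blob).hexdigest() == self.content_hash
            self._data = blob
        return self._data

# in-memory stand-in for the hub's content store
store = {}
blob = b"#!/bin/sh\necho hello\n"
key = hashlib.sha256(blob).hexdigest()
store[key] = blob

f = FlistFile("/bin/hello.sh", len(blob), key, store.__getitem__)
print(f.size)    # metadata is available before any download happens
print(f.read())  # content is fetched (and verified) on demand
```

Because identical content hashes to the same key, every file is stored only once in the hub, which is where the deduplication comes from.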
- Smart contract for IT
The smart contract for IT concept is applicable to any workload: containers, VMs, all gateway primitives, volumes, Kubernetes and networks.
It is a static agreement between farmer and user about the deployment of an IT workload:
- no dynamic behavior for deployment at runtime
- no process can start unless its files are 100% described at the flist level
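One way to picture such a static agreement (the field names below are illustrative, not the actual TF Grid schema): both parties agree on a canonical digest of the fully described workload, and any change to the description invalidates it.

```python
import hashlib, json

def workload_digest(workload: dict) -> str:
    # canonical JSON so the same description always hashes the same
    canonical = json.dumps(workload, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

workload = {
    "type": "container",
    "flist": "https://hub.example/app.flist",  # hypothetical flist URL
    "cpu": 2,
    "memory_mb": 1024,
    "network": "my_private_overlay",
}
agreed = workload_digest(workload)

# the node only starts the workload if what it receives hashes to
# exactly the digest farmer and user agreed on up front
tampered = dict(workload, cpu=16)
print(agreed == workload_digest(workload))   # unchanged description matches
print(agreed == workload_digest(tampered))   # any change breaks the agreement
```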

View File

@@ -0,0 +1,14 @@
# Docker compatibility
Docker is recognized as the market leader in containerization technology. Many enterprises and software developers have adopted Docker's technology stack to build a DevOps (development and operations, more information [here](https://en.wikipedia.org/wiki/DevOps)) "train" (an internal process, a way of developing and delivering software) for delivering updates to applications and new applications. Regardless of how this DevOps "train" is organized, it always produces Docker (application) images and deployment methods. Hercules is built with 100% backwards compatibility with these Docker images and deployment methods in mind.
A major step in accepting and importing Docker images is to transpose them to the [ZOS Filesystem](zos_fs).
## Features
- 100% backwards compatible with all existing and newly created Docker images
- Easy import and transpose facility
- Deduplicated application deployment, simplifying application image management and versioning
!!!include:zos_toc


View File

@@ -0,0 +1,38 @@
# Network wall
![](img/webgateway.jpg)
> the best security = no network = no incoming TCP/IP from the internet to the containers
This is done via sockets.
- The TCP router client opens a socket to the TCP router server, which resides on the web gateway.
- When an HTTP request arrives at this TCP router server, the payload of the request is carried back over the socket to the TCP router client.
- The TCP router client then sends the HTTP request to the server residing in the container.
- No TCP comes from the outside world into the container, creating the most secure connection possible.
- The TCP router client opens the socket; the TCP router server that received the HTTP request puts it on that socket.
- Only data comes in on the socket, and it is replayed locally: the TCP router client makes an HTTPS request.
This mechanism is currently implemented for HTTPS, but in the future other protocols such as SQL, Redis and HTTP can be supported as well.
The end result is that only data goes over the network.
If the container can no longer reach the local TCP stack and can only make outgoing connections to the gateway, then no TCP comes in from the outside anymore.
This is what we call the 'Network wall'.
As a consequence, no TCP/IP is coming in AT ALL, giving the full setup an unprecedented level of security.
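The relay idea above can be sketched with plain sockets. This is a toy illustration, not the actual TCP router code: a local socket pair stands in for the client's pre-opened outbound connection to the gateway, and a trivial local server stands in for the service inside the container.

```python
import socket, threading

def tcp_router_client(gateway_sock, local_addr):
    # runs inside the private network: the only connection it uses is the
    # *outbound* one it opened to the gateway; nothing listens publicly
    data = gateway_sock.recv(65536)          # request relayed by the gateway
    local = socket.create_connection(local_addr)
    local.sendall(data)                      # replay it to the local server
    resp = local.recv(65536)
    gateway_sock.sendall(resp)               # response goes back over the socket
    local.close()

def fake_local_server(srv):
    # stands in for the web server running inside the container
    conn, _ = srv.accept()
    conn.recv(65536)
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nhello")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=fake_local_server, args=(srv,), daemon=True).start()

# socketpair() stands in for the outbound link the client opened earlier
gw_end, client_end = socket.socketpair()
t = threading.Thread(target=tcp_router_client,
                     args=(client_end, srv.getsockname()), daemon=True)
t.start()

gw_end.sendall(b"GET / HTTP/1.1\r\n\r\n")    # gateway relays an incoming request
reply = gw_end.recv(65536)
t.join()
print(reply)
```

Note that at no point does anything inside the "container" accept a connection from the gateway side: all it ever sees is data arriving on the socket it opened itself.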
## More detailed explanation
- Containers are behind NAT. We don't allow incoming traffic.
- All connections need to originate from the container towards the outside world, which is very neat from a network security perspective.
- Because connections must be outgoing, this protects against DDoS and other attacks: there is nothing to connect to.
- How do you access it then? Drive traffic inside the container: a proxy or load balancer is exposed publicly, while the rest of the traffic stays in the private network, inaccessible from outside.
- This way you can limit the number of containers that are actually reachable from the outside world.
- You don't have to worry about how to secure your database, as it is not exposed and only accessible inside the WireGuard private network.
- In containers you can specify a specific IPv6 address: you deploy a reverse proxy in a container with public access as the entry point into the network, and deploy a reverse TCP connection (the TCP router client) that connects to the gateway and allows incoming connections.
!!!def alias:network_wall,net_wall
!!!include:zos_toc

View File

@@ -0,0 +1,30 @@
![](img/network_architecture2.jpg)
# Peer2Peer Network Concept
## Introduction
True peer-to-peer is a principle that exists everywhere within ThreeFold's technology stack, especially in its network architecture. Farmers produce IT capacity by connecting hardware to the network and installing Zero-OS. This peer-to-peer network of devices forms the TF Grid, a universal substrate on which a large variety of IT workloads run.
## Peer-to-peer networking
The TF Grid is built from 3Nodes (hardware + Zero-OS) that are connected to the internet using the IPv6 protocol. To future-proof the grid, IPv6 has been chosen as the ThreeFold Grid's native networking technology. The TF Grid operates on IPv6 (where available) and creates peer-to-peer network connections between all the containers (and other primitives). Please find more about Zero-OS primitives in our [SDK manual](manual3_home).
This creates a many-to-many web of (encrypted) point-to-point network connections which together make a (private) secure **overlay network**. This network is completely private and connects only the primitives that have been deployed in your network.
TF Network Characteristics:
- Connect all containers point-to-point;
- All traffic is encrypted;
- High performance;
- The shortest path between two end-points, multi-homed containers;
- Could span large geographical areas and create virtual data centers;
- All created and made operational **without** public access from the internet.
## Existing Enterprise Private Networks
At ThreeFold, we are aware of existing enterprise private networks: IPsec, VPNs, WANs and more. We have the facility to create bridges that make those networks part of the deployed private overlay networks. This is in an early stage of development, but with the right level of interest it could be built out in the near future.
![](img/network_architecture.jpg)
!!!def alias:quantumsafe_network_concept,qsn_concept

View File

@@ -0,0 +1,14 @@
## Unbreakable Storage
- Unlimited history
- Survives network, datacenter or node breakdown
- No silent corruption possible
- Quantum safe (data cannot be decrypted by quantum computers) as long as the quantum computer has no access to the metadata
- Self-healing & autocorrecting
If you deploy a container with simple disk access, you do not get these guarantees.
Performance is around 50 MB/s; this is achieved when a bit more CPU is given to the distributed storage encoder.
For more information, read [this documentation](../../primitives/storage/qsfs.md).

View File

@@ -0,0 +1,40 @@
## Zero Boot
> Zero Boot = Zero-OS boot process
ZOS Boot is a boot facility that allows 3Nodes to boot from network boot servers located in the TF Grid. This boot mechanism creates as little operational and administrative overhead as possible. ZOS Boot is a crucial part of enabling autonomy by *not* having the operating system installed on the local disks of 3Nodes. With a network boot facility and no local operating system files, you immediately erase a number of operational and administrative tasks:
- to install the operating system to start with
- to keep track of which systems run which version of the operating system (especially in large setups this is a complicated and error prone task)
- to keep track of patches and bug fixes that have been applied to systems
That's just the administrative and operational part of maintaining a server estate with a locally installed operating system. On the security side of things the benefits are even greater:
- Many hacking activities are geared towards adding to or changing parts of the operating system files. This is a threat both from local physical access to servers and over the network. When there are no local operating system files installed, this threat does not exist.
- Accidental overwriting, deletion or corruption of operating system files. Servers run many processes, and many of these processes have administrative access to be able to do what they need to do. Accidental deletion or overwriting of crucial files on disk will make the server fail to reboot.
- Access control. If there is no local operating system installed, access control, user rights, etc. are unnecessary functions and features and do not have to be implemented.
### How
From this flist image, a small partition is mounted in memory to start booting the machine; it gets iPXE (which downloads what it needs), and then Zero-OS boots.
After that, the node goes to the hub and downloads the different flists.
There is one main flist that triggers the download of multiple flists. Read more [here](../../../flist/flist.md).
These contain all the components/daemons that are part of Zero-OS.
The download of the zos-bins, i.e. the external binaries, is also triggered this way (https://hub.grid.tf/tf-zos-bins).
The core components of Zero-OS can be found in the [Zero-OS repo](https://github.com/threefoldtech/zos/tree/master/bins/packages). If something changes in that directory, a workflow is triggered to rebuild the full flist and push it to the hub.
When a node discovers that there is a new version of one of these flists on the hub, it downloads it and restarts the daemon with the new version.
Over its lifetime, the node keeps polling the hub directories to check whether new daemons/flists/binaries are available and whether things need to be upgraded.
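The upgrade loop described above amounts to a simple reconciliation between the versions published on the hub and the versions currently running. The sketch below is hypothetical (daemon names and data structures are invented for the example):

```python
def reconcile(hub_versions, running, restart):
    """One poll cycle: restart every daemon whose version on the hub
    differs from the version that is currently running."""
    for daemon, version in hub_versions.items():
        if running.get(daemon) != version:
            restart(daemon, version)   # i.e. pull the new flist, restart daemon
            running[daemon] = version

running = {"networkd": "v1", "storaged": "v1"}
restarted = []
hub = {"networkd": "v2", "storaged": "v1", "contd": "v1"}  # hub moved on

reconcile(hub, running, lambda d, v: restarted.append((d, v)))
print(restarted)  # only out-of-date or missing daemons get restarted
```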
### Features
The features of ZOS Boot are:
- no local operating system installed
- network boot from the grid to get on the grid
- decreased administrative and operational work, allowing for autonomous operations
- increased security
- increased efficiency (deduplication, only one version of the OS stored for thousands of servers)
- all server storage space is available for end-user workloads (average operating system size is around 10GB)
- the bootloader is less than 1MB in size and can be presented to the servers as a PXE script, USB boot device or ISO boot image

View File

@@ -0,0 +1,6 @@
## Zero Hacking Surface
Zero does not mean hacking is impossible; we use this term to indicate that we have minimized the attack surface for hackers.
- There is no shell/server interface at the Zero-OS level (our operating system)
- There are no hidden or unintended processes running which are not prevalidated
One comment: there is still an SSH server running with the keys of a few people on each server; it is not yet disabled. It will be disabled in the near future; for now it is still useful for debugging, but it is a backdoor. The creation of a new primitive, where the farmer agrees to give access to administrators, is under analysis. This way, when a reservation is sent to a node, an SSH server is booted with a chosen key to allow admins in.

View File

@@ -0,0 +1,19 @@
## Zero-OS Installation
The Zero-OS is delivered to the 3Nodes over the internet network (network boot) and does not need to be installed.
### 3Node Install
1. Acquire a computer (server).
2. Configure a farm on the TFGrid explorer.
3. Download the bootloader and put it on a USB stick, or configure a network boot device.
4. Power on the computer and connect to the internet.
5. Boot! The computer will automatically download the components of the operating system (Zero-OS).
The actual bootloader is very small. It brings up the network interface of your computer and queries the TFGrid for the remainder of the boot files needed.
The operating system is not installed on any local storage medium (hard disk, ssd). Zero-OS is stateless.
The mechanism to allow this to work in a safe and efficient manner is a ThreeFold innovation called our container virtual filesystem.
For more information on setting up a 3Node, please refer to the [Farmers documentation](../../../farmers/farmers.md).

View File

@@ -0,0 +1,134 @@
<h1> Zero-OS Advantages </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Zero-OS Installation](#zero-os-installation)
- [3Node Install](#3node-install)
- [Unbreakable Storage](#unbreakable-storage)
- [Zero Hacking Surface](#zero-hacking-surface)
- [Zero Boot](#zero-boot)
- [How](#how)
- [Features](#features)
- [Deterministic Deployment](#deterministic-deployment)
- [Zero-OS Protect](#zero-os-protect)
## Introduction
We present the many advantages of Zero-OS.
## Zero-OS Installation
The Zero-OS is delivered to the 3Nodes over the internet network (network boot) and does not need to be installed.
### 3Node Install
1. Acquire a computer (server).
2. Configure a farm on the TFGrid explorer.
3. Download the bootloader and put it on a USB stick, or configure a network boot device.
4. Power on the computer and connect to the internet.
5. Boot! The computer will automatically download the components of the operating system (Zero-OS).
The actual bootloader is very small. It brings up the network interface of your computer and queries the TFGrid for the remainder of the boot files needed.
The operating system is not installed on any local storage medium (hard disk, ssd). Zero-OS is stateless.
The mechanism to allow this to work in a safe and efficient manner is a ThreeFold innovation called our container virtual filesystem.
For more information on setting up a 3Node, please refer to the [Farmers documentation](../../../../documentation/farmers/farmers.md).
## Unbreakable Storage
- Unlimited history
- Survives network, datacenter or node breakdown
- No silent corruption possible
- Quantum safe (data cannot be decrypted by quantum computers) as long as the quantum computer has no access to the metadata
- Self-healing & autocorrecting
If you deploy a container with simple disk access, you do not get these guarantees.
Performance is around 50 MB/s; this is achieved when a bit more CPU is given to the distributed storage encoder.
For more information, read [this documentation](../../primitives/storage/qsfs.md).
## Zero Hacking Surface
Zero does not mean hacking is impossible; we use this term to indicate that we have minimized the attack surface for hackers.
- There is no shell/server interface at the Zero-OS level (our operating system)
- There are no hidden or unintended processes running which are not prevalidated
One comment: there is still an SSH server running with the keys of a few people on each server; it is not yet disabled. It will be disabled in the near future; for now it is still useful for debugging, but it is a backdoor. The creation of a new primitive, where the farmer agrees to give access to administrators, is under analysis. This way, when a reservation is sent to a node, an SSH server is booted with a chosen key to allow admins in.
## Zero Boot
> Zero Boot = Zero-OS boot process
ZOS Boot is a boot facility that allows 3Nodes to boot from network boot servers located in the TF Grid. This boot mechanism creates as little operational and administrative overhead as possible. ZOS Boot is a crucial part of enabling autonomy by *not* having the operating system installed on the local disks of 3Nodes. With a network boot facility and no local operating system files, you immediately erase a number of operational and administrative tasks:
- to install the operating system to start with
- to keep track of which systems run which version of the operating system (especially in large setups this is a complicated and error prone task)
- to keep track of patches and bug fixes that have been applied to systems
That's just the administrative and operational part of maintaining a server estate with a locally installed operating system. On the security side of things the benefits are even greater:
- Many hacking activities are geared towards adding to or changing parts of the operating system files. This is a threat both from local physical access to servers and over the network. When there are no local operating system files installed, this threat does not exist.
- Accidental overwriting, deletion or corruption of operating system files. Servers run many processes, and many of these processes have administrative access to be able to do what they need to do. Accidental deletion or overwriting of crucial files on disk will make the server fail to reboot.
- Access control. If there is no local operating system installed, access control, user rights, etc. are unnecessary functions and features and do not have to be implemented.
### How
From this flist image, a small partition is mounted in memory to start booting the machine; it gets iPXE (which downloads what it needs), and then Zero-OS boots.
After that, the node goes to the hub and downloads the different flists.
There is one main flist that triggers the download of multiple flists. Read more [here](../../../../documentation/developers/flist/flist.md).
These contain all the components/daemons that are part of Zero-OS.
The download of the zos-bins, i.e. the external binaries, is also triggered this way (https://hub.grid.tf/tf-zos-bins).
The core components of Zero-OS can be found in the [Zero-OS repo](https://github.com/threefoldtech/zos/tree/master/bins/packages). If something changes in that directory, a workflow is triggered to rebuild the full flist and push it to the hub.
When a node discovers that there is a new version of one of these flists on the hub, it downloads it and restarts the daemon with the new version.
Over its lifetime, the node keeps polling the hub directories to check whether new daemons/flists/binaries are available and whether things need to be upgraded.
### Features
The features of ZOS Boot are:
- no local operating system installed
- network boot from the grid to get on the grid
- decreased administrative and operational work, allowing for autonomous operations
- increased security
- increased efficiency (deduplication, only one version of the OS stored for thousands of servers)
- all server storage space is available for end-user workloads (average operating system size is around 10GB)
- the bootloader is less than 1MB in size and can be presented to the servers as a PXE script, USB boot device or ISO boot image
## Deterministic Deployment
- flists concept (deduplicated virtual filesystem, no install, ...)
The deduplicated flist filesystem uses FUSE, an interface that allows you to implement a filesystem in user space: it is a virtual filesystem.
Only the metadata is exposed. The system sees the full tree of the image, but the data itself is not there; data is downloaded whenever it is accessed.
There are multiple ways to create an flist:
- Convert an existing Docker image hosted on the Docker Hub
- Push an archive (such as a tgz) to the hub
- A library and CLI tool exist to build an flist from scratch: doing it this way, a directory is populated locally, and the flist is then created from it with the CLI tool.
- A [GitHub action](https://github.com/threefoldtech/publish-flist) allows you to build an flist directly from a GitHub workflow, useful for developers on GitHub
Be aware that the flist system works a bit differently from the usual deployment of containers (Docker): it does not mount volumes from your local disk into the container for configuration.
With flists you need to modify your image to take its configuration from the environment. This is basically how Docker was originally intended to be used.
- Smart contract for IT
The smart contract for IT concept is applicable to any workload: containers, VMs, all gateway primitives, volumes, Kubernetes and networks.
It is a static agreement between farmer and user about the deployment of an IT workload:
- no dynamic behavior for deployment at runtime
- no process can start unless its files are 100% described at the flist level
## Zero-OS Protect
- The operating system of the 3Node (Zero-OS) is made to exist in environments without technical know-how present. 3Nodes are made to exist everywhere a network meets a power socket. The OS does not have a login shell and does not allow people to log in with physical access to a keyboard and screen, nor does it allow logins over the network. There is no way the 3Node accepts user-initiated login attempts.
- For certified capacity, a group of known strategic vendors are able to lock the [BIOS](https://en.wikipedia.org/wiki/BIOS) of their server range and make sure no one but them can unlock and change features present in the BIOS. Some vendors have an even higher degree of security: they can store private keys in chips inside the computer to provide unique identification based on private keys, or have mechanisms to check whether the server has been opened or tampered with on its way from the factory/vendor to the farmer. All of this leads to maximum protection at the hardware level.
- 3Nodes boot from a network facility. This means that they do not have locally installed operating system files. They also do not have a local username/password file or database. Viruses and hackers have very little to work with if there are no local files to plant viruses or trojan horses in. The boot facility also provides hashes for the files sent to the booting 3Node, so that the 3Node can check whether it receives the intended file: no more man-in-the-middle attacks.
- The zos_fs provides the same hash and file check mechanism. Every application file presented to a booting container has a hash describing it, and the 3Node on which the container is booting can verify that the received file matches the previously received hash.
- Every deployment of one or more applications starts with the creation of a (private) [znet](../../primitives/network/znet.md). This private overlay network is single tenant and not connected to the public internet. Every application or service that is started in a container in this overlay network is connected to all of the other containers via a point-to-point, encrypted network connection.

View File

@@ -0,0 +1,10 @@
# Zero-OS Advantages
<h2>Table of Contents</h2>
- [Zero-OS Installation](./zero_install.md)
- [Unbreakable Storage](./unbreakable_storage.md)
- [Zero Hacking Surface](./zero_hacking_surface.md)
- [Booting Process](./zero_boot.md)
- [Deterministic Deployment](./deterministic_deployment.md)
- [Zero-OS Protect](./zos_protect.md)

View File

@@ -0,0 +1,39 @@
# ZOS Monitoring
ZOS collects data from deployed solutions and applications and presents the data in a well-known open source monitoring solution called Prometheus.
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project, maintained independently of any company.
For a more elaborate overview of Prometheus, see [here](https://prometheus.io/).
### Features
- a multi-dimensional data model with time series data identified by metric name and key/value pairs
- PromQL, a flexible query language to leverage this dimensionality
- no reliance on distributed storage; single server nodes are autonomous
- time series collection happens via a pull model over HTTP
- pushing time series is supported via an intermediary gateway
- targets are discovered via service discovery or static configuration
- multiple modes of graphing and dashboarding support
### Components
The Prometheus ecosystem consists of multiple components, many of which are optional:
- the main Prometheus server which scrapes and stores time series data
- client libraries for instrumenting application code
- a push gateway for supporting short-lived jobs
- special-purpose exporters for services like HAProxy, StatsD, Graphite, etc.
- an alertmanager to handle alerts
- various support tools
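To illustrate the pull model, here is a tiny standalone `/metrics` endpoint in the Prometheus plain-text exposition format, built with the Python standard library only (the metric name and value are made up for the example); a Prometheus server would simply scrape this URL on a schedule:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading, urllib.request

METRICS = {"zos_deployed_workloads": 3}  # hypothetical metric

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Prometheus pulls metrics over HTTP as plain text:
        # one "<name> <value>" line per sample
        body = "\n".join(f"{k} {v}" for k, v in METRICS.items()) + "\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep the demo quiet

srv = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# act as the scraper once, the way a Prometheus server would
url = f"http://127.0.0.1:{srv.server_address[1]}/metrics"
text = urllib.request.urlopen(url).read().decode()
print(text)
srv.shutdown()
```

In practice an exporter or client library would produce this format (including type and help annotations) rather than hand-rolled strings, but the pull-over-HTTP shape is the same.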
### Roadmap
- ONLY for OEM partners today
!!!def alias:zos_monitoring
!!!include:zos_toc

View File

@@ -0,0 +1,7 @@
# ZOS Protect
- The operating system of the 3Node (Zero-OS) is made to exist in environments without technical know-how present. 3Nodes are made to exist everywhere a network meets a power socket. The OS does not have a login shell and does not allow people to log in with physical access to a keyboard and screen, nor does it allow logins over the network. There is no way the 3Node accepts user-initiated login attempts.
- For certified capacity, a group of known strategic vendors are able to lock the [BIOS](https://en.wikipedia.org/wiki/BIOS) of their server range and make sure no one but them can unlock and change features present in the BIOS. Some vendors have an even higher degree of security: they can store private keys in chips inside the computer to provide unique identification based on private keys, or have mechanisms to check whether the server has been opened or tampered with on its way from the factory/vendor to the farmer. All of this leads to maximum protection at the hardware level.
- 3Nodes boot from a network facility. This means that they do not have locally installed operating system files. They also do not have a local username/password file or database. Viruses and hackers have very little to work with if there are no local files to plant viruses or trojan horses in. The boot facility also provides hashes for the files sent to the booting 3Node, so that the 3Node can check whether it receives the intended file: no more man-in-the-middle attacks.
- The zos_fs provides the same hash and file check mechanism. Every application file presented to a booting container has a hash describing it, and the 3Node on which the container is booting can verify that the received file matches the previously received hash.
- Every deployment of one or more applications starts with the creation of a (private) [znet](../../primitives/network/znet.md). This private overlay network is single tenant and not connected to the public internet. Every application or service that is started in a container in this overlay network is connected to all of the other containers via a point-to-point, encrypted network connection.
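The hash check used for boot and application files amounts to the following (a generic sketch, not the actual Zero-OS code):

```python
import hashlib

def verify_file(blob: bytes, announced_hash: str) -> bool:
    """Accept a boot or application file only if its content hashes to
    the value announced ahead of time by the boot facility / flist."""
    return hashlib.sha256(blob).hexdigest() == announced_hash

payload = b"kernel-or-flist-bytes"
announced = hashlib.sha256(payload).hexdigest()  # shipped with the file list

print(verify_file(payload, announced))         # intact file is accepted
print(verify_file(payload + b"!", announced))  # any tampering is detected
```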


View File

@@ -0,0 +1,44 @@
![](img/zos22.png)
# Zero-OS
![](img/zero_os_overview.jpg)
!!!include:whatis_zos
### Imagine an operating system with the following benefits
- up to 10x more efficient for certain workloads (e.g. storage)
- no install required
- all files are deduplicated for the VMs, the containers and ZOS itself; no more duplicated filesystems
- the hacking footprint is super small, which leads to much safer systems
- every file is fingerprinted and gets checked at launch time of an application
- there is no shell or server interface on the operating system
- the networks are end-to-end encrypted between all nodes
- there is the possibility to completely disconnect the compute/storage from the network service part, which means hackers have much less chance to get to the data
- a smart contract for IT layer allows groups of people to deploy IT workloads with consensus and full control
- all workloads which can run on Linux can run on Zero-OS, but in a much more controlled, private and safe way
> ThreeFold has created an operating system from scratch. We used the Linux kernel and its components and built further on them; this way, we have been able to achieve all of the above benefits.
## The requirements for our TFGrid based on Zero-OS are:
- **Autonomy**: The TF Grid needs to create compute, storage and networking capacity everywhere. We could not rely on remote (or local) maintenance of the operating system by owners or operating system administrators.
- **Simplicity**: An operating system should be simple, able to exist anywhere, for anyone, and good for the planet.
- **Statelessness**: In a grid (peer-to-peer) setup, the sum of the components provides a stable basis that lets single elements fail without bringing the whole system down. Therefore single elements need to be stateless, and their state needs to be stored within the grid.
<!-- !!!include:zos_toc -->
!!!def alias:zos,zero-os,threefold_operating_system,tf_os,threefold_os
<!--
### Properties of Zero-OS
ZOS is a very lightweight and efficient operating system. It supports a small number of _primitives_; the low-level functions it could perform natively in the operating system.
There is no shell, local nor remote.
It does not allow for inbound network connections to happen. -->

View File

@@ -0,0 +1,6 @@
## ZOS compute storage overview
![](img/zos_overview_compute_storage.jpg)
!!!include:zos_toc

View File

@@ -0,0 +1 @@
# Zero-OS Install

View File

@@ -0,0 +1,21 @@
### Zero OS install
The Zero-OS is delivered to the 3Nodes over the internet network (network boot) and does not need to be installed.
# Zero-OS Install Mechanism
## Stateless Install
1. Acquire a computer (server).
2. Configure a farm on the TFGrid explorer.
3. Download the bootloader and put it on a USB stick, or configure a network boot device.
4. Power on the computer and connect to the internet.
5. Boot! The computer will automatically download the components of the operating system (Zero-OS).
The actual bootloader is very small. It brings up the network interface of your computer and queries TFGrid for the remainder of the boot files needed.
The operating system is not installed on any local storage medium (hard disk, ssd). Zero-OS is stateless.
The mechanism to allow this to work in a safe and efficient manner is a ThreeFold innovation called our container virtual filesystem. This is explained in more detail [here](flist).
!!!def alias:zero_os_install

View File

@@ -0,0 +1,6 @@
# ZOS network overview
![](img/zos_network_overview.jpg)
!!!include:zos_toc