update tech book for docusaurus #2

Merged
mik-tf merged 1 commit from development_techbook into main 2025-01-16 22:05:32 +00:00
135 changed files with 1756 additions and 0 deletions
Showing only changes of commit 3d1894e02b

View File

@ -0,0 +1,7 @@
{
  "label": "Architecture",
  "position": 4,
  "link": {
    "type": "generated-index"
  }
}

View File

@ -0,0 +1,14 @@
---
sidebar_position: 1
---
# Base Layer for Many Use Cases
![](../img/usecases_tfgrid.png)
Our Decentralized Cloud Technology is the ideal platform for hosting any web3 and AI workload.
Our Zero-OS operating system also supports integrated GPUs, ensuring optimal performance for decentralized AI applications.
> Any workload (web2/3 and AI) can run on our Decentralized Cloud.

View File

@ -0,0 +1,27 @@
---
sidebar_position: 2
---
# Architecture Cloud Engine
![](../img/cloud_engine.jpg)
The 3 Nodes form the base layer, providing compute, storage, and network capabilities.
The quantum safe network enables all IT workloads to communicate with one another using the most efficient route and the strongest security measures.
![](../img/cloudengine_architecture.png)
The storage layer makes sure we all have full control over our storage and can never lose our information.
## Autonomous Deployments
![](../img/autonomous_workloads.png)
The Heroes can deploy and manage IT workloads on our behalf.

View File

@ -0,0 +1,32 @@
---
sidebar_position: 4
---
# Hero as Virtual System Administrator
![](../img/virtual_sysadmin.jpg)
![](../img/3bot_virtualsysadmin.png)
Every individual has a personal Hero—a virtual assistant that manages your digital life and serves as your system administrator for AI or Edge Cloud workloads.
These Heroes communicate with each other over the private Mycelium network, which offers a secure message bus for scalable and secure communication.
A Hero can store an unlimited amount of information, and only your Hero has access to this data, deciding how and when it can be shared with other Heroes or AI systems.
Active 24/7, your Hero continuously monitors your IT infrastructure. If something goes wrong, the Hero will automatically resolve the issue, ensuring smooth and uninterrupted service.
### Natural Evolution
![](../img/hero_archit.png)
We believe this represents the natural evolution away from reliance on centralized services.
With millions of Heroes working together, a collective global intelligence can form, leveraging various tools to interact with existing services on behalf of their users, all within the safety of our own sovereignty.

View File

@ -0,0 +1,64 @@
---
sidebar_position: 3
---
# Architecture for an Upgraded Internet
![](../img/architecture.png)
- **3Nodes**: Deliver compute, storage, and GPU capacity.
- **Mycelium Routers**: Allow all Mycelium Network participants to communicate with each other and also connect over existing Internet links. Mycelium Routers provide bandwidth to our ecosystem.
- **WebGateways**: Provide a bridge between the current Internet and the Mycelium Network.
- **Hero 3Bots**: Represent our digital lives and possess the knowledge to act as virtual system administrators, ensuring our IT workloads remain operational.
- **Users**: Arrange their digital lives through their Hero 3Bots.
- **AI Clouds**: Are created by connecting GPUs from the 3Nodes over the Mycelium Network.
## 3Nodes
Each 3Node provides compute, storage and network capacity; it is the core capacity layer of the cloud.
![](../img/3node.png)
A cloud needs hardware/servers to function. Servers of all shapes and sizes can be added. The production of Cloud Capacity is called Farming and parties who add these servers to the grid are called Farmers.
Farmers download the Zero-OS operating system and boot their servers. Once booted, these servers become 3Nodes. The 3Nodes register themselves in a blockchain. Once registered, the capacity of the 3Nodes becomes available. This enables a peer-to-peer environment in which people or companies reserve Internet capacity directly from the hardware, while still allowing full control by commercial parties where required.
Each 3Node is running our Zero-OS operating system.
## Mycelium Routers
Mycelium is an end-to-end encrypted overlay meshed wireless network with agents available for any desktop and mobile operating system.
We have also created a dedicated Mycelium Router. Mycelium Routers seamlessly integrate with our Mycelium network technology, efficiently selecting the shortest path between all participants.
These Mycelium Routers are compatible not only with satellite and Wi-Fi but also with 4G and 5G networks, ensuring versatile connectivity options.
The Mycelium Routers can be installed in locations with substantial network capacity, allowing everyone to bridge between the current Internet and the overlay Mycelium network.
## Web Gateways
The Web Gateway serves as a mechanism to connect the private (overlay) networks (Mycelium) to the open Internet.
Because it does not provide an open and direct path into the private network, the Web Gateway stops many malicious phishing and hacking attempts at the gateway level for container applications.
The Web Gateways provide HTTP(S) and, in the future, other web services, which get forwarded to the relevant service exposing itself over Mycelium. This setup offers multiple access points to various backend services.
## TFChain: Our Blockchain
This blockchain does the following:
- registry for all 3bots (identity system, aka phonebook)
- registry for all farmers & 3nodes
- registry for our reputation system
- info as required for the Smart Contract for IT
This is the heart of the operational system of our decentralized cloud.
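To make the data model concrete, here is a minimal sketch of how these registries could be represented. The class and field names are illustrative assumptions, not the actual TFChain pallets or their schemas.
```python
# Illustrative only: assumed names and fields, not the real TFChain data structures.
# The reputation registry and Smart Contract for IT records are omitted for brevity.
from dataclasses import dataclass

@dataclass
class Twin:
    """A 3bot identity entry, i.e. the 'phonebook' record."""
    twin_id: int
    public_key: str
    mycelium_address: str

@dataclass
class Farm:
    """A farmer's registration, grouping one or more 3Nodes."""
    farm_id: int
    owner_twin_id: int
    name: str

@dataclass
class Node:
    """A 3Node registered by a farmer, advertising its capacity."""
    node_id: int
    farm_id: int
    cpu_cores: int
    storage_gb: int
```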
## Ultra Scalable
![](../img/architecture_scalable.png)
This architecture scales to the planet.

View File

@ -0,0 +1,7 @@
{
  "label": "The Cloud Re-Invented",
  "position": 3,
  "link": {
    "type": "generated-index"
  }
}

View File

@ -0,0 +1,62 @@
---
title: Cloud Beyond Cost
sidebar_position: 3
---
![](../img/farming_pools.jpg)
# The Cloud Beyond Cost: A Business Perspective
## Introduction
The transition to cloud computing isn't solely driven by cost savings.
While affordability is a factor, the primary appeal of cloud services lies in their reliability, scalability, and performance.
This parallels the concept of insurance, where customers are willing to pay a premium for the assurance of superior service and uptime.
In the decentralized infrastructure (DePIN) space, establishing trust and reliability is critically important, yet current offerings do not provide enough of it.
## Cost vs. Quality
### Reliability and Uptime
Customers often prefer to pay a higher price for cloud services that guarantee better uptime and reliability.
For example, paying $10 per terabyte for a service with stellar uptime and customer support is more appealing than a $5 per terabyte service with frequent downtimes and minimal support. The value proposition lies in uninterrupted access to data and services, which is critical for business operations.
### Service Level Agreements (SLAs)
Service Level Agreements (SLAs) play a crucial role in this decision-making process.
SLAs provide a clear, contractual guarantee of the service levels customers can expect. Companies offering robust SLAs with high uptime guarantees, fast response times, and comprehensive support are more attractive, even at a higher cost.
## Decentralized Infrastructure (DePIN) and Trust
### Challenges in DePIN
One of the significant challenges in DePIN is ensuring that customers can trust decentralized hosters.
Unlike traditional centralized providers, decentralized hosters need to establish credibility and reliability in a market that is inherently less controlled.
### Farming Pools and Legal Decentralization
#### Formation of Farming Pools
Hosters can group together in farming pools, creating a collaborative environment where resources and responsibilities are shared. These pools can offer combined storage and computational power, enhancing overall performance and reliability.
#### Security Token Offerings (STOs)
To fund these farming pools, we can utilize Security Token Offerings (STOs). Investors can buy tokens, representing shares in these pools, thus promoting legal decentralization. This model not only democratizes investment but also ensures that the operations of these pools are transparent and regulated.
### Transparency and Legal Assurance
#### Open SLAs
Each farming pool must be explicit about the SLAs they can deliver. Transparency in service commitments ensures that customers know what performance and reliability to expect. This openness is critical in building trust in a decentralized environment.
#### Legal Blockchain Contracts
Blockchain contracts must be legally binding in the real world. These contracts should clearly define the terms of service, dispute resolution mechanisms, and compliance with relevant regulations. Legal enforceability ensures that customers can trust the decentralized hosters to honor their commitments.

View File

@ -0,0 +1,32 @@
---
title: A New Cloud Engine
sidebar_position: 1
---
# A New Cloud Engine
## Requirements for a New Cloud Engine
![alt text](../img/requirements.png)
- Compute, Storage, Network need to be
  - Local
  - Sovereign
  - Private
  - More Secure
- Storage needs to be
  - More reliable with less overhead (only 20% overhead needed)
  - Capable of being global and used as a CDN (Content Delivery Network)
  - Fast enough for the use case at hand
  - Data can never be lost or corrupted
  - Able to scale to zettabytes as easily as petabytes
- Network needs to be
  - Working no matter what happens with the existing network, routing around issues
  - Locality aware (chooses the shortest path)
  - End-to-end encrypted
  - Capable of verifying where information goes to or comes from (authenticity)
- The full system needs to be
  - Autonomous & self-healing
  - Able to operate without human intervention
  - Green
    - We believe Internet / Cloud can be delivered using at least 10x less energy.

View File

@ -0,0 +1,60 @@
---
title: Internet Re-Invented
sidebar_position: 2
---
# The Internet's Natural Progression
The Internet was always meant to be a peer-to-peer infrastructure.
As large companies became profit and data centric, centralization quickly became the norm.
***We have a vision of the Internet which is much closer to how the Internet was intended to be.***
## Requirements For A New Internet
- Compute, Storage, Network need to be
  - Local
  - Sovereign
  - Private
  - More Secure
- Storage needs to be
  - More reliable with less overhead
  - Capable of being global and used as a Content Delivery Network (CDN)
  - Fast enough for the use case at hand
- Network needs to be
  - Working no matter what happens with the existing network, routing around issues
  - Locality aware (chooses the shortest path)
  - End-to-end encrypted
  - Capable of verifying where information goes to or comes from (authenticity)
## Internet/Cloud Architecture
!!wiki.include page:'internet_archtecture0.md'
## Base Layer for a New Cloud / Internet
![](../img/base_layer.png)
We need a new cloud engine which supports the evolution of the Internet.
## Natural Progression
![alt text](../img/natural_progression.png)
We envision a world where every person is at the center of their digital life. In this new Internet, each person has their own digital avatar, which we call a ***Hero***.
The technical backbone enabling the Hero is a component known as the 3Bot. This server, owned and managed by you, operates on our decentralized cloud infrastructure.
Communication between 3Bots is optimized to use the shortest possible paths, ensuring that all interactions are end-to-end encrypted for maximum security and privacy.
## 3Bot Architecture
![alt text](../img/arch_minimal.png)
The underlying network of capacity is the decentralized cloud, which is like the basic IT energy that makes all of this possible.
The decentralized cloud is the result of more than 20 years of development and is now active on more than 2000 nodes.

View File

@ -0,0 +1,14 @@
---
title: World Records
sidebar_position: 4
---
## World Records
Our team has been working on re-inventing layers of the Internet for more than 30 years. Along the way, this has resulted in several world records and innovative products.
Here is an overview of those achievements:
![](../img/world_records.png)

View File

@ -0,0 +1,7 @@
{
  "label": "Core Features",
  "position": 7,
  "link": {
    "type": "generated-index"
  }
}

View File

@ -0,0 +1,7 @@
{
  "label": "Compute",
  "position": 2,
  "link": {
    "type": "generated-index"
  }
}

View File

@ -0,0 +1,24 @@
---
sidebar_position: 1
title: Compute Layer
---
# Compute Layer
![](../../img/zos_compute.png)
Default features:
- Compatible with Docker
- Compatible with any VM (Virtual Machine)
- Compatible with any Linux workload
- Integrated unique storage & network primitives
- Integrated smart contract for IT layer
We have the following unique advantages:
- No need to work with images, we work with our unique ZOS FS
- Every container runs in a dedicated virtual machine providing more security
- The containers talk to each other over a private network (Mycelium)
- The containers can use a web gateway to allow internet users to connect to the applications which are running in their secure containers
- Can use core-x to manage the workload

View File

@ -0,0 +1,39 @@
---
sidebar_position: 3
title: Zero-Deploy
---
## Deterministic Deployment
The concept of Zero-Deploy is a key component of the **Smart Contract for IT** framework, which can be applied to any type of workload—whether it's containers, virtual machines (VMs), network gateways, volumes, Kubernetes resources, or other network elements. This framework serves as a formal agreement between a farmer (provider) and a user regarding the deployment of an IT workload.
### Process
1. **Build Your Code**
Develop and prepare your application code.
2. **Convert to Zero-Image**
Use a CI/CD solution (e.g., Hero CI/CD) to convert your Docker build (or other format) into a Zero-Image format.
3. **Define the Workload**
Specify all the details of your workload, including network bridges, web gateways, required machines, and more.
4. **Register and Sign**
Register the workload and sign it with your private key.
5. **Automatic Detection**
All necessary Zero-OS nodes (our infrastructure) will detect that a new workload needs to be deployed.
6. **Deployment Process**
The nodes will pull down the formal workload descriptions and initiate the deployment process.
7. **Validation**
Every step of the deployment is verified by Zero-OS (ZOS) to ensure that the intended result is accurately replicated. If any discrepancies are detected, ZOS will halt the deployment and provide an error message.
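As a rough illustration of steps 3 and 4 (and the validation in step 7), here is a conceptual sketch in Python. The field names, the flist reference and the hashing/signing flow are assumptions for illustration, not the actual Zero-OS workload schema or TFChain API.
```python
# Conceptual sketch only: hypothetical workload fields and a deterministic fingerprint.
import hashlib
import json

workload = {
    "zero_image": "flist://example/my-app.flist",   # hypothetical image reference
    "machines": [{"cpu": 2, "memory_mb": 2048}],
    "network": {"mycelium": True, "web_gateway": "my-app.example.com"},
    "volumes": [{"size_gb": 50}],
}

# Serialize with sorted keys so every node that re-serializes the description
# computes exactly the same hash: no dynamic behavior, a deterministic outcome.
canonical = json.dumps(workload, sort_keys=True, separators=(",", ":")).encode()
workload_hash = hashlib.sha256(canonical).hexdigest()

# In a real deployment this fingerprint would be signed with the user's private
# key and registered on chain; nodes pull the description, recompute the hash,
# and refuse to deploy if anything differs (the validation in step 7).
print("workload fingerprint:", workload_hash)
```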
### Benefits
- **Deterministic Deployment**: There is no dynamic behavior during deployment at runtime, ensuring a consistent and predictable outcome.
- **Strict Compliance**: No process can start unless all files and configurations are fully described at the flist level.

View File

@ -0,0 +1,35 @@
---
sidebar_position: 4
title: Zero-Install
---
## Zero-Install
![](../../img/boot.png)
Zero-OS is delivered to the 3Nodes over the Internet (network boot) and does not need to be installed.
### 3Node Install
1. Deploy a computer
2. Configure a farm on the TFGrid explorer
3. Download the bootloader and put on a USB stick or configure a network boot device
4. Power on the computer and connect to the internet
5. Boot! The computer will automatically download the components of the operating system (Zero-OS)
The actual bootloader is very small: it brings up the network interface of your computer and queries TFGrid for the remainder of the boot files needed.
The operating system is not installed on any local storage medium (hard disk, SSD); Zero-OS is stateless.
The mechanism to allow this to work in a safe and efficient manner is an innovation called our container virtual filesystem.
### Process
- optionally: configure booting from secure BIOS
- optionally: install signing certificate in the BIOS, to make sure that only the right bootloader can be started
- the bootloader (ISO, PXE, USB, ...) gets downloaded from the Internet (TFGrid CDN or private deployment)
- core-0 (the first boot process) starts, self verification happens
- the metadata for the required software modules is downloaded and checked against signatures and hashes
- the core-0 zero_image service
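To illustrate the signature-and-hash check mentioned above, here is a minimal sketch. The metadata layout and module name are hypothetical; the real core-0 flow is more involved.
```python
# Minimal illustration (assumed flow, not the actual core-0 implementation):
# every downloaded boot component is compared against a fingerprint shipped
# in signed metadata before it is used.
import hashlib

def verify_component(blob: bytes, expected_sha256: str) -> bool:
    """Return True only if the downloaded blob matches the signed fingerprint."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256

# Hypothetical signed metadata: module name -> expected fingerprint.
metadata = {
    "zero_image_service": hashlib.sha256(b"module-bytes").hexdigest(),
}

downloaded = b"module-bytes"                 # bytes fetched over the network
if not verify_component(downloaded, metadata["zero_image_service"]):
    raise RuntimeError("boot aborted: component fingerprint mismatch")
```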

View File

@ -0,0 +1,46 @@
---
sidebar_position: 2
title: Zero-OS
---
# Zero-OS
![](../../img/zos23.png)
A revolutionary operating system which can be booted on most modern computers. Once booted, Zero-OS locks the hardware and makes it accessible to the decentralized marketplace or to a centralized, ultra-secure deployment system. Blockchain mechanisms can be used to strongly control how workloads are deployed on the system.
![](../../img/zos_overview.png)
## ZOS Compute & Storage Overview
![](../../img/zos_overview_compute_storage.jpg)
## ZOS Network Overview
![](../../img/zos_network_overview.jpg)
### Imagine An Operating System With The Following Benefits
- Up to 10x more efficient for certain workloads (e.g. storage)
- No install required
- All files are deduplicated for the VMs, containers and ZOS itself; no more duplicated filesystems
- The hacking footprint is very small which leads to much safer systems
- Every file is fingerprinted and gets checked at launch time of an application
- There is no shell or server interface on the operating system
- The networks are end2end encrypted between all Nodes
- It is possible to completely disconnect the compute/storage from the network service part which means hackers have a lot less chance to access the data
- A smart contract for the IT layer allows groups of people to deploy IT workloads with consensus and full control
- All workloads which can run on Linux can run on Zero-OS, but in a much more controlled, private and safe way
> We have created an operating system from scratch. We used the Linux kernel and its components and then built further on it. We have been able to achieve all of the above benefits.
## Requirements
- **Autonomy**: TF Grid needs to create compute, storage and networking capacity everywhere. We cannot rely on remote (or local) maintenance of the operating system by owners or system administrators.
- **Simplicity**: An operating system should be simple, able to exist anywhere for anyone, and be good for the planet.
- **Stateless**: In a grid (peer-to-peer) set up, the sum of the components provides a stable basis for single elements to fail and not bring the whole system down. Therefore, it is necessary for single elements to be stateless, and the state needs to be stored within the grid.

View File

@ -0,0 +1,34 @@
---
sidebar_position: 5
title: ZKube
---
# ZKube
TFGrid is compatible with Kubernetes Technology.
![](../../img/kubernetes_0.jpg)
### Unique for our Kubernetes implementation
- The Kubernetes networks run on top of our Mycelium technology, which means all traffic between containers and Kubernetes hosts is end-to-end encrypted, independent of where your Kubernetes nodes are deployed.
- You can mount a Quantum Safe Storage System underneath a Kubernetes Node (VM), which means that you can deploy containers on top of QSFS to host unlimited amounts of storage in a super safe way.
- Your Kubernetes environment is 100% decentralized: you define where you want to deploy your Kubernetes nodes, and only you have access to the deployed workloads on the TFGrid.
### Features
* integration with znet (efficient, secure encrypted network between the zero_vms)
* can be easily deployed at the edge
* single-tenant!
<!--
### Zero VM Benefits
* [ZOS Protect](zos_protect): no hacking surface to the Zero-Nodes, integrate silicon route of trust
* [ZNet](znet) and [Planetary Net](planetary_network): a true global single backplane network connecting us all -->
### Architecture
![](../../img/zkube_architecture.jpg)

docs/features/features.md Normal file
View File

@ -0,0 +1,44 @@
---
title: Decentralized Cloud Technology
sidebar_position: 1
---
# Decentralized Cloud Technology Features
We present the highlights of our decentralized cloud technology that solves many of the current IT and cloud challenges.
![](../img/cloud_features.png)
## Zero-OS as a Generator for Compute, Storage and Network Capacity
### Compute (uses CU)
- ZKube
  - Kubernetes deployment
- Zero VM
  - the container or virtual machine running inside ZOS
- CoreX
  - process manager (optional), can be used to get remote access to your zero_vm

A 3Node is a Zero-OS enabled computer which is hosted with any of the Cloud Providers.
### There are 4 storage mechanisms which can be used to store your data:
- ZOS FS
  - our unique deduplicating filesystem, which replaces Docker images.
- ZOS Mount
  - a mounted disk location on SSD, which can be used as a faster storage location.
- Quantum Safe Filesystem
  - a super unique storage system in which data can never be lost or corrupted. Please be reminded that this storage layer is only meant to be used for secondary storage applications.
- ZOS Disk
  - a virtual disk technology, only for TFTech OEM partners.
### Networks can be connected to a Z-Machine in the following ways:
- Mycelium (Planetary Network)
  - a planetary-scale network, with clients for Windows, macOS, Android and iOS.
- ZOS NIC
  - a connection to a public IP address
- WEB GW
  - a web gateway, a secure way to allow Internet traffic to reach your secure Z-Machine.

View File

@ -0,0 +1,7 @@
{
  "label": "Network",
  "position": 4,
  "link": {
    "type": "generated-index"
  }
}

View File

@ -0,0 +1,16 @@
---
sidebar_position: 2
title: Mycelium
description: Our Planetary Network
---
# Mycelium: Our Planetary Network
![alt text](../../img/mycelium.png)
The planetary network called Mycelium is an overlay network which lives on top of the existing Internet or other peer-to-peer networks.
In the Mycelium network, everyone is connected to everyone. Traffic between users of an app and the app itself is end-to-end encrypted, and the app runs behind the network wall.
!!wiki.include page:'tech:mycelium0.md'

View File

@ -0,0 +1,35 @@
---
sidebar_position: 1
---
# Network Technology Overview
Our decentralized networking platform allows any compute and storage workload to be connected together on a private (overlay) network and exposed to the existing Internet network. The peer-to-peer network platform allows any workload to be connected over secure encrypted networks, which will look for the shortest path between nodes.
### Secure Mesh Overlay Network (Peer-to-Peer)
ZNet is the foundation of any architecture running on the TF Grid. It can be seen as a virtual private data center: the network allows each of the *N* containers to connect to all of the other *(N-1)* containers. Any network connection is a secure connection between your containers; it creates a peer-to-peer network between containers.
![alt text](../../img/net1.png)
No connection is made with the Internet. The ZNet is a single tenant network and by default not connected to the public Internet. Everything stays private. For connecting to the public Internet, a Web Gateway is included in the product to allow for public access, if and when required.
### Redundancy
As integrated with the Web Gateway:
![alt text](../../img/net2.png)
- Any app can get (securely) connected to the Internet through any chosen IP address made available by ThreeFold network farmers through web gateways
- An app can be connected to multiple web gateways at once; the DNS round-robin principle provides load balancing and redundancy (a client-side sketch follows below)
- An easy clustering mechanism where web gateways and nodes can be lost and the public service will still be up and running
- Easy maintenance. When containers are moved or re-created, the same end user connection can be reused as that connection is terminated on the Web Gateway. The moved or newly created Web Gateway will recreate the socket to the Web Gateway and receive inbound traffic.
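The round-robin redundancy described in the list above can be sketched from the client side as follows. The gateway addresses are placeholders, not real ThreeFold endpoints.
```python
# A rough sketch of client-side failover across multiple web gateways.
# The addresses below are documentation placeholders (TEST-NET-2), not real gateway IPs.
import random
import socket

GATEWAYS = ["198.51.100.10", "198.51.100.20", "198.51.100.30"]

def connect_to_app(port: int = 443, timeout: float = 3.0) -> socket.socket:
    """Try the gateways in a random (round-robin-like) order until one answers."""
    candidates = GATEWAYS[:]
    random.shuffle(candidates)
    for ip in candidates:
        try:
            return socket.create_connection((ip, port), timeout=timeout)
        except OSError:
            continue   # this gateway is down: fall through to the next one
    raise ConnectionError("all web gateways unreachable")
```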
### Network Wall
![](../../img/network_wall.png)
For OEM projects we can implement a cloud deployment without using TCP/IP or Ethernet. This can lead to super secure environments, ideal to battle the Cyber Pandemic.

View File

@ -0,0 +1,43 @@
---
sidebar_position: 3
title: Web Gateway
---
# Web Gateway
The Web Gateway is a mechanism to connect private networks to the open Internet in such a way that there is no direct connection between the Internet and the secure workloads running in the Zero VMs.
![](../../img/webgateway.jpg)
### Key Benefits
- Separation between where compute workloads are and where services are exposed
- Redundancy: Each app can be exposed on multiple web gateways at once
- Support for many interfaces
- Helps resolve shortage of IPv4 addresses
### Implementation
Some 3Nodes support gateway functionality (this is configured by the farmers). A 3Node with gateway configuration can then accept gateway workloads and forward traffic to Zero VMs that only have Planetary Network or IPv6 addresses.
A gateway workload consists of a name (prefix), which first needs to be reserved on the blockchain, and a list of backend IPs. Other flags can be set to control automatic TLS (please check the Terraform documentation for the exact details of a reservation).
Once the 3Node receives this workload, the network configures a proxy for this name and the Planetary Network IPs.
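As a rough sketch of what such a reservation carries (name prefix, backend addresses, TLS behavior), consider the following. The field names and values are assumptions for illustration; consult the Terraform provider documentation for the real schema.
```python
# Hypothetical shape of a gateway reservation, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GatewayWorkload:
    name: str                     # prefix, reserved on the blockchain first
    backends: list[str]           # Planetary Network / IPv6 addresses of the Zero VMs
    tls_passthrough: bool = False # let the backend terminate TLS itself
    node_id: Optional[int] = None # gateway-capable 3Node chosen to run the proxy

gw = GatewayWorkload(
    name="myapp",
    backends=["[2001:db8::dead:beef]:8080"],  # example (documentation-range) backend
    tls_passthrough=False,
)
print(gw)
```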
### Security
Zero VMs have to have a Planetary Network IP or any other IPv6 (IPv4 is also accepted). This means that any person connected to the Planetary Network can also reach the Zero VM without the need for a proxy.
So it's up to the Zero VM owner/maintainer to make sure it is secured and that only the required ports are open.
### Redundant Network Connection
![](../../img/redundant_net.jpg)
### Unlimited Scale
![](../../img/webgw_scaling.jpg)
The network architecture is a pure scale-out network system. It can scale to unlimited size, there is simply no bottleneck. Network "supply" is created by network farmers, and network "demand" is done by TF Grid users.
Supply and demand scale independently. For supply, there can be unlimited network farmers providing web gateways on their own 3Nodes, and unlimited compute farmers providing 3Nodes for compute and storage. The demand side is driven by developers creating software that runs on the grid, system integrators creating solutions for enterprises, and so on. Globally, there is exponentially-growing demand for data processing and storage use cases.

View File

@ -0,0 +1,7 @@
{
  "label": "Storage",
  "position": 2,
  "link": {
    "type": "generated-index"
  }
}

View File

@ -0,0 +1,92 @@
---
sidebar_position: 4
title: NFT Storage
---
# Quantum Safe Storage System for NFTs
Our technology enables quantum safe storage for NFTs.
![](../../img/nft_architecture_updated.png)
The owner of the NFT can upload the data using one of our supported interfaces:
- HTTP upload (everything possible on https://nft.storage/ is also possible on our system)
- Filesystem
Anyone in the world can retrieve the NFT (if allowed) and the data will be verified when doing so. The data is available anywhere in the world using multiple interfaces again (IPFS, HTTP(S) etc.). Caching happens on a global level. No special software or account on ThreeFold is needed to do this.
The NFT system operates on top of a very reliable storage system which is sustainable for the planet and ultra secure and private. The NFT owner also owns the data.
## The Benefits
#### Persistence = Owned by the data user, as represented by their associated 3Bot
The system is not based on a shared-all architecture.
Whoever stores the data has full control over:
- Where data is stored (specific locations)
- The redundancy policy which is used
- How long the data is kept
- CDN policy (where the data is available and for how long)
#### Reliability
- Data cannot be corrupted
- Data cannot be lost
- Each time data is fetched back the hash (fingerprint) is checked. If there are any issues then autorecovery occurs
- All data is encrypted and compressed (unique per storage owner)
- Data owner chooses the level of redundancy
#### Lookup
- Multi URL & storage network support (see more in the interfaces section)
- IPFS, HyperDrive URL schema
- Unique DNS schema (with long key which is globally unique)
#### CDN Support
Each file (movie, image etc.) stored is available in many locations worldwide.
Each file gets a unique url pointing to the data which can be retrieved from all these locations.
Caching happens at each endpoint.
#### Self Healing & Auto Correcting Storage Interface
Any corruption e.g. bitrot gets automatically detected and corrected.
In case of a HD crash or storage node crash the data will automatically be expanded again to fit the chosen redundancy policy.
#### The Storage Algorithm Uses the Quantum Safe Storage System as Its Base
Not even a quantum computer can hack data stored on our QSSS.
The QSSS is a super innovative storage system which works on planetary scale and has many benefits compared to shared and/or replicated storage systems.
It uses forward-looking error-correcting codes inside.
#### Green
Storage uses up to 10x less energy compared to classic replicated systems.
#### Multi Interface
The stored data is available over multiple interfaces at once.
- Interfaces
- IPFS
- HTTP(S) on top of 3Bot
- Syncthing
- Filesystem
This allows ultimate flexibility from the end user perspective.
The object (video, image etc.) can easily be embedded in any website or other representation which supports http.

View File

@ -0,0 +1,118 @@
---
sidebar_position: 2
---
# Quantum Safe Storage Algorithm
The Quantum Safe Storage Algorithm is the heart of the storage engine. The storage engine takes the original data objects and creates data part descriptions that it stores over many virtual storage devices (ZDBs).
Data gets stored over multiple ZDBs in such a way that data can never be lost.
Unique features:
- Data is always appended and can never be lost
- Even a quantum computer cannot decrypt the data
- Data is spread over multiple sites. Even if some of these sites are lost, the data will still be available
- Protects from datarot
## The Problem
Today we produce more data than ever before. We cannot continue to make full copies of data to make sure it is stored reliably. This will simply not scale. We need to move from securing the whole dataset to securing all the objects that make up a dataset.
We are using technology which was originally used for communication in space.
The algorithm stores data fragments over multiple devices (physical storage devices).
The solution is not based on replication or sharding; the algorithm represents the data as equations which are distributed over multiple locations.
## How Data Is Stored Today
![alt text](../../img/storage_today.png)
In most distributed systems, as used on the Internet or in blockchain today, the data will get replicated (sometimes after sharding, which means distributed based on the content of the file and spread out over the world).
This leads to a lot of overhead and minimal control over where the data is.
In well-optimized systems the overhead will be 400%, but in some it can be orders of magnitude higher to get to a reasonable redundancy level.
## The Quantum Safe Storage System Works Differently
![alt text](../../img/qsss_overview.png)
We have developed a new storage algorithm which is more efficient, ultra reliable and gives you full control over where your data is stored.
Our approach is different. Let's try to visualize this new approach with a simple analogy using equations.
Let a,b,c,d.... be the parts of the original object. You could create endless unique equations using these parts. A simple example: let's assume we have 3 parts of original objects that have the following values:
```
a=1
b=2
c=3
```
(and for reference the part of the real-world objects is not a simple number like `1` but a unique digital number describing the part, like the binary code for it `110101011101011101010111101110111100001010101111011.....`).
With these numbers we could create endless amounts of equations:
```
1: a+b+c=6
2: c-b-a=0
3: b-c+a=0
4: 2b+a-c=2
5: 5c-b-a=12
etc.
```
Mathematically we only need 3 to describe the content (value) of the fragments. But creating more adds reliability. Now store those equations distributed (one equation per physical storage device) and forget the original object. So we no longer have access to the values of a, b, c and we just remember the locations of all the equations created with the original data fragments.
Mathematically we need three equations (any 3 of the total) to recover the original values for a, b or c. So do a request to retrieve 3 of the many equations and the first 3 to arrive are good enough to recalculate the original values. Three randomly retrieved equations are:
```
5c-b-a=12
b-c+a=0
2b+a-c=2
```
And this is a mathematical system we could solve:
- First: `b-c+a=0 -> b=c-a`
- Second: `2b+a-c=2 -> c=2b+a-2 -> c=2(c-a)+a-2 -> c=2c-2a+a-2 -> c=a+2`
- Third: `5c-b-a=12 -> 5(a+2)-(c-a)-a=12 -> 5a+10-(a+2)+a-a=12 -> 5a-a-2=2 -> 4a=4 -> a=1`
Now that we know `a=1` we can solve the rest: `c=a+2=3` and `b=c-a=2`. From 3 random equations we have thus regenerated the original fragments and can now recreate the original object.
The redundancy and reliability in this system result from creating more equations than needed and storing them. As shown, any random subset of these equations can recreate the original fragments, so redundancy comes at a much lower overhead.
In our system we don't do this with 3 parts but with thousands.
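To make this concrete, the following sketch (plain linear algebra with numpy, purely illustrative and not the actual storage codec) recovers the original fragments from the three retrieved equations:
```python
# Recovering the fragments a, b, c from any three stored equations.
import numpy as np

# The three randomly retrieved equations from the text, written as coefficient
# rows over the unknowns (a, b, c) and a right-hand side:
#   5c - b - a = 12
#    b - c + a = 0
#   2b + a - c = 2
A = np.array([
    [-1, -1,  5],   # -a -  b + 5c = 12
    [ 1,  1, -1],   #  a +  b -  c = 0
    [ 1,  2, -1],   #  a + 2b -  c = 2
], dtype=float)
rhs = np.array([12, 0, 2], dtype=float)

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)   # -> 1.0 2.0 3.0, the original fragments
```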
### Example of 16/4
Each object is fragmented into 16 parts. So we have 16 original fragments for which we need 16 equations to mathematically describe them. Now let's make 20 equations and store them dispersedly on 20 devices. To recreate the original object we only need 16 equations. The first 16 that we find and collect allow us to recover the fragments and, in the end, the original object. We could lose any 4 of those original 20 equations.
The likelihood of losing 4 independent, dispersed storage devices at the same time is very low. Since we have continuous monitoring of all of the stored equations, we can create additional equations immediately when one of them goes missing, giving auto-regeneration of lost data and a self-repairing storage system.
> The overhead in this example is 4 out of 20 which is a mere **20%** instead of **400%** .
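The same illustration extends to the 16/4 policy described above; again, this is a sketch of the idea, not the production encoder:
```python
# Illustrative 16/4 run: 16 fragments, 20 stored equations, recovery from any 16.
import numpy as np

rng = np.random.default_rng(42)
fragments = rng.standard_normal(16)          # the 16 original fragments

# 20 equations over the 16 fragments; each row (coefficients plus result)
# is stored on a different device.
coeffs = rng.standard_normal((20, 16))
stored = coeffs @ fragments

# Lose any 4 devices, e.g. rows 3, 7, 11 and 19 ...
keep = [i for i in range(20) if i not in (3, 7, 11, 19)]

# ... and solve the remaining 16x16 system to recover every fragment.
recovered = np.linalg.solve(coeffs[keep], stored[keep])
assert np.allclose(recovered, fragments)
print("all 16 fragments recovered; overhead is 4/20 = 20%")
```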
## Content Delivery
This system can be used as backend for content delivery networks.
E.g. a content distribution policy could be a 10/50 distribution, which means the content of a movie would be distributed over 60 locations, of which we can lose 50 at the same time.
If someone now wants to download the data, the first 10 locations to answer will provide enough of the data parts to rebuild the data.
The overhead here is larger compared to the previous example, but still orders of magnitude lower than in other CDN systems.
## The Quantum Safe Storage System Can Avoid Datarot
Datarot is the phenomenon where data storage degrades over time and becomes unreadable, e.g. on a hard disk.
The storage system provided by ThreeFold intercepts this silent data corruption, ensuring that data does not rot.
> See also https://en.wikipedia.org/wiki/Data_degradation

View File

@ -0,0 +1,67 @@
---
sidebar_position: 5
---
# Quantum Safe Filesystem
Our quantum safe filesystem technology has unique features.
![](../../img/qss_fs_arch.png)
A redundant filesystem that can store PBs (millions of gigabytes) of information.
Unique features:
- Unlimited scalability (many petabytes)
- Quantum Safe:
- No farmer knows what the data is
- Even a quantum computer cannot decrypt the data
- Data can't be lost
- Protection for datarot, data will autorepair
- Data is kept forever (data does not get deleted)
- Data is dispersed over multiple sites
- Even if some of the sites go down the data will not be lost
- Up to 10x more efficient than storing on classic storage cloud systems
- Can be mounted as filesystem on any OS or any deployment system (OSX, Linux, Windows, Docker, Kubernetes etc.)
- Compatible with almost all data workloads (though not high-performance data-driven workloads like a database)
- Self-healing: when a node or disk is lost, the storage system can get back to the original redundancy level
- Helps with compliance for regulations like GDPR (as the hosting facility has no view on what is stored: information is encrypted and incomplete)
- Hybrid: can be installed onsite, public and private
- Read-write caching on encoding node (the front end)
## Mount Any Files In Your Storage Infrastructure
The QSFS is a mechanism to mount any file system (in any format) on the grid, in a quantum secure way.
This storage layer relies on 3 primitives:
- [0-db](https://github.com/threefoldtech/0-db) is the storage engine.
It is an always append database, which stores objects in an immutable format. It allows history to be kept out-of-the-box, good performance on disk, low overhead, easy data structure and easy backup (linear copy and immutable files).
- [0-stor-v2](https://github.com/threefoldtech/0-stor_v2) is used to disperse the data into chunks by performing 'forward-looking error-correcting code' (FLECC) on it and sending the fragments to safe locations.
It takes files in any format as input, encrypts the file with AES based on a user-defined key, then FLECC-encodes the file and spreads out the result
to multiple 0-DBs. The number of generated chunks is configurable to make it more or less robust against data loss through unavailable fragments. Even if some 0-DBs are unreachable, you can still retrieve the original data, and missing 0-DBs can even be rebuilt to have full consistency. It is an essential element of the operational backup.
- [0-db-fs](https://github.com/threefoldtech/0-db-fs) is the filesystem driver which uses 0-DB as a primary storage engine. It manages the storage of directories and metadata in a dedicated namespace and file payloads in another dedicated namespace.
Together they form a storage layer that is quantum secure: even the most powerful computer can't hack the system because no single node contains all of the information needed to reconstruct the data.
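The 0-stor-v2 flow described above (encrypt with a user-defined key, encode, disperse to several 0-DBs) can be sketched as follows. The naive splitting stands in for the real FLECC encoding, and the `FakeZDB` class is only a placeholder backend, not the real 0-db interface.
```python
# Conceptual pipeline sketch: encrypt, split, disperse. Illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class FakeZDB:
    """Stand-in for a 0-db namespace: an always-append log."""
    def __init__(self) -> None:
        self.log: list[bytes] = []
    def append(self, blob: bytes) -> None:
        self.log.append(blob)

def store(data: bytes, backends: list, key: bytes) -> bytes:
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, data, None)   # user-defined key
    # Placeholder for FLECC: one piece per backend; the real codec produces
    # extra, redundant pieces so a subset of backends can rebuild the file.
    n = len(backends)
    size = -(-len(ciphertext) // n)                       # ceiling division
    for i, backend in enumerate(backends):
        backend.append(ciphertext[i * size:(i + 1) * size])
    return nonce                                          # kept as metadata for decryption

key = AESGCM.generate_key(bit_length=256)
zdbs = [FakeZDB() for _ in range(4)]
nonce = store(b"hello quantum safe storage", zdbs, key)
print(f"dispersed {sum(len(z.log) for z in zdbs)} pieces over {len(zdbs)} 0-DBs")
```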
This concept scales forever, and you can bring any file system on top of it:
- S3 storage
- any backup system
- an ftp-server
- IPFS and Hypercore distributed file sharing protocols
## Architecture
By using our filesystem inside a Virtual Machine or Kubernetes, the cloud user can deploy any storage application on top e.g. Minio for S3 storage, OwnCloud as online fileserver.
![](../../img/qsstorage_architecture.jpg)
Any storage workload can be deployed on top of the zstor.

View File

@ -0,0 +1,15 @@
---
sidebar_position: 3
title: Zero Knowledge Proof
---
# Zero Knowledge Proof Storage System
The Quantum Safe Storage System is zero knowledge proof compliant. The storage system is made up of 2 components: the actual storage devices used to store the data (ZDBs) and the Quantum Safe Storage Engine.
![](../../img/qss_system.jpg)
The zero knowledge proof compliance comes from the fact that all of the physical storage nodes (3Nodes) can prove that they store a valid part of the data that the quantum safe storage engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all of the QSSE storage devices have a valid part of the original information. The storage devices, however, have no idea what the original stored data is, as they only have a part (description) of the original data and have no access to the original data part or the complete original data objects.
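As a rough illustration of the idea, here is a generic challenge-response sketch; it is not ThreeFold's actual QSSE verification protocol, only a way to see how a node can prove possession of a fragment without anyone else learning the original data.
```python
# Generic proof-of-storage style sketch, illustrative only.
import hashlib
import os

fragment = b"...one dispersed equation/fragment held by a ZDB..."

# Engine side, at dispersal time: precompute a challenge (nonce) and the
# digest the node must later return for it.
nonce = os.urandom(16)
expected = hashlib.sha256(nonce + fragment).hexdigest()

# Later, node side: on receiving the nonce, the node can only produce the
# matching digest if it still holds its (encrypted, partial) fragment; the
# fragment alone reveals nothing about the complete original object.
answer = hashlib.sha256(nonce + fragment).hexdigest()

print("fragment still held:", answer == expected)
```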

View File

@ -0,0 +1,34 @@
---
sidebar_position: 1
---
# Quantum Safe Storage System
Our storage architecture follows the true peer-to-peer design of the decentralized cloud system.
![](../../img/qsss_intro2.png)
Any participating node only stores small incomplete parts of objects (files, photos, movies, databases etc.) by offering a slice of its present (local) storage devices. Managing the storage and retrieval of all of these distributed fragments is done by software that creates development or end-user interfaces for this storage algorithm. We call this '**dispersed storage**'.
## Benefits
- Not even a quantum computer can hack it
- Zettabytes can be stored as easily as petabytes
- The system is truly autonomous & self-healing
- Datarot is detected and fixed
- There is 100% control over where data is (GDPR)
## Architecture
![](../../img/storage_arch.png)
The cloud user can mix and match storage technologies as are required for their application.
## Peer2Peer Advantages
Peer-to-peer provides the unique proposition of selecting storage providers that match your application and business or service criteria. For example, you might be looking to store data for your application in a certain geographic area (for governance and compliance reasons). You might also want to use different "storage policies" for different types of data, for example live versus archived data. All of these use cases are possible with this storage architecture, and could be built by using the same building blocks produced by farmers and consumed by developers or end-users.
> There is 100% control over where the data is positioned and the security is incredible.

The remaining files in this diff are binary image additions under docs/img/ (the diagrams referenced above); their contents are not shown.