...
@@ -1,13 +0,0 @@
# Cloud Units

Cloud units are the basis for buying & selling capacity on the ThreeFold Grid (see the references below for more info).

- 1 CU = 1 compute unit
- 1 SU = 1 storage unit
- 1 NU = 1 network unit

References:

- Cloud units are the building blocks for any cloud / IT workload. See the definition [here](https://library.threefold.me/info/threefold#/tfgrid/farming/threefold__resource_units_calc_cloudunits)
- Cloud units are also used to determine commercial pricing for utilisation. See the definition [here](https://library.threefold.me/info/threefold#/cloud/threefold__pricing?id=discount-levels)
- Low-level primitive [cloud functions](https://library.threefold.me/info/threefold/#/technology/threefold__tfgrid_primitives)
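As a toy illustration of how the three unit counts combine into a bill, the sketch below uses made-up unit prices and a made-up discount level; the real prices and discount levels are defined on the pricing page referenced above.

```python
# Hypothetical prices per unit-month; real pricing lives on the pricing page.
PRICES = {"cu": 10.0, "su": 5.0, "nu": 0.05}

def monthly_cost(cu: float, su: float, nu: float, discount: float = 0.0) -> float:
    """Cost for a workload consuming cu/su/nu units, with an optional discount level."""
    base = cu * PRICES["cu"] + su * PRICES["su"] + nu * PRICES["nu"]
    return round(base * (1.0 - discount), 2)

print(monthly_cost(2, 1, 100))        # two CU, one SU, 100 NU at list price
print(monthly_cost(2, 1, 100, 0.3))   # same workload with a 30% discount level
```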
@@ -1 +0,0 @@
# Concepts
@@ -1,5 +0,0 @@
# Cultivation

![](img/cultivation.jpg)

> See: [https://library.threefold.me/info/threefold/#/cloud/threefold__cloud_home](https://library.threefold.me/info/threefold/#/cloud/threefold__cloud_home)
@@ -1,4 +0,0 @@
# Farming

![](img/farming_intro.jpg)

> See: [https://library.threefold.me/info/threefold/#/tfgrid/farming/threefold__farming_intro](https://library.threefold.me/info/threefold/#/tfgrid/farming/threefold__farming_intro)
@@ -1,18 +0,0 @@
![](img/dao_consensus.jpg)

# DAO Consensus Engine

## DAO Engine

On TFGrid 3.0, ThreeFold has implemented a DAO consensus engine using Polkadot/TFChain blockchain technology.

This is a powerful blockchain construct which allows us to run our TFGrid and maintain consensus on a global scale.

This system has been designed to be compatible with multiple blockchains.
@@ -1,17 +0,0 @@
![](img/consensus3_tft_rewards.jpg)

### Consensus Engine in Relation to TFT Farming Rewards in TFGrid 3.0

The consensus engine checks the farming rules as defined in:

- [farming logic 3.0](farming_reward)
- [farming reward calculator](farming_calculator)

- If uptime ≥ 98% per month, the TFT will be rewarded to the farmer (for TFGrid 3.0; this can change later).

All the data of the farmers and the 3Nodes is registered on TFChain.

- See [Roadmap TFChain/DAO 3.x](roadmap_tfchain3) for implementation info.
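The uptime rule above can be sketched as a simple threshold check. This is an illustration only; how uptime is measured and rewards are minted is defined in the farming logic linked earlier.

```python
def eligible_for_reward(uptime_fraction: float, threshold: float = 0.98) -> bool:
    """TFGrid 3.0 rule sketch: TFT is rewarded for the month only if uptime >= 98%."""
    return uptime_fraction >= threshold

# How much downtime does a 98% threshold still allow in a 30-day month?
month_seconds = 30 * 24 * 3600
allowed_downtime = int(month_seconds * 0.02)   # seconds of tolerated downtime

print(eligible_for_reward(0.985))   # True
print(eligible_for_reward(0.97))    # False
print(allowed_downtime)             # 51840, roughly 14.4 hours
```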
@@ -1,43 +0,0 @@
## Consensus 3.X Oracles Used

Oracles are external resources of information.

TFChain captures and holds that information so we get more certainty about its accuracy.

We have oracles for price & reputation, e.g. for TF Farmers and 3Nodes.

These oracles are implemented on TFChain for TFGrid 3.0.

```mermaid
graph TB
    subgraph Digital Currency Ecosystem
        money_blockchain[Money Blockchain Explorers]
        OracleEngine --> Exch1[Money Blockchain Decentralized Exchange]
        OracleEngine --> Exch2[Binance Exchange]
        OracleEngine --> Exch3[other... exchanges]
    end
    subgraph ThreeFold Grid
        Monitor_Engine --> 3Node1
        Monitor_Engine --> 3Node2
        Monitor_Engine --> 3Node3
    end
    subgraph TFChainNode1[TFGrid Blockchain Node]
        Monitor_Engine
        Explorers[TFChain Explorers] --> TFGridDB --> BCNode
        Explorers --> BCNode
        ConsensusEngine1 --> BCNode[Blockchain Validator Node]
        ConsensusEngine1 --> money_blockchain[Money Blockchain]
        ConsensusEngine1 --> ReputationEngine[Reputation Engine]
        ReputationEngine --> Monitor_Engine[Monitor Engine]
        ConsensusEngine1 --> OracleEngine[Oracle For Pricing Digital Currencies]
    end
```

- See [Roadmap TFChain/DAO 3.x](roadmap_tfchain3) for implementation info.
@@ -1,51 +0,0 @@
```mermaid
graph TB
    subgraph Money Blockchain
        money_blockchain --> account1
        money_blockchain --> account2
        money_blockchain --> account3
        click money_blockchain "/threefold/#money_blockchain"
    end
    subgraph TFChainNode1[TFChain BCNode]
        Explorer1 --> BCNode1
        ConsensusEngine1 --> BCNode1
        ConsensusEngine1 --> money_blockchain
        ConsensusEngine1 --> ReputationEngine1
        ReputationEngine1 --> Monitor_Engine1
        click ReputationEngine1 "/info/threefold/#reputationengine"
        click ConsensusEngine1 "/info/threefold/#consensusengine"
        click BCNode1 "/info/threefold/#bcnode"
        click Explorer1 "/info/threefold/#tfexplorer"
    end
    subgraph TFChainNode2[TFChain BCNode]
        Explorer2 --> BCNode2
        ConsensusEngine2 --> BCNode2
        ConsensusEngine2 --> money_blockchain
        ConsensusEngine2 --> ReputationEngine2
        ReputationEngine2 --> Monitor_Engine2
        click ReputationEngine2 "/info/threefold/#reputationengine"
        click ConsensusEngine2 "/info/threefold/#consensusengine"
        click BCNode2 "/info/threefold/#bcnode"
        click Explorer2 "/info/threefold/#tfexplorer"
    end
    Monitor_Engine1 --> 3Node1
    Monitor_Engine1 --> 3Node2
    Monitor_Engine1 --> 3Node3
    Monitor_Engine2 --> 3Node1
    Monitor_Engine2 --> 3Node2
    Monitor_Engine2 --> 3Node3
    click 3Node1 "/info/threefold/#3node"
    click 3Node2 "/info/threefold/#3node"
    click 3Node3 "/info/threefold/#3node"
    click Monitor_Engine1 "/info/threefold/#monitorengine"
    click Monitor_Engine2 "/info/threefold/#monitorengine"
```

*Click on the parts of the image; they will go to more info.*

- See [Roadmap TFChain/DAO 3.x](roadmap_tfchain3) for implementation info.
@@ -1,45 +0,0 @@
# Consensus Mechanism

## Blockchain Node Components

- A blockchain node (= TFChain node) called TFChain, containing all entities interacting with each other on the TFGrid
- An explorer = a REST + GraphQL interface to TFChain (GraphQL is a nice query language that makes it easy for everyone to query for info)
- Consensus engine
  - A multisignature engine running on TFChain
  - The multisignature is done for the Money Blockchain accounts
  - It checks the AccountMetadata versus reality and, if ok, signs, which allows transactions to happen after validation of the "smart contract"
- SLA & reputation engine
  - Each node's uptime is checked by the Monitor_Engine
  - Bandwidth will also be checked in the future (starting 3.x)

### Remarks

- Each Monitor_Engine checks the uptime of X nr of nodes (in the beginning it can do all nodes), and stores the info in a local DB (to keep a history of checks)
- [Roadmap for TFChain deployment mechanism](roadmap_tfchain3)

## Principle

- We keep things as simple as we can.
- The Money Blockchain is used to hold the money.
- The Money Blockchain has all required features to allow users to manage their money: wallet support, decentralized exchange, good reporting, low transaction fees, ...
- TFChain holds the metadata for the accounts, which expresses what we need to know per account to allow the smart contracts to execute.
- Smart contracts are implemented using the multisignature feature on the Money Blockchain in combination with multisignature done by the Consensus_Engine.
- On money_blockchain:
  - Each user has Money Blockchain accounts (each of them holds money).
  - There are normal accounts (meaning people can freely transfer money from these accounts) as well as RestrictedAccounts. Money cannot be transferred out of RestrictedAccounts unless consensus has been achieved by the ConsensusEngine.
- Restricted_Account
  - On Stellar we use the multisignature feature to make sure that locked/vesting or FarmingPool accounts cannot transfer money unless consensus is achieved by the ConsensusEngine.

- Each account on money_blockchain (Money Blockchain account) that needs advanced features has an account record in TFChain, for features like:
  - lockup
  - vesting
  - minting (rewards to farmers)
  - TFTA to TFT conversion

- The account record in TFGrid_DB is called AccountMetadata.
  - The AccountMetadata describes all info required for the consensus engine to decide what to do for advanced features like vesting, locking, ...

- See [Roadmap TFChain/DAO 3.x](roadmap_tfchain3) for implementation info.
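The RestrictedAccount rule above amounts to an M-of-N signature check: money only moves once enough consensus-engine validators have signed. A minimal sketch (validator names and the threshold are illustrative; the real mechanism uses Stellar multisignature plus the consensus engine):

```python
def consensus_reached(signatures: set, validators: set, threshold: int) -> bool:
    """Allow a transfer out of a RestrictedAccount only when at least
    `threshold` known consensus-engine validators have signed."""
    valid = signatures & validators   # drop signatures from unknown parties
    return len(valid) >= threshold

validators = {"engine1", "engine2", "engine3"}

print(consensus_reached({"engine1", "engine3"}, validators, 2))   # True
print(consensus_reached({"engine1", "mallory"}, validators, 2))   # False
```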
@@ -1,14 +0,0 @@
## Consensus Engine Information

- [Consensus Engine Homepage](consensus3)
- [Principles TFChain 3.0 Consensus](consensus3_principles)
- [Consensus Engine Farming 3.0](consensus3_engine_farming)
- [TFGrid 3.0 wallets](tfgrid3_wallets)
- Architecture:
  - [Money Blockchains/TFChain architecture](money_blockchain_partity_link)
  - [ThreeFold Chain Oracles](consensus3_oracles)
<!-- - [Consensus Engine Weight System](consensus3_weights) -->

> Implemented in TFGrid 3.0
@@ -1,52 +0,0 @@
## Link Between Different Money Blockchains & TFChain

TFChain is the ThreeFold blockchain infrastructure, built on the Parity Substrate framework.

We are building a consensus layer which allows us to easily bridge between different money blockchains.

The main blockchain for TFT remains the Stellar network for now. A secure bridging mechanism exists which is able to transfer TFT between the different blockchains.

Active bridges as from the TFGrid 3.0 release:

- Stellar <> Binance Smart Chain
- Stellar <> Parity TFChain

More bridges are under development.

```mermaid
graph TB
    subgraph Money Blockchain
        money_blockchain --- account1a
        money_blockchain --- account2a
        money_blockchain --- account3a
        account1a --> money_user_1
        account2a --> money_user_2
        account3a --> money_user_3
        click money_blockchain "/info/threefold/#money_blockchain"
    end
    subgraph ThreeFold Blockchain On Parity
        TFBlockchain --- account1b[account 1]
        TFBlockchain --- account2b[account 2]
        TFBlockchain --- account3b[account 3]
        account1b --- smart_contract_data_1
        account2b --- smart_contract_data_2
        account3b --- smart_contract_data_3
        click TFBlockchain "/info/threefold/#tfchain"
    end
    account1b ---- account1a[account 1]
    account2b ---- account2a[account 2]
    account3b ---- account3a[account 3]

    consensus_engine --> smart_contract_data_1[fa:fa-ban smart contract metadata]
    consensus_engine --> smart_contract_data_2[fa:fa-ban smart contract metadata]
    consensus_engine --> smart_contract_data_3[fa:fa-ban smart contract metadata]
    consensus_engine --> account1a
    consensus_engine --> account2a
    consensus_engine --> account3a
    click consensus_engine "/info/threefold/#consensus_engine"
```

The diagram above shows how our consensus engine can deal with TFChain and multiple money blockchains at the same time.
@@ -1,53 +0,0 @@
# Roadmap for Our TFChain and ThreeFold DAO

![](img/dao_roadmap.jpg)

## TFChain / DAO 3.0.2

For this phase our TFChain and TFDAO have been implemented using Parity Substrate.

Features:

- proof-of-capacity (PoC)
- proof-of-utilization (PoU)
- identity management
- consensus for upgrades of DAO and TFChain (code)
- capacity tracking (how much capacity is used)
  - uptime achieved
  - capacity utilization
- smart contract for IT
- validators for L1 (TFChain level)
- storage of value = TFT
- request/approval for adding a validator

Basically, all basic DAO concepts are in place.

## TFChain / DAO 3.0.x

Version nr TBD, planned Q1 2022.

NEW:

- proposals for TFChain/DAO/TFGrid changes (request for change) = we call them TFCRP (ThreeFold Change Request Proposal)
- voting on proposals = we call them TFCRV (ThreeFold Change Request Vote)

## TFChain / DAO 3.1.x

Version nr TBD, planned Q1 2022.

This version adds more layers to our existing DAO and prepares for an even more scalable future.

NEW:

- Cosmos-based chain on L2
- Validator Nodes for TFGrid and TFChain
- Cosmos-based HUB = security for all TFChains

> For more info about our DAO strategy, see TFDAO.
@@ -1,72 +0,0 @@
# TFGrid 3.0 Wallets

ThreeFold has a mobile wallet which can be used with the TFChain backend as well as with any other money blockchain it supports.

This provides for a very secure digital currency infrastructure with lots of advantages:

- [X] ultra flexible smart contracts possible
- [X] super safe
- [X] compatible with multiple blockchains (money blockchains)
- [X] ultra scalable

```mermaid
graph TB

    subgraph Money Blockchain
        money_blockchain[Money Blockchain Explorers]
        money_blockchain --- money_blockchain_node_1 & money_blockchain_node_2
        money_blockchain_node_1
        money_blockchain_node_2
    end

    subgraph ThreeFold Wallets
        mobile_wallet[Mobile Wallet]
        desktop_wallet[Desktop Wallet]
        mobile_wallet & desktop_wallet --> money_blockchain
        mobile_wallet & desktop_wallet --> Explorers
        money_blockchain_wallet[Any Money Blockchain Wallet] --> money_blockchain
    end

    subgraph TFChain[TFGrid Blockchain on TFChain]
        Explorers[TFChain Explorers] --> TFGridDB --> BCNode
        Explorers --> BCNode
    end
```

Generic overview:

```mermaid
graph TB

    subgraph TFChain[TFGrid Chain]
        guardian1[TFChain Node 1]
        guardian2[TFChain Node 2]
        guardian3[TFChain Node 3...9]
    end

    User_wallet[User Wallet] --> money_blockchain_account
    User_wallet[User Wallet] --> money_blockchain_restricted_account

    subgraph Money Blockchain Ecosystem
        money_blockchain_account
        money_blockchain_restricted_account --- guardian1 & guardian2 & guardian3
    end

    subgraph consensus[Consensus Layer on TFChain]
        guardian1 --> ReputationEngine & PricingOracle
        guardian1 --> contract1[Smart Contract Vesting]
        guardian1 --> contract2[Smart Contract Minting/Farming]
    end
```
@@ -1,52 +0,0 @@
```v
// - vesting
//   - startdate: epoch
//   - currency: USD
//   - [[$month_nr,$minprice_unlock,$TFT_to_vest],...]
//     - if 48 months then the list will have 48 parts
//     - month 0 = first month
//     - e.g. [[0,0.11,10000],[1,0.12,10000],[2,0.13,10000],[3,0.14,10000]...]

// information stored at account level in TFGridDB
struct AccountMeta {
	// corresponds to unique address on money_blockchain
	money_blockchain_address string
	vesting                  []Vesting
	unlocked_TFT             int
}

struct Vesting {
	startdate int
	// which currency is used to execute on the acceleration in the vesting
	// if price is above a certain level (currency + amount of that currency) then auto unlock
	currency CurrencyEnum
	months   []VestingMonth
}

struct VestingMonth {
	month_nr int
	// if 0 then will not unlock based on price
	unlock_price f32
	tft_amount   int
}

enum CurrencyEnum {
	usd
	eur
	egp
	gbp
	aed
}

// this is stored in the TFGridDB
fn (mut v AccountMeta) serialize() string {
	// todo: code which does the serialization, see above
	return ''
}

// write minting pool

// REMARKS
// if unlock is triggered because of month or price, then that record in VestingMonth[]
// goes away and the TFT go to unlocked_TFT
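The unlock rule in the closing REMARK can be modeled in Python. This is a rough sketch of the structs above; whether a tranche unlocks when its month starts or ends is an assumption here (we treat a tranche as time-unlocked once its month has passed).

```python
from dataclasses import dataclass, field

@dataclass
class VestingMonth:
    month_nr: int
    unlock_price: float   # 0 means: never unlock based on price
    tft_amount: int

@dataclass
class AccountMeta:
    months: list = field(default_factory=list)
    unlocked_tft: int = 0

def process_unlocks(acc: AccountMeta, current_month: int, price: float) -> None:
    """Move every tranche whose month has passed, or whose unlock price is
    reached, out of the vesting list and into unlocked_tft."""
    still_locked = []
    for m in acc.months:
        by_month = m.month_nr < current_month                      # month elapsed (assumption)
        by_price = m.unlock_price > 0 and price >= m.unlock_price  # price-based acceleration
        if by_month or by_price:
            acc.unlocked_tft += m.tft_amount
        else:
            still_locked.append(m)
    acc.months = still_locked

acc = AccountMeta(months=[VestingMonth(0, 0.11, 10000), VestingMonth(1, 0.12, 10000)])
process_unlocks(acc, current_month=1, price=0.10)
print(acc.unlocked_tft, len(acc.months))   # 10000 1
```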
@@ -1,6 +0,0 @@
3node_simple.png
architecture_usage.png
manual.png
tech_overview.png
tech_overview2.png
web_remade.png
@@ -1,17 +0,0 @@
# DAO 3 Layer Approach

![](img/dao_3layers.jpg)

The ThreeFold Grid has 3 layers:

- Layer 2 = ecosystem security layer
  - only 1 blockchain, with 100 validators, secures the multiple Layer 1 networks
- Layer 1 = internet of blockchains layer
  - ultimate scale because of thousands (or more) of blockchains
- Layer 0 = the IT capacity layer
  - cloud computing layer, provides compute, storage and network resources to L1 and L2

Validators play an important role in securing the ThreeFold ecosystem. A validator is a blockchain component run by independent parties who validate transactions happening on the blockchain until consensus has been achieved.
@@ -1,52 +0,0 @@
## ThreeFold Capacity Layer

![](img/layer0_.jpg)

### Zero-OS

ThreeFold has built its own operating system called Zero-OS, starting from a Linux kernel, with the purpose of removing all the unnecessary complexity found in contemporary OSes.

Zero-OS supports a small number of primitives and performs low-level functions natively.

It delivers 3 primitive functions:

- storage capacity
- compute capacity
- network capacity

There is no shell, local or remote, attached to Zero-OS. It does not allow inbound network connections to the core. Also, given its shell-less nature, the people and organizations, called farmers, that run 3Nodes cannot issue any commands nor access its features. In that sense, Zero-OS enables a "zero people" (autonomous) Internet, meaning hackers cannot get in, while also eliminating human error from the paradigm.

### 3Node

The ThreeFold_Grid needs hardware/servers to function. Servers of all shapes and sizes can be added to the grid by anyone, anywhere in the world. The production of Internet capacity on the ThreeFold Grid is called farming, and the people who add these servers to the grid are called farmers. This is a fully decentralized process and they get rewarded by means of TFT.

Farmers download the Zero-OS operating system and boot their servers themselves. Once booted, these servers become 3Nodes. The 3Nodes register themselves in a database called the TF Explorer. Once registered in the TF Explorer, the capacity of the 3Nodes becomes available on the TF Grid Explorer. Also, given the autonomous nature of the ThreeFold Grid, there is no need for any intermediaries between the user and the 3Nodes.

This enables a complete peer2peer environment for people to reserve their Internet capacity directly from the hardware.

### Smart Contract for IT

The purpose of the smart contract for IT is to create and enable autonomous IT. Autonomous, self-driving IT is possible.

Once a smart contract for IT is created, it is registered in the TFChain blockchain.

Learn more about the smart contract for IT [here](../smartcontract_it/smartcontract_it_full.md).

### TFChain

A blockchain running on the TFGrid stores the following information (TFGrid 3.0):

- registry for all digital twins (identity system, aka phonebook)
- registry for all farmers & 3Nodes
- registry for our reputation system
- info as required for the smart contract for IT

This is the heart of the operational system of the TFGrid.

### Peer-to-Peer Network

The peer2peer network allows any zmachine or user to connect with other zmachines or users on the TF Grid securely, and creates a private shortest-path peer2peer network.

### Web Gateway

The Web Gateway is a mechanism to connect the private (overlay) networks to the open Internet. By not providing an open and direct path into the private network, a lot of malicious phishing and hacking attempts are stopped at the Web Gateway level for container applications.
@@ -1 +0,0 @@
3layers_tf.png
@@ -1 +0,0 @@
# Layers
@@ -1,17 +0,0 @@
## Beyond Containers

![](img/zmachine_container.jpg)

Default features:

- compatible with Docker
- compatible with any Linux workload

We have the following unique advantages:

- no need to work with images; we work with our unique zos_fs
- every container runs in a dedicated virtual machine, providing more security
- the containers talk to each other over a private network: zos_net
- the containers can use web_gw to allow users on the internet to connect to the applications running in their secure containers
- core-x can be used to manage the workload
@@ -1,8 +0,0 @@
## TFGrid Compute Layer

![](img/compute_layer.jpg)

We are more than just container or VM technology; see [our Beyond Containers document](../../primitives/compute/beyond_containers.md).

For more information, see [ZeroOS](../../zos/zos_toc.md).
@@ -1,13 +0,0 @@
# CoreX

![](img/corex.jpg)

This tool allows you to manage your ZMachine remotely over the web.

ZMachine process manager:

- Provides a web interface and a REST API to control your processes
- Allows you to watch the logs of your processes
- Can be used as a web terminal (access over HTTPS to your terminal)!
@@ -1,30 +0,0 @@
# ZKube

TFGrid is compatible with Kubernetes technology.

![](img/zkube_architecture.jpg)

Each eVDC as shown above is a full-blown Kubernetes deployment.

### Unique for Our Kubernetes Implementation

- The Kubernetes networks run on top of our [ZNet](znet) technology, which means all traffic between containers and Kubernetes hosts is end2end encrypted, independent of where your Kubernetes nodes are deployed.
- You can mount a QSFS underneath a Kubernetes node (VM), which means that you can deploy containers on top of QSFS to host unlimited amounts of storage in a super safe way.
- Your Kubernetes environment is 100% decentralized: you define where you want to deploy your Kubernetes nodes, and only you have access to the deployed workloads on the TFGrid.

### Features

* integration with znet (efficient, secure encrypted network between the zmachines)
* can be easily deployed at the edge
* single-tenant!

<!--
### ZMachine Benefits

* [ZOS Protect](zos_protect): no hacking surface to the Zero-Nodes, integrate silicon root of trust
* [ZNet](znet) and [Planetary Net](planetary_network): a true global single backplane network connecting us all -->

### Architecture

![](img/zkube_architecture_2.jpg)
@@ -1,22 +0,0 @@
# ZMachine

### Features

* import from Docker (market standard for containers)
* can be easily deployed at the edge (edge cloud)
* single-tenant, fully decentralized!
* can deploy unlimited amounts of storage using our QSFS
* minimal hacking surface to the Zero-Nodes, integrates a silicon root of trust
* ZOS Filesystem: dedupe, zero-install, hacker-proof
* Webgateway: intelligent connection between web (internet) and container services
* integration with ZNet (efficient, secure encrypted network between the zmachines)
* Planetary Net: a true global single backplane network connecting us all

### Architecture

![](img/zmachine_architecture.jpg)

A ZMachine runs as a virtual machine on top of Zero-OS.
@@ -1,13 +0,0 @@
# Network Primitives

- [Planetary network](planetary_network.md):
  - a planetary-scale network; we have clients for Windows, OSX, Android and iPhone
- [ZOS Net](znet.md):
  - a fast end2end encrypted network technology; keeps the traffic between your z_machines 100% private
- [ZOS NIC](znic.md):
  - connection to a public IP address
- [WEB GW](webgw3.md):
  - web gateway, a secure way to allow internet traffic to reach your secure Z-Machine
@@ -1,47 +0,0 @@
# Planetary Network

![](img/planet_net_.jpg)

The planetary network is an overlay network which lives on top of the existing internet or other peer2peer networks. In this network, everyone is connected to everyone. There is end-to-end encryption between the users of an app and the app running behind the network wall.

Each user's network endpoint is strongly authenticated and uniquely identified, independent of the network carrier used. There is no need for centralized firewall or VPN solutions, as circle-based networking security is in place.

Benefits:

- It finds the shortest possible paths between peers
- There's full security through end-to-end encrypted messaging
- It allows for peer2peer links like meshed wireless
- It can survive broken internet links and re-route when needed
- It resolves the shortage of IPv4 addresses

Whereas current computer networks depend heavily on very centralized design and configuration, this networking concept breaks this mould by making use of a global spanning tree to form a scalable IPv6 encrypted mesh network. This is a peer2peer implementation of a networking protocol.
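The shortest-path idea can be illustrated with a plain breadth-first search over a small peer mesh. This is only an illustration of finding fewest-hop routes between peers, not the actual spanning-tree routing protocol used by the planetary network.

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Breadth-first search: return a fewest-hop route between two peers,
    or None when no route exists."""
    prev = {src: None}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = []                     # walk predecessors back to the source
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in adj.get(node, []):
            if nb not in prev:
                prev[nb] = node
                q.append(nb)
    return None

mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C", "E"], "E": ["D"]}
print(shortest_path(mesh, "A", "E"))   # ['A', 'B', 'D', 'E']
```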
The following table illustrates high-level differences between traditional networks like the internet and the planetary ThreeFold network:

| Characteristic | Traditional | Planetary Network |
| --------------------------------------------------------------- | ----------- | ----------------- |
| End-to-end encryption for all traffic across the network | No | Yes |
| Decentralized routing information shared using a DHT | No | Yes |
| Cryptographically-bound IPv6 addresses | No | Yes |
| Node is aware of its relative location to other nodes | No | Yes |
| IPv6 address remains with the device even if moved | No | Yes |
| Topology extends gracefully across different mediums, i.e. mesh | No | Yes |

## What Are the Problems Solved Here?

The internet as we know it today doesn't conform to a well-defined topology. This has largely happened over time - as the internet has grown, more and more networks have been "bolted together". The lack of defined topology gives us some unavoidable problems:

- The routing tables that hold a "map" of the internet are huge and inefficient
- There isn't really any way for a computer to know where it is located on the internet relative to anything else
- It's difficult to examine where a packet will go on its journey from source to destination without actually sending it
- It's very difficult to install reliable networks into locations that change often or are non-static, i.e. wireless mesh networks

These problems have been partially mitigated (but not really solved) through centralization - rather than your computers at home holding a copy of the global routing table, your ISP does it for you. Your computers and network devices are configured just to "send it upstream" and to let your ISP decide where it goes from there, but this leaves you entirely at the mercy of your ISP, who can redirect your traffic anywhere they like and inspect, manipulate or intercept it.

In addition, wireless meshing requires you to know a lot about the network around you, which would not typically be the case when you have outsourced this knowledge to your ISP. Many existing wireless mesh routing schemes are not scalable or efficient, and do not bridge well with existing networks.

![](img/planetary_network.jpg)

The planetary network is a continuation and implementation of the [Yggdrasil](https://yggdrasil-network.github.io/about.html) network initiative. This technology is in beta but has already been proven to work quite well.
@@ -1,40 +0,0 @@
# WebGW

The Web Gateway is a mechanism to connect private networks to the open Internet in such a way that there is no direct connection between the internet and the secure workloads running in the ZMachines.

![](img/webgateway.jpg)

- Separation between where compute workloads are and where services are exposed
- Redundant
- Each app can be exposed on multiple web gateways at once
- Support for many interfaces...
- Helps resolve the shortage of IPv4 addresses

### Implementation

Some 3Nodes support gateway functionality (configured by the farmers). A 3Node with a gateway configuration can accept gateway workloads and forward traffic to ZMachines that only have Planetary Network or IPv6 addresses.

A gateway workload consists of a name (prefix) that needs to be reserved on the blockchain first, plus the list of backend IPs. There are other flags that can be set to control automatic TLS (please check the terraform documentation for the exact details of a reservation).

Once the 3Node receives this workload, it configures a proxy for this name and the Planetary Network IPs.
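A gateway workload can thus be modeled as a mapping from the reserved name to its backend IPs. A minimal sketch, assuming round-robin selection over the backends (the class, names and addresses here are illustrative, not the actual proxy implementation):

```python
import itertools

class GatewayTable:
    """Per-3Node proxy table: reserved name (prefix) -> backend IPs."""

    def __init__(self):
        self._backends = {}
        self._cycles = {}

    def register(self, name, backends):
        # the name must already be reserved on the blockchain; we only model the table
        if name in self._backends:
            raise ValueError(f"name '{name}' already registered")
        self._backends[name] = list(backends)
        self._cycles[name] = itertools.cycle(self._backends[name])

    def pick_backend(self, name):
        # naive round-robin over the backends for this name
        return next(self._cycles[name])

table = GatewayTable()
table.register("myapp", ["2001:db8::1", "2001:db8::2"])
print(table.pick_backend("myapp"))   # 2001:db8::1
print(table.pick_backend("myapp"))   # 2001:db8::2
```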
### Security

ZMachines have to have a Planetary Network IP or any other IPv6 address (IPv4 is also accepted), which means that any person who is connected to the Planetary Network can also reach the ZMachine without the need for a proxy.

So it's up to the ZMachine owner/maintainer to make sure it is secured and only has the required ports open.

### Redundant Network Connection

![](img/webgateway_redundant.jpg)

### Unlimited Scale

![](img/webgateway_scaling.jpg)

The network architecture is a pure scale-out network system; it can scale to unlimited size and there is simply no bottleneck. Network "supply" is created by network farmers, and network "demand" comes from TF Grid users. Supply and demand scale independently: on the supply side there can be unlimited network farmers providing web gateways on their own 3Nodes, and unlimited compute farmers providing 3Nodes for compute and storage. The demand side is driven by developers creating software that runs on the grid and system integrators creating solutions for enterprises. This demand side is growing exponentially for data processing and storage use cases.
@@ -1,33 +0,0 @@
# ZNET

ZNET is a decentralized networking platform allowing any compute and storage workload to be connected on a private (overlay) network and exposed to the existing Internet. The peer-to-peer network platform allows any workload to be connected over secure encrypted networks which look for the shortest path between the nodes.



### Secure mesh overlay network (peer-to-peer)

ZNET is the foundation of any architecture running on the TF Grid. It can be seen as a virtual private datacenter in which the network allows each of the *N* containers to connect to all of the *(N-1)* other containers. Every network connection is a secure connection between your containers, creating a peer-to-peer network between them.



No connection is made with the Internet. ZNET is a single-tenant network and by default not connected to the public Internet, so everything stays private. For connecting to the public Internet, a Web Gateway is included in the product to allow public access if and when required.

### Redundancy

As integrated with [WebGW](webgw):



- Any app can get (securely) connected to the Internet via any chosen IP address made available by ThreeFold network farmers through [WebGW](webgw)
- An app can be connected to multiple web gateways at once; the DNS round-robin principle provides load balancing and redundancy
- An easy clustering mechanism where web gateways and nodes can be lost while the public service stays up and running
- Easy maintenance: when containers are moved or re-created, the same end-user connection can be reused because that connection is terminated on the Web Gateway. The moved or newly created ZMachine re-creates the socket to the Web Gateway and receives inbound traffic.
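The DNS round-robin behaviour described in the bullets above can be sketched in a few lines. This is a toy illustration only; the gateway IP addresses are made up, not real ThreeFold addresses:

```python
from itertools import cycle

# Hypothetical IPs of three web gateways behind one DNS name (made-up values).
gateways = ["185.69.166.1", "185.69.166.2", "185.69.166.3"]

# DNS round-robin hands out the gateway addresses in rotation, so consecutive
# client connections land on different gateways and the load is spread evenly.
rotation = cycle(gateways)

def resolve():
    """Return the gateway the next connection should use."""
    return next(rotation)

picks = [resolve() for _ in range(6)]
# Over six connections, each of the three gateways is picked exactly twice.
```

If one gateway is lost, the remaining entries keep answering, which is the redundancy property the list above describes.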
### Interfaces in Zero-OS


@@ -1,11 +0,0 @@
# ZNIC

ZNIC is the network interface connected to a ZMachine.

It can be implemented as an interface to:

- the Planetary Network
- a public IP address on Zero-OS


@@ -1,49 +0,0 @@


# TFGrid Low Level Functions = Primitives

The following are the low-level constructs which can be deployed on the TFGrid. These functionalities allow you to create any solution on top of the grid: any application which can run on Linux can run on the TFGrid.

### Compute (uses CU)

- [ZKube](compute/zkube.md)
  - Kubernetes deployment
- [ZMachine](compute/zmachine.md)
  - the container or virtual machine running inside ZOS
- [CoreX](compute/corex.md)
  - process manager (optional), can be used to get remote access to your ZMachine

A 3Node is a Zero-OS enabled computer which is hosted by any of the ThreeFold Farmers.

### There are 4 storage mechanisms which can be used to store your data

- [ZOS FS](storage/zos_fs.md)
  - our deduped unique filesystem, replaces docker images
- [ZOS Mount](storage/zmount.md)
  - a mounted disk location on SSD, can be used as a faster storage location
- [Quantum Safe Filesystem](../qsss/qss_filesystem.md)
  - a unique storage system in which data can never be lost or corrupted. Please be reminded that this storage layer is only meant to be used for secondary storage applications
- [ZOS Disk](storage/zdisk.md)
  - a virtual disk technology, only for TFTech OEM partners

### There are 4 ways networks can be connected to a Z-Machine

- [Planetary network](network/planetary_network.md):
  - a planetary scalable network; we have clients for Windows, OSX, Android and iPhone
- [ZOS Net](network/znet.md):
  - a fast end-to-end encrypted network technology; keeps your traffic between your Z-Machines 100% private
- [ZOS NIC](network/znic.md):
  - connection to a public IP address
- [WEB GW](network/webgw.md):
  - web gateway, a secure way to allow Internet traffic to reach your secure Z-Machine.

@@ -1 +0,0 @@
zosfs.png
Before Width: | Height: | Size: 218 KiB |
Before Width: | Height: | Size: 118 KiB |
Before Width: | Height: | Size: 66 KiB |
Before Width: | Height: | Size: 99 KiB |
Before Width: | Height: | Size: 670 KiB |
@@ -1,29 +0,0 @@
# Quantum Safe Filesystem



The Quantum Safe Filesystem presents itself as a filesystem to the ZMachine.

### Benefits

- Safe
- Hacker-proof
- Ultra reliable
- Low overhead
- Ultra scalable
- Self-healing: recovers service automatically in the event of an outage, with no human intervention

### Can be used as

- backup and archive system
- blockchain storage backend (OEM only)

### Implementation

See how it is implemented in:

- [Quantum Safe Storage](../../qsss/qsss_home.md)
- [Quantum Safe Filesystem](../../qsss/qss_filesystem.md)
- [Quantum Safe Algo](../../qsss/qss_algorithm.md)
@@ -1,9 +0,0 @@
# Storage Primitives

- [ZOS Filesystem](zos_fs.md): deduped immutable filesystem
- [ZOS Mount](zmount.md): a part of an SSD (fast disk), mounted underneath your ZMachine
- [Quantum Safe Filesystem](qsfs.md): unbreakable storage system (secondary storage only)
- [Zero-DB](zdb.md): the lowest-level storage primitive, a key-value store, typically used underneath the other storage mechanisms
- [Zero-Disk](zdisk.md): OEM only, virtual disk format

Uses [Storage Units = SU](../../../grid/concepts/cloudunits.md).
@@ -1,8 +0,0 @@
# ZOS-DB (ZDB)



0-db is a fast and efficient redis-protocol-compatible key-value store which makes data persistent inside an always-append datafile, with namespace support.

> ZDB is used as the backend storage for the [Quantum Safe Filesystem](qsfs.md).
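The always-append design can be sketched in a few lines: every write goes to the end of a log, an index maps each key to its latest entry, and old values stay in the log so history is kept out of the box. This is an illustration of the principle only, not the real 0-db implementation:

```python
class AppendOnlyKV:
    """Toy always-append key-value store in the spirit of 0-db."""

    def __init__(self):
        self.log = []          # immutable, always-append datafile
        self.index = {}        # key -> position of the latest value

    def set(self, key, value):
        self.log.append((key, value))       # never overwrite, only append
        self.index[key] = len(self.log) - 1

    def get(self, key):
        return self.log[self.index[key]][1]

    def history(self, key):
        # All values ever written for a key, oldest first.
        return [v for k, v in self.log if k == key]

db = AppendOnlyKV()
db.set("name", "v1")
db.set("name", "v2")   # the old value stays in the log; only the index moves
```

Reads follow the index to the newest entry, while the full write history remains available for backup and recovery, which is exactly what makes a linear copy of the datafile a valid backup.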
@@ -1,10 +0,0 @@
# ZOS_Disk

The virtual disk technology creates the possibility to create and use virtual disks which can be attached to containers (and virtual machines).

The technology is designed to be redundant without the user having to do anything.

## Roadmap

- The virtual disk technology is available for OEMs only; contact ThreeFold Tech.
@@ -1,10 +0,0 @@
# ZOS_Mount

An SSD storage location which can be written to from inside a VMachine or VKube.

The SSD storage location is mounted on a chosen path inside your Z-Machine.



> ZMounts are not encrypted; if you need security, use a Quantum Safe Filesystem.
@@ -1,38 +0,0 @@

# ZOS FileSystem (ZOS-FS)



A deduped filesystem which is more efficient than the images used in other virtual machine technologies.

## Uses FLIST Inside

In Zero-OS, `flist` is the format used to store ZMachine images. This format provides a complete mountable remote filesystem while downloading only the file contents that you actually need.

In practice, an flist itself is a small database containing metadata about files and directories; the file payloads are stored on a TFGrid hub. You only need to download a payload when you need it, which dramatically reduces ZMachine boot time, bandwidth and disk overhead.

### Why this ZFlist Concept

Have you ever been in the following situation: you need two small files, but they are embedded in a large archive. How do you get to those files in an efficient way? What a disappointment when you see that this archive is 4 GB large and you only need a few files of 2 MB inside. You would need to download the full archive and store it somewhere just to extract the little you need. Time, effort and bandwidth wasted.

Or you want to start a Docker container and the base image you want to use is 2 GB. What do you need to do before being able to use your container? Wait for the 2 GB to download. This problem exists everywhere, but in Europe and the US bandwidth speeds are such that it no longer presents a real problem, hence none of the leading (current) tech companies are looking for solutions.

We believe there should be a smarter way of dealing with this than simply throwing more bandwidth at the problem: what if you could download only the files you actually want and not the full blob (archive, image, whatever...)?

ZFList splits metadata and data. Metadata is the referential information about everything in the archive, but without the payload. Payload is the content of the referred files. The ZFList is exactly that: metadata with references that point to where to get the payload itself. So if you don't need it, you won't get it.

As soon as you have the flist mounted, you can see the full directory tree and walk around it. The files are only downloaded and presented at the moment you try to access them. In other words, every time you want to read a file or modify it, Zero FS downloads it so that the data is available too. You download on-the-fly only what you need, which dramatically reduces the bandwidth requirement.

## Benefits

- Efficient usage of bandwidth makes this service perform with or without (much) bandwidth.

## Flist Tool



> To see our tool for flists, see: https://hub.grid.tf/
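The metadata/payload split described above can be sketched as follows. `LazyFlist`, its fields and the in-memory `store` are hypothetical names for illustration, not the real Zero-OS flist API:

```python
class LazyFlist:
    """Toy flist: the directory tree is local, payloads download on first read."""

    def __init__(self, metadata, fetch_payload):
        self.metadata = metadata       # path -> payload id (small, always local)
        self.fetch = fetch_payload     # callable that downloads one payload
        self.cache = {}                # payloads fetched so far
        self.downloads = 0             # how many payload transfers happened

    def list(self):
        # Browsing the full tree costs no bandwidth at all.
        return sorted(self.metadata)

    def read(self, path):
        if path not in self.cache:     # download on first access only
            self.cache[path] = self.fetch(self.metadata[path])
            self.downloads += 1
        return self.cache[path]

# A stand-in for the remote hub: payload id -> bytes.
store = {1: b"tiny", 2: b"huge" * 1000}

fl = LazyFlist({"a.txt": 1, "big.bin": 2}, store.get)
fl.list()           # full tree visible, nothing downloaded yet
fl.read("a.txt")    # only a.txt's payload is transferred
```

After these calls only one payload has crossed the wire; the 4 KB `big.bin` blob is never fetched unless it is actually read, which is the bandwidth saving the section describes.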
@@ -1,8 +0,0 @@

- [Technology](technology/technology.md)
- [Layers](technology/layers/technology_layers.md)
- [Storage](technology/qsss/qsss_home.md)
- [Quantum Safe Storage](technology/qsss/qsss_home.md)
- [Quantum Safe Filesystem](technology/qsss/qss_filesystem.md)
- [Quantum Safe Algo](technology/qsss/qss_algorithm.md)
- [S3 Storage System](technology/qsss/s3_interface.md)
@@ -1,2 +0,0 @@
qss_scaleout.png
qsss_intro.png
@@ -1,9 +0,0 @@
#!/bin/bash

# Render every Mermaid (.mmd) diagram in this directory to a PNG.
for name in ./*.mmd
do
    output=$(basename "$name" .mmd).png
    echo "$output"
    mmdc -i "$name" -o "$output" -w 4096 -H 2160 -b transparent
    echo "$name"
done
@@ -1,13 +0,0 @@
graph TD
    subgraph Data Origin
    file[Large chunk of data = part_1part_2part_3part_4]
    parta[part_1]
    partb[part_2]
    partc[part_3]
    partd[part_4]
    file -.- |split part_1|parta
    file -.- |split part_2|partb
    file -.- |split part_3|partc
    file -.- |split part_4|partd
    parta --> partb --> partc --> partd
    end
@@ -1,20 +0,0 @@
graph TD
    subgraph Data Substitution
    parta[part_1]
    partb[part_2]
    partc[part_3]
    partd[part_4]
    parta -.-> vara[ A = part_1]
    partb -.-> varb[ B = part_2]
    partc -.-> varc[ C = part_3]
    partd -.-> vard[ D = part_4]
    end
    subgraph Create equations with the data parts
    eq1[A + B + C + D = 6]
    eq2[A + B + C - D = 3]
    eq3[A + B - C - D = 10]
    eq4[ A - B - C - D = -4]
    eq5[ A - B + C + D = 0]
    eq6[ A - B - C + D = 5]
    vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6
    end
|
Before Width: | Height: | Size: 10 KiB |
Before Width: | Height: | Size: 192 KiB |
Before Width: | Height: | Size: 154 KiB |
@@ -1,44 +0,0 @@
graph TD
    subgraph Data Origin
    file[Large chunk of data = part_1part_2part_3part_4]
    parta[part_1]
    partb[part_2]
    partc[part_3]
    partd[part_4]
    file -.- |split part_1|parta
    file -.- |split part_2|partb
    file -.- |split part_3|partc
    file -.- |split part_4|partd
    parta --> partb --> partc --> partd
    parta -.-> vara[ A = part_1]
    partb -.-> varb[ B = part_2]
    partc -.-> varc[ C = part_3]
    partd -.-> vard[ D = part_4]
    end
    subgraph Create equations with the data parts
    eq1[A + B + C + D = 6]
    eq2[A + B + C - D = 3]
    eq3[A + B - C - D = 10]
    eq4[ A - B - C - D = -4]
    eq5[ A - B + C + D = 0]
    eq6[ A - B - C + D = 5]
    vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6
    end
    subgraph Disk 1
    eq1 --> |store the unique equation, not the parts|zdb1[A + B + C + D = 6]
    end
    subgraph Disk 2
    eq2 --> |store the unique equation, not the parts|zdb2[A + B + C - D = 3]
    end
    subgraph Disk 3
    eq3 --> |store the unique equation, not the parts|zdb3[A + B - C - D = 10]
    end
    subgraph Disk 4
    eq4 --> |store the unique equation, not the parts|zdb4[A - B - C - D = -4]
    end
    subgraph Disk 5
    eq5 --> |store the unique equation, not the parts|zdb5[ A - B + C + D = 0]
    end
    subgraph Disk 6
    eq6 --> |store the unique equation, not the parts|zdb6[A - B - C + D = 5]
    end
Before Width: | Height: | Size: 950 KiB |
Before Width: | Height: | Size: 78 KiB |
Before Width: | Height: | Size: 801 KiB |
Before Width: | Height: | Size: 315 KiB |
Before Width: | Height: | Size: 238 KiB |
@@ -1,34 +0,0 @@
graph TD
    subgraph Local laptop, computer or server
    user[End User]
    protocol[Storage protocol]
    qsfs[Filesystem on local OS]
    0store[Quantum Safe storage engine]
    end
    subgraph Grid storage - metadata
    etcd1[ETCD-1]
    etcd2[ETCD-2]
    etcd3[ETCD-3]
    end
    subgraph Grid storage - zero proof data
    zdb1[ZDB-1]
    zdb2[ZDB-2]
    zdb3[ZDB-3]
    zdb4[ZDB-4]
    zdb5[ZDB-5]
    zdb6[ZDB-6]
    zdb7[ZDB-7]
    user -.- protocol
    protocol -.- qsfs
    qsfs --- 0store
    0store --- etcd1
    0store --- etcd2
    0store --- etcd3
    0store <-.-> zdb1[ZDB-1]
    0store <-.-> zdb2[ZDB-2]
    0store <-.-> zdb3[ZDB-3]
    0store <-.-> zdb4[ZDB-4]
    0store <-.-> zdb5[ZDB-5]
    0store <-.-> zdb6[ZDB-...]
    0store <-.-> zdb7[ZDB-N]
    end
Before Width: | Height: | Size: 145 KiB |
Before Width: | Height: | Size: 23 KiB |
@@ -1,86 +0,0 @@
# Quantum Safe Storage System for NFT



The owner of the NFT can upload the data using one of our supported interfaces:

- HTTP upload (everything possible on https://nft.storage/ is also possible on our system)
- filesystem

Every person in the world can retrieve the NFT (if allowed), and the data will be verified when doing so. The data is available everywhere in the world using multiple interfaces (IPFS, HTTP(S), ...). Caching happens on a global level. No special software or account on ThreeFold is needed to do this.

The NFT system uses a super reliable storage system underneath which is sustainable for the planet (green) and ultra secure and private. The NFT owner also owns the data.

## Benefits

#### Persistence = owned by the data user (as represented by the digital twin)



The system is not based on a shared-all architecture.

Whoever stores the data has full control over:

- where the data is stored (specific locations)
- the redundancy policy used
- how long the data should be kept
- the CDN policy (where the data should be available and for how long)

#### Reliability

- data cannot be corrupted
- data cannot be lost
- each time data is fetched, the hash (fingerprint) is checked; if there are issues, auto-recovery happens
- all data is encrypted and compressed (unique per storage owner)
- the data owner chooses the level of redundancy

#### Lookup

- multi-URL & storage network support (see the interfaces section further down)
- IPFS, HyperDrive URL schema
- unique DNS schema (with a long key which is globally unique)

#### CDN support (with caching)

Each file (movie, image) stored is available in many places worldwide.

Each file gets a unique URL pointing to the data, which can be retrieved from all locations.

Caching happens on each endpoint.

#### Self-Healing & Auto-Correcting Storage Interface

Any corruption, e.g. bitrot, gets automatically detected and corrected.

In case of a hard disk or storage node crash, the data is automatically expanded again to fit the chosen redundancy policy.

#### Storage Algorithm = uses the Quantum Safe Storage System as base

Not even a quantum computer can hack data as stored on our QSSS.

The QSSS is a super innovative storage system which works on a planetary scale and has many benefits compared to shared and/or replicated storage systems.

It uses forward-looking error-correcting codes inside.

#### Green

Storage uses up to 10x less energy compared to classic replicated systems.

#### Multi Interface

The stored data is available over multiple interfaces at once.

| interface | |
| -------------------------- | ----------------------- |
| IPFS |  |
| HTTP(S) on top of Digital Twin |  |
| syncthing |  |
| filesystem |  |

This allows ultimate flexibility from the end-user perspective.

The object (video, image) can easily be embedded in any website or other representation which supports HTTP.

@@ -1,92 +0,0 @@
# Quantum Safe Storage Algorithm



The Quantum Safe Storage Algorithm is the heart of the storage engine. The storage engine takes the original data objects and creates data part descriptions which it stores over many virtual storage devices (ZDBs).

Data gets stored over multiple ZDBs in such a way that it can never be lost.

Unique features:

- data is always appended and can never be lost
- even a quantum computer cannot decrypt the data
- data is spread over multiple sites; sites can be lost and the data will still be available
- protects against datarot

### Why

Today we produce more data than ever before. We cannot continue making full copies of data to make sure it is stored reliably; this simply does not scale. We need to move from securing the whole dataset to securing all the objects that make up a dataset.

ThreeFold uses space technology to store data (fragments) over multiple devices (physical storage devices in 3Nodes). The solution does not distribute and store parts of an object (file, photo, movie...) but rather describes the parts of an object. This can be visualized by thinking of it as equations.

### Details

Let a, b, c, d... be the parts of an original object. You could create endless unique equations using these parts. A simple example: let's assume we have 3 parts of original objects with the following values:
```
a=1
b=2
c=3
```
(For reference, a part of a real-world object is not a simple number like `1` but a unique digital number describing the part, like its binary code `110101011101011101010111101110111100001010101111011.....`.) With these numbers we could create an endless number of equations:
```
1: a+b+c=6
2: c-b-a=0
3: b-c+a=0
4: 2b+a-c=2
5: 5c-b-a=12
......
```
Mathematically we only need 3 equations to describe the content (= value) of the fragments, but creating more adds reliability. Now store those equations in a distributed fashion (one equation per physical storage device) and forget the original object. We no longer have access to the values of a, b and c; we just remember the locations of all the equations created with the original data fragments. Mathematically we need three equations (any 3 of the total) to recover the original values for a, b and c. So we send a request to retrieve 3 of the many equations, and the first 3 to arrive are good enough to recalculate the original values. Three randomly retrieved equations are:

```
5c-b-a=12
b-c+a=0
2b+a-c=2
```
And this is a mathematical system we can solve:
- First: `b-c+a=0 -> b=c-a`
- Second: `2b+a-c=2 -> c=2b+a-2 -> c=2(c-a)+a-2 -> c=2c-2a+a-2 -> c=a+2`
- Third: `5c-b-a=12 -> 5(a+2)-(c-a)-a=12 -> 5a+10-(a+2)+a-a=12 -> 5a-a-2=2 -> 4a=4 -> a=1`

Now that we know `a=1` we can solve the rest: `c=a+2=3` and `b=c-a=2`. From 3 random equations we have regenerated the original fragments and can now recreate the original object.

The redundancy and reliability in such a system come from creating more equations than strictly needed and storing them all. As shown, any sufficient subset of these equations can recreate the original fragments, so redundancy comes at a much lower overhead.
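The recovery step above can be checked mechanically. The sketch below solves the same three retrieved equations with exact fractions, using plain Gaussian elimination:

```python
from fractions import Fraction

# The three retrieved equations as coefficient rows [a, b, c | rhs]:
#   5c - b - a = 12   ->  -1a - 1b + 5c = 12
#    b - c + a = 0    ->   1a + 1b - 1c = 0
#   2b + a - c = 2    ->   1a + 2b - 1c = 2
rows = [
    [Fraction(-1), Fraction(-1), Fraction(5), Fraction(12)],
    [Fraction(1),  Fraction(1),  Fraction(-1), Fraction(0)],
    [Fraction(1),  Fraction(2),  Fraction(-1), Fraction(2)],
]

def solve(m):
    """Gaussian elimination: recover the original parts from any 3 equations."""
    n = len(m)
    for i in range(n):
        # find a row with a non-zero pivot in column i and normalise it
        p = next(r for r in range(i, n) if m[r][i] != 0)
        m[i], m[p] = m[p], m[i]
        piv = m[i][i]
        m[i] = [x / piv for x in m[i]]
        # eliminate this variable from all other rows
        for r in range(n):
            if r != i:
                f = m[r][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [row[-1] for row in m]

a, b, c = solve(rows)   # recovers the original parts a=1, b=2, c=3
```

Any other choice of three independent equations from the pool would recover the same values, which is exactly why the order of arrival does not matter.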
### Example of 16/4



Each object is fragmented into 16 parts, so we have 16 original fragments for which we need 16 equations to mathematically describe them. Now let's make 20 equations and store them dispersed over 20 devices. To recreate the original object we only need 16 equations: the first 16 that we find and collect allow us to recover the fragments and, in the end, the original object. We can lose any 4 of those 20 equations.

The likelihood of losing 4 independent, dispersed storage devices at the same time is very low. Since we continuously monitor all of the stored equations, we can create additional equations immediately when one goes missing, giving auto-regeneration of lost data and a self-repairing storage system. The overhead in this example is 4 out of 20, a mere **20%**, instead of (up to) **400%**.
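The overhead numbers from the 16/4 example as simple arithmetic:

```python
# 16/4 policy: 16 data fragments are described by 20 stored equations.
data_fragments = 16
stored_equations = 20

# Any 4 of the 20 equations may be lost and the object is still recoverable.
tolerated_losses = stored_equations - data_fragments

# Overhead as counted in the text: 4 extra equations out of 20 stored = 20%,
# versus keeping 4 full replicas of the data = 400% overhead.
dispersed_overhead = tolerated_losses / stored_equations
replica_overhead = 4 * data_fragments / data_fragments
```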
### Content Distribution Policy (10/50)

This system can be used as a backend for content delivery networks.

Imagine a movie being stored on 60 locations, of which we can lose 50 at the same time.

If someone now wants to download the data, the first 10 locations to answer will provide enough of the data parts to allow the data to be rebuilt.

The overhead here is much larger compared to the previous example, but still an order of magnitude lower compared to other CDN systems.

### Datarot

> Datarot cannot happen on this storage system.

Datarot is the fact that data storage degrades over time and becomes unreadable, e.g. on a hard disk. Such silent data corruption would normally pass by unnoticed; the storage system provided by ThreeFold intercepts it.

> See also https://en.wikipedia.org/wiki/Data_degradation
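The detect-and-repair idea behind datarot protection can be sketched as follows: every stored blob carries its fingerprint, the hash is recomputed on every read, and a corrupted copy is rebuilt from a healthy one. This is an illustration of the principle, not the actual ThreeFold implementation:

```python
import hashlib

def fingerprint(blob: bytes) -> str:
    """Hash used to detect silent corruption (bitrot) on read."""
    return hashlib.sha256(blob).hexdigest()

class CheckedStore:
    """Toy store: a blob kept in several copies, verified on every read."""

    def __init__(self, blob: bytes, replicas: int = 2):
        self.digest = fingerprint(blob)
        self.copies = [bytearray(blob) for _ in range(replicas)]

    def read(self) -> bytes:
        for copy in self.copies:
            if fingerprint(bytes(copy)) == self.digest:
                # heal any copies that fail the fingerprint check
                for j in range(len(self.copies)):
                    if fingerprint(bytes(self.copies[j])) != self.digest:
                        self.copies[j] = bytearray(copy)
                return bytes(copy)
        raise IOError("all copies corrupted")

store = CheckedStore(b"important data")
store.copies[0][0] ^= 0xFF   # simulate silent bitrot on one copy
data = store.read()          # corruption is detected and repaired on read
```

In the real system the "healthy copy" is not a replica but a regenerated equation, yet the read-verify-repair loop is the same.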
### Zero-Knowledge Proof

The quantum safe storage system is zero-knowledge proof compliant. The storage system is split into 2 components: the actual storage devices used to store the data (ZDBs) and the Quantum Safe Storage engine.



The zero-knowledge proof compliance comes from the fact that all the physical storage nodes (3Nodes) can prove that they store a valid part of the data that the quantum safe storage engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all of the QSSE storage devices have a valid part of the original information. The storage devices, however, have no idea what the original stored data is, as they only have a part (description) of the original data and have no access to the original data parts or the complete original data objects.
@@ -1,82 +0,0 @@


# Quantum Safe Filesystem

A redundant filesystem which can store PBs (millions of gigabytes) of information.

Unique features:

- Unlimited scalable (many petabytes) filesystem
- Quantum safe:
  - On the TFGrid, no farmer knows what the data is about
  - Even a quantum computer cannot decrypt it
- Data can't be lost:
  - Protection against datarot; data will auto-repair
  - Data is kept forever (data does not get deleted)
- Data is dispersed over multiple sites; sites can go down without data being lost
- Up to 10x more efficient than storing on classic cloud storage systems
- Can be mounted as a filesystem on any OS or any deployment system (OSX, Linux, Windows, Docker, Kubernetes, TFGrid, ...)
- Compatible with ± all data workloads (but not high-performance data-driven workloads like a database)
- Self-healing: when a node or disk is lost, the storage system can get back to the original redundancy level
- Helps with compliance with regulations like GDPR (as the hosting facility has no view on what is stored: the information is encrypted and incomplete)
- Hybrid: can be installed on-site, public, private, ...
- Read-write caching on the encoding node (the front end)



## Mount Any Files in your Storage Infrastructure

The QSFS is a mechanism to mount any file system (in any format) on the grid, in a quantum-secure way.

This storage layer relies on 3 primitives of the ThreeFold technology:

- [0-db](https://github.com/threefoldtech/0-db) is the storage engine.
  It is an always-append database which stores objects in an immutable format. It keeps history out-of-the-box and offers good performance on disk, low overhead, an easy data structure and easy backup (linear copy and immutable files).

- [0-stor-v2](https://github.com/threefoldtech/0-stor_v2) is used to disperse the data into chunks by performing 'forward-looking error-correcting code' (FLECC) on it and to send the fragments to safe locations.
  It takes files in any format as input, encrypts the file with AES based on a user-defined key, then FLECC-encodes the file and spreads out the result to multiple 0-DBs. The number of generated chunks is configurable, making the scheme more or less robust against data loss through unavailable fragments. Even if some 0-DBs are unreachable, you can still retrieve the original data, and missing 0-DBs can even be rebuilt for full consistency. It is an essential element of the operational backup.

- [0-db-fs](https://github.com/threefoldtech/0-db-fs) is the filesystem driver which uses 0-DB as its primary storage engine. It manages the storage of directories and metadata in one dedicated namespace and file payloads in another.

Together they form a storage layer that is quantum secure: even the most powerful computer can't hack the system, because no single node contains all of the information needed to reconstruct the data.
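The dispersal principle behind 0-stor can be sketched with the simplest possible redundancy scheme: split the data into chunks, add one parity chunk, spread the chunks over different backends, and rebuild any single missing chunk by XOR. The real 0-stor uses AES encryption and stronger erasure codes; this toy version only illustrates the idea:

```python
def split(data: bytes, k: int):
    """Split data into k equal-sized chunks, zero-padding the last one."""
    size = -(-len(data) // k)  # ceiling division
    return [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 4):
    """Return k data chunks plus one XOR parity chunk (k+1 backends)."""
    chunks = split(data, k)
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor(parity, c)
    return chunks + [parity]

def rebuild(chunks, missing: int):
    """Recover the chunk at index `missing` by XOR-ing all the others."""
    others = [c for i, c in enumerate(chunks) if i != missing]
    out = others[0]
    for c in others[1:]:
        out = xor(out, c)
    return out

stored = encode(b"quantum safe!!!!", k=4)   # 16 bytes -> 4 data + 1 parity chunk
lost = stored[2]                            # pretend one backend went down
recovered = rebuild(stored, missing=2)      # rebuilt from the remaining chunks
```

With real erasure codes the same construction tolerates several simultaneous losses instead of one, at a configurable overhead, which is what the "number of generated chunks is configurable" remark refers to.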


This concept scales forever, and you can bring any file system on top of it:

- S3 storage
- any backup system
- an FTP server
- IPFS and Hypercore distributed file-sharing protocols
- ...



## Architecture

By using our filesystem inside a virtual machine or Kubernetes, the TFGrid user can deploy any storage application on top, e.g. Minio for S3 storage or OwnCloud as an online file server.



Any storage workload can be deployed on top of the zstor.

```mermaid
graph TD
    subgraph Data Ingress and Egress
    qss[Quantum Safe Storage Engine]
    end
    subgraph Physical Data storage
    st1[Virtual Storage Device 1]
    st2[Virtual Storage Device 2]
    st3[Virtual Storage Device 3]
    st4[Virtual Storage Device 4]
    st5[Virtual Storage Device 5]
    st6[...]
    qss -.-> st1 & st2 & st3 & st4 & st5 & st6
    end
```
@@ -1,10 +0,0 @@

# Zero-Knowledge Proof Storage System

The quantum safe storage system is zero-knowledge proof compliant. The storage system is split into 2 components: the actual storage devices used to store the data (ZDBs) and the Quantum Safe Storage engine.



The zero-knowledge proof compliance comes from the fact that all the physical storage nodes (3Nodes) can prove that they store a valid part of the data that the quantum safe storage engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all of the QSSE storage devices have a valid part of the original information. The storage devices, however, have no idea what the original stored data is, as they only have a part (description) of the original data and have no access to the original data parts or the complete original data objects.
@@ -1,11 +0,0 @@
<!--  -->

# Quantum Safe Storage System

Our storage architecture follows the true peer-to-peer design of the TF Grid. Any participating node only stores small, incomplete parts of objects (files, photos, movies, databases...) by offering a slice of its present (local) storage devices. Managing the storage and retrieval of all of these distributed fragments is done by software that creates development or end-user interfaces for this storage algorithm. We call this '**dispersed storage**'.



Peer-to-peer provides the unique proposition of selecting storage providers that match your application and service or business criteria. For example, you might be looking to store data for your application in a certain geographic area (for governance and compliance reasons). You might also want to use different "storage policies" for different types of data, for example live versus archived data. All of these use cases are possible with this storage architecture, and can be built by using the same building blocks produced by farmers and consumed by developers or end-users.
@@ -1,14 +0,0 @@
# S3 Service

If you would like an S3 interface, you can deploy one on top of our eVDC; it works very well together with our [quantumsafe_filesystem](qss_filesystem.md).

A good open-source solution delivering an S3 interface is [min.io](https://min.io/).

Thanks to our quantum safe storage layer, you can build fast, robust and reliable storage and archiving solutions.

A typical setup would look like:



> TODO: link to manual on cloud how to deploy minio, using helm (3.0 release)