commit e12acb690e
parent f9674a74b9
Date: 2024-03-18 14:28:08 +02:00
876 changed files with 788 additions and 546 deletions


@@ -1,8 +0,0 @@
- [Technology](technology/technology.md)
- [Layers](technology/layers/technology_layers.md)
- [Storage](technology/qsss/qsss_home.md)
- [Quantum Safe Storage](technology/qsss/qsss_home.md)
- [Quantum Safe Filesystem](technology/qsss/qss_filesystem.md)
- [Quantum Safe Algo](technology/qsss/qss_algorithm.md)
- [S3 Storage System](technology/qsss/s3_interface.md)


@@ -1,2 +0,0 @@
qss_scaleout.png
qsss_intro.png


@@ -1,9 +0,0 @@
#!/bin/bash
# Render every Mermaid (.mmd) diagram in this directory to a PNG.
for name in ./*.mmd
do
    output="$(basename "$name" .mmd).png"
    echo "$output"
    mmdc -i "$name" -o "$output" -w 4096 -H 2160 -b transparent
    echo "$name"
done


@@ -1,13 +0,0 @@
graph TD
subgraph Data Origin
file[Large chunk of data = part_1part_2part_3part_4]
parta[part_1]
partb[part_2]
partc[part_3]
partd[part_4]
file -.- |split part_1|parta
file -.- |split part_2|partb
file -.- |split part_3|partc
file -.- |split part_4|partd
parta --> partb --> partc --> partd
end


@@ -1,20 +0,0 @@
graph TD
subgraph Data Substitution
parta[part_1]
partb[part_2]
partc[part_3]
partd[part_4]
parta -.-> vara[ A = part_1]
partb -.-> varb[ B = part_2]
partc -.-> varc[ C = part_3]
partd -.-> vard[ D = part_4]
end
subgraph Create equations with the data parts
eq1[A + B + C + D = 6]
eq2[A + B + C - D = 3]
eq3[A + B - C - D = 10]
eq4[ A - B - C - D = -4]
eq5[ A - B + C + D = -8]
eq6[ A - B - C + D = -1]
vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6
end

Binary image files not shown (10 KiB, 192 KiB, 154 KiB).


@@ -1,44 +0,0 @@
graph TD
subgraph Data Origin
file[Large chunk of data = part_1part_2part_3part_4]
parta[part_1]
partb[part_2]
partc[part_3]
partd[part_4]
file -.- |split part_1|parta
file -.- |split part_2|partb
file -.- |split part_3|partc
file -.- |split part_4|partd
parta --> partb --> partc --> partd
parta -.-> vara[ A = part_1]
partb -.-> varb[ B = part_2]
partc -.-> varc[ C = part_3]
partd -.-> vard[ D = part_4]
end
subgraph Create equations with the data parts
eq1[A + B + C + D = 6]
eq2[A + B + C - D = 3]
eq3[A + B - C - D = 10]
eq4[ A - B - C - D = -4]
eq5[ A - B + C + D = -8]
eq6[ A - B - C + D = -1]
vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6
end
subgraph Disk 1
eq1 --> |store the unique equation, not the parts|zdb1[A + B + C + D = 6]
end
subgraph Disk 2
eq2 --> |store the unique equation, not the parts|zdb2[A + B + C - D = 3]
end
subgraph Disk 3
eq3 --> |store the unique equation, not the parts|zdb3[A + B - C - D = 10]
end
subgraph Disk 4
eq4 --> |store the unique equation, not the parts|zdb4[A - B - C - D = -4]
end
subgraph Disk 5
eq5 --> |store the unique equation, not the parts|zdb5[ A - B + C + D = -8]
end
subgraph Disk 6
eq6 --> |store the unique equation, not the parts|zdb6[A - B - C + D = -1]
end

Binary image files not shown (950 KiB, 78 KiB, 801 KiB, 315 KiB, 238 KiB).


@@ -1,34 +0,0 @@
graph TD
subgraph Local laptop, computer or server
user[End User]
protocol[Storage protocol]
qsfs[Filesystem on local OS]
0store[Quantum Safe storage engine]
end
subgraph Grid storage - metadata
etcd1[ETCD-1]
etcd2[ETCD-2]
etcd3[ETCD-3]
end
subgraph Grid storage - zero-knowledge data
zdb1[ZDB-1]
zdb2[ZDB-2]
zdb3[ZDB-3]
zdb4[ZDB-4]
zdb5[ZDB-5]
zdb6[ZDB-...]
zdb7[ZDB-N]
user -.- protocol
protocol -.- qsfs
qsfs --- 0store
0store --- etcd1
0store --- etcd2
0store --- etcd3
0store <-.-> zdb1
0store <-.-> zdb2
0store <-.-> zdb3
0store <-.-> zdb4
0store <-.-> zdb5
0store <-.-> zdb6
0store <-.-> zdb7
end

Binary image files not shown (145 KiB, 23 KiB).


@@ -1,86 +0,0 @@
# Quantum Safe Storage System for NFT
![](img/nft_architecture.jpg)
The owner of the NFT can upload the data using one of our supported interfaces:
- HTTP upload (everything possible on https://nft.storage/ is also possible on our system)
- filesystem
Everyone in the world can retrieve the NFT (if allowed), and the data is verified on retrieval. The data is available everywhere in the world over multiple interfaces (IPFS, HTTP(S), ...). Caching happens at a global level. No special software or ThreeFold account is needed for this.
The NFT system uses a highly reliable storage system underneath that is sustainable for the planet (green) as well as ultra secure and private. The NFT owner also owns the data.
## Benefits
#### Persistence = owned by the data user (as represented by digital twin)
![](img/nft_storage.jpg)
It is not based on a shared-all architecture.
Whoever stores the data has full control over:
- where data is stored (specific locations)
- the redundancy policy used
- how long the data should be kept
- the CDN policy (where data should be available, and for how long)
#### Reliability
- data cannot be corrupted
- data cannot be lost
- each time data is fetched, its hash (fingerprint) is checked; on a mismatch, auto-recovery kicks in (see the sketch below)
- all data is encrypted and compressed (uniquely per storage owner)
- the data owner chooses the level of redundancy
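A minimal sketch of this fetch-verify-recover loop in Python; the function `rebuild_fragment` and the idea of a stored per-fragment fingerprint are illustrative assumptions, not the actual QSSS API:
```python
import hashlib

def rebuild_fragment(fragment_id: str) -> bytes:
    # Hypothetical recovery hook: a real engine would re-solve the
    # redundant equations stored on other devices to regenerate the data.
    raise NotImplementedError

def verify_or_recover(fragment_id: str, data: bytes, fingerprint: str) -> bytes:
    """Return the data if its hash matches the stored fingerprint, else recover it."""
    if hashlib.sha256(data).hexdigest() == fingerprint:
        return data
    return rebuild_fragment(fragment_id)
```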
#### Lookup
- multi-URL & storage network support (see the interfaces section below)
- IPFS and HyperDrive URL schemas
- a unique DNS schema (with a long, globally unique key)
#### CDN support (with caching)
Each stored file (movie, image) is available in many places worldwide.
Each file gets a unique URL pointing to the data, which can be retrieved at all locations.
Caching happens on each endpoint.
#### Self Healing & Auto Correcting Storage Interface
Any corruption, e.g. bit rot, is automatically detected and corrected.
In case of an HD crash or a storage-node crash, the data is automatically expanded again to meet the chosen redundancy policy.
#### Storage Algorithm = Uses Quantum Safe Storage System as base
Not even a quantum computer can hack data stored on our QSSS.
The QSSS is a highly innovative storage system that works on a planetary scale and has many benefits compared to shared and/or replicated storage systems.
It uses forward-looking error-correcting codes internally.
#### Green
Storage uses up to 10x less energy compared to classic replicated systems.
#### Multi Interface
The stored data is available over multiple interfaces at once.
| Interface | |
| -------------------------- | ----------------------- |
| IPFS | ![](img/ipfs.jpg) |
| http(s) on top of Digital Twin | ![](img/http.jpg) |
| syncthing | ![](img/syncthing.jpg) |
| filesystem | ![](img/filesystem.jpg) |
This allows ultimate flexibility from the end user's perspective.
The object (video, image) can easily be embedded in any website or other representation that supports HTTP.

View File

@@ -1,92 +0,0 @@
# Quantum Safe Storage Algorithm
![](img/qss_scaleout.png)
The Quantum Safe Storage Algorithm is the heart of the storage engine. The storage engine takes the original data objects and creates data-part descriptions which it stores over many virtual storage devices (ZDBs).
Data gets stored over multiple ZDBs in such a way that it can never be lost.
Unique features:
- data is append-only and can never be lost
- even a quantum computer cannot decrypt the data
- data is spread over multiple sites; sites can be lost and the data will still be available
- protects against data rot
### Why
Today we produce more data than ever before. We cannot keep making full copies of data to store it reliably; that simply does not scale. We need to move from securing the whole dataset to securing all the objects that make up a dataset.
ThreeFold uses space technology to store data fragments over multiple devices (physical storage devices in 3Nodes). The solution does not distribute and store the parts of an object (file, photo, movie...) themselves, but rather descriptions of those parts. This can be visualized by thinking of it as equations.
### Details
Let a, b, c, d, ... be the parts of the original object. You can create endless unique equations using these parts. A simple example: let's assume we have 3 parts of an original object with the following values:
```
a=1
b=2
c=3
```
(For reference: a part of a real-world object is not a simple number like `1` but a unique digital number describing the part, such as its binary code `110101011101011101010111101110111100001010101111011.....`.) With these numbers we can create an endless number of equations:
```
1: a+b+c=6
2: c-b-a=0
3: b-c+a=0
4: 2b+a-c=2
5: 5c-b-a=12
......
```
Mathematically we only need 3 equations to describe the content (= the values) of the fragments, but creating more adds reliability. Now store those equations in a distributed fashion (one equation per physical storage device) and forget the original object. We no longer have access to the values of a, b and c; we just remember the locations of all the equations created from the original data fragments. Mathematically, any 3 of the stored equations are enough to recover the original values of a, b and c. So we request 3 of the many equations, and the first 3 to arrive are good enough to recalculate the original values. Say the three randomly retrieved equations are:
```
5c-b-a=12
b-c+a=0
2b+a-c=2
```
And this is a mathematical system we can solve:
- First: `b-c+a=0 -> b=c-a`
- Second: `2b+a-c=2 -> 2(c-a)+a-c=2 -> c-a=2 -> c=a+2`
- Third: `5c-b-a=12 -> 5(a+2)-(c-a)-a=12 -> 5a+10-2-a=12 -> 4a=4 -> a=1`
Now that we know `a=1` we can solve the rest: `c=a+2=3` and `b=c-a=2`. From 3 random equations we have regenerated the original fragments and can now recreate the original object.
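This recovery step is nothing more than solving a small linear system. A quick illustrative sketch in Python (using numpy), with the three retrieved equations written as rows of a coefficient matrix; the real system of course works on large binary fragments rather than small integers:
```python
import numpy as np

# The three retrieved equations, as coefficients of (a, b, c):
#   5c - b - a = 12
#    b - c + a = 0
#   2b + a - c = 2
A = np.array([[-1, -1, 5],
              [ 1,  1, -1],
              [ 1,  2, -1]])
rhs = np.array([12, 0, 2])

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)  # 1.0 2.0 3.0 -> the original fragment values
```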
The redundancy and reliability of such a system comes from creating more equations than strictly needed and storing them all. As shown, any sufficient subset of these equations, in any random order, can recreate the original fragments; redundancy therefore comes at a much lower overhead.
### Example of 16/4
![](img/quantumsafe_storage_algo.jpg)
Each object is fragmented into 16 parts, so we have 16 original fragments for which we need 16 equations to describe them mathematically. Now let's make 20 equations and store them dispersed over 20 devices. To recreate the original object we only need 16 equations: the first 16 that we find and collect allow us to recover the fragments and, in the end, the original object. We can lose any 4 of those original 20 equations.
The likelihood of losing 4 independent, dispersed storage devices at the same time is very low. Since all of the stored equations are continuously monitored, additional equations can be created immediately when one of them goes missing, giving auto-regeneration of lost data and a self-repairing storage system. The overhead in this example is 4 out of 20, a mere **20%**, instead of (up to) **400%**. A toy simulation of this scheme is sketched below.
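A toy simulation of the 16/4 scheme in Python, assuming random real-valued equations stand in for the production erasure code (real systems use codes over finite fields, such as Reed-Solomon, instead of floating-point arithmetic):
```python
import numpy as np

rng = np.random.default_rng(42)

parts = rng.integers(0, 256, size=16).astype(float)  # 16 original fragments
coeffs = rng.normal(size=(20, 16))                   # 20 random equations
disks = coeffs @ parts                               # one stored value per disk

# Lose any 4 of the 20 disks; 16 survivors remain:
survivors = rng.choice(20, size=16, replace=False)

# Any 16 surviving equations suffice to recover all 16 parts:
recovered = np.linalg.solve(coeffs[survivors], disks[survivors])
assert np.allclose(recovered, parts)
```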
### Content Distribution Policy (10/50)
This system can be used as a backend for content delivery networks.
Imagine a movie stored in 60 locations, of which we can lose 50 at the same time.
If someone now wants to download the data, the first 10 locations to answer will provide enough of the data parts to rebuild the data.
The overhead here is much higher than in the previous example, but still an order of magnitude lower than in other CDN systems.
### Datarot
> Datarot cannot happen on this storage system.
Data rot is the degradation of stored data over time to the point where it becomes unreadable, e.g. on a hard disk.
The storage system provided by ThreeFold intercepts this silent data corruption, so that it cannot pass unnoticed.
> see also https://en.wikipedia.org/wiki/Data_degradation
### Zero Knowledge Proof
The quantum safe storage system is zero-knowledge proof compliant. The storage system is split into 2 components: the actual storage devices used to store the data (ZDBs) and the Quantum Safe Storage Engine.
![](img/qss_system.jpg)
The zero-knowledge proof compliancy comes from the fact that all the physical storage nodes (3Nodes) can prove that they store a valid part of what the quantum safe storage engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all the QSSE storage devices hold a valid part of the original information. The storage devices, however, have no idea what the original stored data is, as they only hold a part (a description) of the original data and have no access to the original data parts or the complete original data objects.

View File

@@ -1,82 +0,0 @@
![](img/qsss_intro.png)
# Quantum Safe Filesystem
A redundant filesystem that can store PBs (millions of gigabytes) of information.
Unique features:
- Unlimited scalability (many petabytes)
- Quantum Safe:
- On the TFGrid, no farmer knows what the data is about
- Even a quantum computer cannot decrypt
- Data can't be lost
- Protection against data rot; data will auto-repair
- Data is kept forever (data does not get deleted)
- Data is dispersed over multiple sites
- Sites can go down, data not lost
- Up to 10x more efficient than storing on classic storage cloud systems
- Can be mounted as filesystem on any OS or any deployment system (OSX, Linux, Windows, Docker, Kubernetes, TFGrid, ...)
- Compatible with ± all data workloads (though not with high-performance data-driven workloads like databases)
- Self-healing: when a node or disk is lost, the storage system can get back to the original redundancy level
- Helps with compliance to regulations like GDPR (as the hosting facility has no view on what is stored, information is encrypted and incomplete)
- Hybrid: can be installed onsite, public, private, ...
- Read-write caching on encoding node (the front end)
![](img/planet_fs.jpg)
## Mount Any Files in your Storage Infrastructure
The QSFS is a mechanism to mount any file system (in any format) on the grid, in a quantum-secure way.
This storage layer relies on 3 primitives of the ThreeFold technology:
- [0-db](https://github.com/threefoldtech/0-db) is the storage engine.
It is an append-only database, storing objects in an immutable format. It keeps history out of the box, offers good performance on disk, low overhead, an easy data structure and easy backups (linear copy and immutable files).
- [0-stor-v2](https://github.com/threefoldtech/0-stor_v2) is used to disperse the data into chunks by performing 'forward-looking error-correcting code' (FLECC) on it and to send the fragments to safe locations.
It takes files in any format as input, encrypts the file with AES using a user-defined key, then FLECC-encodes the file and spreads the result
over multiple 0-DBs. The number of generated chunks is configurable, making the result more or less robust against data loss through unavailable fragments. Even if some 0-DBs are unreachable, you can still retrieve the original data, and missing 0-DBs can even be rebuilt for full consistency. It is an essential element of the operational backup.
- [0-db-fs](https://github.com/threefoldtech/0-db-fs) is the filesystem driver which uses 0-DB as a primary storage engine. It manages the storage of directories and metadata in a dedicated namespace and file payloads in another dedicated namespace.
Together they form a storage layer that is quantum secure: even the most powerful computer can't hack the system because no single node contains all of the information needed to reconstruct the data.
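A conceptual sketch of that encrypt-then-encode pipeline in Python (requires the `cryptography` package). `Fernet` stands in for the AES encryption and a single XOR parity chunk stands in for FLECC; the real 0-stor-v2 pipeline is considerably more sophisticated:
```python
from functools import reduce
from cryptography.fernet import Fernet

def encode_for_storage(data: bytes, key: bytes, n_chunks: int = 4) -> list[bytes]:
    """Encrypt, split into chunks, and add one XOR parity chunk (toy stand-in for FLECC)."""
    encrypted = Fernet(key).encrypt(data)
    size = -(-len(encrypted) // n_chunks)                # ceiling division
    encrypted = encrypted.ljust(size * n_chunks, b"\0")  # pad (strip b"\0" before decrypting)
    chunks = [encrypted[i * size:(i + 1) * size] for i in range(n_chunks)]
    parity = bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*chunks))
    return chunks + [parity]  # each chunk would be sent to a different 0-DB

key = Fernet.generate_key()
print(len(encode_for_storage(b"some file contents", key)), "chunks to disperse")
```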
![](img/quantum_safe_storage.jpg)
This concept scales forever, and you can bring any file system on top of it:
- S3 storage
- any backup system
- an ftp-server
- IPFS and Hypercore distributed file sharing protocols
- ...
![](img/quantum_safe_storage_scale.jpg)
## Architecture
By using our filesystem inside a Virtual Machine or Kubernetes, the TFGrid user can deploy any storage application on top, e.g. Minio for S3 storage or OwnCloud as an online fileserver.
![](img/qsstorage_architecture.jpg)
Any storage workload can be deployed on top of zstor.
```mermaid
graph TD
subgraph Data Ingress and Egress
qss[Quantum Safe Storage Engine]
end
subgraph Physical Data storage
st1[Virtual Storage Device 1]
st2[Virtual Storage Device 2]
st3[Virtual Storage Device 3]
st4[Virtual Storage Device 4]
st5[Virtual Storage Device 5]
st6[...]
qss -.-> st1 & st2 & st3 & st4 & st5 & st6
end
```


@@ -1,10 +0,0 @@
# Zero-Knowledge Proof Storage System
The quantum safe storage system is zero-knowledge proof compliant. The storage system is split into 2 components: the actual storage devices used to store the data (ZDBs) and the Quantum Safe Storage Engine.
![](img/qss_system.jpg)
The zero-knowledge proof compliancy comes from the fact that all the physical storage nodes (3Nodes) can prove that they store a valid part of what the quantum safe storage engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all the QSSE storage devices hold a valid part of the original information. The storage devices, however, have no idea what the original stored data is, as they only hold a part (a description) of the original data and have no access to the original data parts or the complete original data objects.


@@ -1,11 +0,0 @@
<!-- ![](img/qsss_intro_.jpg) -->
![](img/qsss_intro.png)
# Quantum Safe Storage System
Our storage architecture follows the true peer-to-peer design of the TF Grid. Any participating node only stores small, incomplete parts of objects (files, photos, movies, databases...) by offering a slice of its (local) storage devices. The storage and retrieval of all these distributed fragments is managed by software that provides developer and end-user interfaces for this storage algorithm. We call this '**dispersed storage**'.
![](img/qsss_intro_0_.jpg)
Peer-to-peer provides the unique proposition of selecting storage providers that match your application and service or business criteria. For example, you might want to store data for your application in a certain geographic area (for governance and compliance reasons). You might also want to use different 'storage policies' for different types of data, for example live versus archived data. All of these use cases are possible with this storage architecture, and can be built using the same building blocks produced by farmers and consumed by developers or end-users.


@@ -1,14 +0,0 @@
# S3 Service
If you would like an S3 interface, you can deploy one on top of our eVDC; it works very well together with our [quantumsafe_filesystem](qss_filesystem.md).
A good open-source solution delivering an S3 interface is [min.io](https://min.io/).
Thanks to our quantum safe storage layer, you can build fast, robust and reliable storage and archiving solutions.
A typical setup would look like this:
![](img/storage_architecture_1.jpg)
> TODO: link to manual on cloud how to deploy minio, using helm (3.0 release)
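Once a Minio (or any S3-compatible) endpoint runs on the eVDC, objects can be stored with any standard S3 client. A minimal sketch with the Python `minio` SDK; the endpoint and credentials below are placeholders for your own deployment's values:
```python
from minio import Minio

# Placeholder endpoint and credentials for your own Minio deployment.
client = Minio(
    "minio.example.com",
    access_key="YOUR_ACCESS_KEY",
    secret_key="YOUR_SECRET_KEY",
    secure=True,
)

if not client.bucket_exists("backups"):
    client.make_bucket("backups")

# The object ends up on the quantum safe storage layer underneath.
client.fput_object("backups", "archive.tar.gz", "/tmp/archive.tar.gz")
```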