merge main_commsteam to main with fixed conflicts

2024-04-10 18:54:49 +00:00
45 changed files with 463 additions and 398 deletions

(Four binary image files added, not shown: 170 KiB, 137 KiB, 293 KiB and 138 KiB.)

> TODO: need to upgrade image, also digital twin needs to be named '3bot'
The owner of the NFT can upload the data using one of our supported interfaces:
- HTTP upload (everything possible on https://nft.storage/ is also possible on our system)
- Filesystem
Anyone in the world can retrieve the NFT (if allowed) and the data will be verified when doing so. The data is available anywhere in the world using multiple interfaces again (IPFS, HTTP(S) etc.). Caching happens on a global level. No special software or account on ThreeFold is needed to do this.
The NFT system operates on top of a very reliable storage system which is sustainable for the planet and ultra secure and private. The NFT owner also owns the data.
## The Benefits
#### Persistence = owned by the data user (as represented by digital twin)
![](img/nft_storage.jpg)
The system is not based on a shared-all architecture.
Whoever stores the data has full control over:
- Where data is stored (specific locations)
- The redundancy policy which is used
- How long the data is kept
- CDN policy (where the data is available and for how long)
#### Reliability
- Data cannot be corrupted
- Data cannot be lost
- Each time data is fetched back the hash (fingerprint) is checked. If there are any issues then autorecovery occurs
- All data is encrypted and compressed (unique per storage owner)
- Data owner chooses the level of redundancy
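The verify-on-fetch behaviour can be sketched with a content hash check. A minimal sketch, assuming a hypothetical store/fetch API (not the real interface):

```python
import hashlib

def store(data: bytes) -> dict:
    """Store data together with its fingerprint (hash)."""
    return {"data": data, "fingerprint": hashlib.sha256(data).hexdigest()}

def fetch(record: dict) -> bytes:
    """Return the data only if the fingerprint still matches; otherwise
    signal that autorecovery from redundant fragments is needed."""
    if hashlib.sha256(record["data"]).hexdigest() != record["fingerprint"]:
        raise IOError("corruption detected, triggering autorecovery")
    return record["data"]

rec = store(b"movie frame 42")
assert fetch(rec) == b"movie frame 42"

rec["data"] = b"movie frame 42, bitrotted"    # simulate silent corruption
try:
    fetch(rec)
    recovered = False
except IOError:
    recovered = True    # the real system would now fetch a fresh fragment
```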
#### Lookup
- Multi URL & storage network support (see more in the interfaces section)
- IPFS, HyperDrive URL schema
- Unique DNS schema (with long key which is globally unique)
#### CDN Support
Each file (movie, image etc.) stored is available in many locations worldwide.
Each file gets a unique URL pointing to the data which can be retrieved from all these locations.
Caching happens at each endpoint.
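One way such a unique, location-independent URL can be formed is from a hash of the content itself. A hypothetical sketch (the gateway host and schema are made up, not the system's real DNS schema):

```python
import hashlib

def object_url(data: bytes, gateway: str = "https://gateway.example") -> str:
    """Derive a globally unique URL from the content hash; any endpoint
    holding the data (or a cached copy) can serve the same URL."""
    key = hashlib.sha256(data).hexdigest()   # long, globally unique key
    return f"{gateway}/{key}"

url = object_url(b"<movie bytes>")
```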
#### Self Healing & Auto Correcting Storage Interface
Any corruption e.g. bitrot gets automatically detected and corrected.
In case of a HD crash or storage node crash the data will automatically be expanded again to fit the chosen redundancy policy.
#### The Storage Algorithm Uses the Quantum Safe Storage System As Its Base
Not even a quantum computer can hack data stored on our QSSS.
The QSSS is a super innovative storage system which works on planetary scale and has many benefits compared to shared and/or replicated storage systems.
Storage uses up to 10x less energy compared to classic replicated systems.
The stored data is available over multiple interfaces at once.
| Interface | |
| -------------------------- | ----------------------- |
| IPFS | ![](img/ipfs.jpg) |
| HTTP(S) on top of Digital Twin | ![](img/http.jpg) |
| Syncthing | ![](img/syncthing.jpg) |
| Filesystem | ![](img/filesystem.jpg) |
This allows ultimate flexibility from the end user perspective.
The object (video, image etc.) can easily be embedded in any website or other representation which supports HTTP.


# Quantum Safe Storage Algorithm
The Quantum Safe Storage Algorithm is the heart of the storage engine. The storage engine takes the original data objects and creates data part descriptions that it stores over many virtual storage devices (ZDBs).
Data gets stored over multiple ZDBs in such a way that it can never be lost.
Unique features:
- Data is append-only and can never be lost
- Even a quantum computer cannot decrypt the data
- Data is spread over multiple sites. If these sites are lost the data will still be available
- Protects from datarot
## The Problem
Today we produce more data than ever before. We cannot continue to make full copies of data to make sure it is stored reliably. This will simply not scale. We need to move from securing the whole dataset to securing all the objects that make up a dataset.
ThreeFold is using space technology to store data fragments over multiple devices (physical storage devices in TFNodes). The solution does not distribute and store parts of an object (file, photo, movie etc.) but describes the part of an object. This can be visualized by thinking of it as equations.
## How Data Is Stored Today
![alt text](storage_today.png)
In most distributed systems, as used on the Internet or in blockchain today, the data will get replicated (sometimes after sharding, which means distributed based on the content of the file and spread out over the world).
This leads to a lot of overhead and minimal control over where the data is.
In well-optimized systems the overhead will be 400%, but in some it can be orders of magnitude higher to reach a reasonable redundancy level.
## The Quantum Safe Storage System Works Differently
![alt text](qsss_overview.png)
ThreeFold has developed a new storage algorithm which is more efficient, ultra reliable and gives you full control over where your data is stored.
ThreeFold's approach is different. Let's try to visualize this new approach with a simple analogy using equations.
Let a, b, c, d... be the parts of the original object. You could create endless unique equations using these parts. A simple example: let's assume we have 3 parts of original objects that have the following values:
```
a=1
b=2
c=3
```
(and for reference the part of the real-world objects is not a simple number like `1` but a unique digital number describing the part, like the binary code for it `110101011101011101010111101110111100001010101111011.....`).
With these numbers we could create endless amounts of equations:
```
...
4: 2b+a-c=2
5: 5c-b-a=12
etc.
```
Mathematically we only need 3 to describe the content (value) of the fragments. But creating more adds reliability. Now store those equations distributed (one equation per physical storage device) and forget the original object. So we no longer have access to the values of a, b, c and we just remember the locations of all the equations created with the original data fragments.
Mathematically we need three equations (any 3 of the total) to recover the original values for a, b and c. So we request 3 of the many equations, and the first 3 to arrive are enough to recalculate the original values. Three randomly retrieved equations are:
And this is a mathematical system we could solve:
Now that we know `a=1` we could solve the rest `c=a+2=3` and `b=c-a=2`. And we have from 3 random equations regenerated the original fragments and could now recreate the original object.
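The walk-through above can be sketched in code. This is a toy model of the analogy only, using random linear equations and exact rational arithmetic; the production system uses an erasure-coding scheme over thousands of parts, not this literal construction:

```python
import random
from fractions import Fraction

def make_equations(values, count):
    """Each stored 'equation' is (coefficients, result) with
    sum(coeff * value) == result, like 2b + a - c = 2."""
    eqs = []
    for _ in range(count):
        coeffs = [random.randint(-5, 5) for _ in values]
        eqs.append((coeffs, sum(c * v for c, v in zip(coeffs, values))))
    return eqs

def recover(equations, n):
    """Take equations in arrival order, keep the first n independent
    ones, and solve them by Gauss-Jordan elimination over rationals."""
    basis = []                                   # list of (pivot_col, row)
    for coeffs, total in equations:
        row = [Fraction(c) for c in coeffs] + [Fraction(total)]
        for p, b in basis:                       # reduce against known pivots
            if row[p] != 0:
                f = row[p]
                row = [x - f * y for x, y in zip(row, b)]
        p = next((i for i in range(n) if row[i] != 0), None)
        if p is None:
            continue                             # dependent equation, skip it
        row = [x / row[p] for x in row]          # normalize the pivot to 1
        basis = [(q, [x - b[p] * y for x, y in zip(b, row)]) for q, b in basis]
        basis.append((p, row))
        if len(basis) == n:
            break                                # the first n useful ones suffice
    values = [None] * n
    for p, b in basis:
        values[p] = b[n]
    return values

random.seed(42)
fragments = [1, 2, 3]                    # a, b, c from the text
stored = make_equations(fragments, 20)   # create and store 20 equations
random.shuffle(stored)                   # forget the original; any order works
assert recover(stored, 3) == fragments
```

Losing stored equations only matters once fewer than 3 independent ones remain; with 20 stored, many can be lost before recovery fails.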
The redundancy and reliability in this system results from creating equations (more than needed) and storing them. As shown these equations in any random order can recreate the original fragments and therefore redundancy comes in at a much lower overhead.
In our system we don't do this with 3 parts but with thousands.
### Example of 16/4
![](img/quantumsafe_storage_algo.jpg)
Each object is fragmented into 16 parts. So we have 16 original fragments for which we need 16 equations to mathematically describe them. Now let's make 20 equations and store them dispersedly on 20 devices. To recreate the original object we only need 16 equations. The first 16 that we find and collect allow us to recover the fragments and in the end the original object. We could lose any 4 of those original 20 equations.
The likelihood of losing 4 independent, dispersed storage devices at the same time is very low. Since we continuously monitor all of the stored equations, we can create additional equations immediately when one goes missing, giving auto-regeneration of lost data and a self-repairing storage system.
> The overhead in this example is 4 out of 20, which is a mere **20%** instead of **400%**.
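The monitoring idea can be sketched as a small healing loop. The names and the regeneration callback here are illustrative only:

```python
TARGET, NEEDED = 20, 16    # a 16/4 policy: 20 equations stored, 16 required

def heal(live, regenerate):
    """If equations were lost but enough survive to rebuild the fragments,
    immediately create fresh equations to restore the target count."""
    missing = TARGET - len(live)
    if missing > 0 and len(live) >= NEEDED:
        live = live + [regenerate() for _ in range(missing)]
    return live

stored = [f"eq-{i}" for i in range(TARGET)]
stored = stored[:17]                              # three devices failed
stored = heal(stored, regenerate=lambda: "eq-new")
```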
## Content Delivery
This system can be used as a backend for content delivery networks.
E.g. a content distribution policy could be a 10/50 distribution, which means the content of a movie would be distributed over 60 locations of which we can lose 50 at the same time.
If someone now wants to download the data, the first 10 locations to answer will provide enough of the data parts to rebuild the data.
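The retrieval side can be sketched as a race: ask all locations and keep whichever answers arrive first. The latencies below are made up, and a real client would issue the requests in parallel:

```python
# 60 locations under a 10/50 policy; any 10 parts rebuild the object.
latencies_ms = {f"site-{i}": 10 + (i * 37) % 200 for i in range(60)}

def fastest_locations(latencies: dict, k: int) -> list:
    """The first k locations to answer supply enough parts to rebuild."""
    return sorted(latencies, key=latencies.get)[:k]

winners = fastest_locations(latencies_ms, 10)
```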
The overhead here is higher compared to the previous example, but still orders of magnitude lower compared to other CDN systems.
## The Quantum Safe Storage System Can Avoid Datarot
Datarot is the phenomenon whereby data storage degrades over time and becomes unreadable, e.g. on a harddisk.
The storage system provided by ThreeFold intercepts this silent data corruption, ensuring that data does not rot.
> See also https://en.wikipedia.org/wiki/Data_degradation


# Quantum Safe Filesystem
![](img/qsss_intro.png)
A redundant filesystem that can store PBs (millions of gigabytes) of information.
Unique features:
- Unlimited scalability (many petabytes)
- Quantum Safe:
  - On the TFGrid, no farmer knows what the data is
  - Even a quantum computer cannot decrypt the data
- Data can't be lost
- Protection from datarot; data will autorepair
- Data is kept forever (data does not get deleted)
- Data is dispersed over multiple sites
  - Even if sites go down, the data will not be lost
- Up to 10x more efficient than storing on classic storage cloud systems
- Can be mounted as filesystem on any OS or any deployment system (OSX, Linux, Windows, Docker, Kubernetes, TFGrid etc.)
- Compatible with almost all data workloads (though not high-performance data-driven workloads like databases)
- Self-healing: when a node or disk is lost, the storage system can get back to the original redundancy level
- Helps with compliance for regulations like GDPR (as the hosting facility has no view on what is stored: information is encrypted and incomplete)
- Hybrid: can be installed onsite, public and private
- Read-write caching on encoding node (the front end)
![](img/planet_fs.jpg)
## Mount Any Files In Your Storage Infrastructure
The QSFS is a mechanism to mount any file system (in any format) on the grid, in a quantum-secure way.
This storage layer relies on 3 primitives of the ThreeFold technology:
- [0-db](https://github.com/threefoldtech/0-db) is the storage engine.
It is an always append database, which stores objects in an immutable format. It allows history to be kept out-of-the-box, good performance on disk, low overhead, easy data structure and easy backup (linear copy and immutable files).
- [0-stor-v2](https://github.com/threefoldtech/0-stor_v2) is used to disperse the data into chunks by performing 'forward-looking error-correcting code' (FLECC) on it and send the fragments to safe locations.
It takes files in any format as input, encrypts the file with AES based on a user-defined key, then FLECC-encodes the file and spreads out the result
to multiple 0-DBs. The number of generated chunks is configurable to make it more or less robust against data loss through unavailable fragments. Even if some 0-DBs are unreachable, you can still retrieve the original data, and missing 0-DBs can even be rebuilt to have full consistency. It is an essential element of the operational backup.
- [0-db-fs](https://github.com/threefoldtech/0-db-fs) is the filesystem driver which uses 0-DB as a primary storage engine. It manages the storage of directories and metadata in a dedicated namespace and file payloads in another dedicated namespace.
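The three primitives can be pictured working together in a toy sketch. The XOR "encryption" and plain chunking below are stand-ins for the real AES encryption and FLECC encoding; only the shape of the pipeline is meant to be accurate:

```python
import hashlib

class ZeroDB:
    """0-db stand-in: an always-append store; objects are immutable and
    history is kept out of the box."""
    def __init__(self):
        self.log = []
    def append(self, blob: bytes) -> int:
        self.log.append(blob)
        return len(self.log) - 1          # the offset acts as the object id

def disperse(data: bytes, dbs, user_key: bytes):
    """0-stor-v2 stand-in: 'encrypt', chunk, and spread over many 0-DBs."""
    key = hashlib.sha256(user_key).digest()
    enc = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    size = -(-len(enc) // len(dbs))                 # ceiling division
    chunks = [enc[i:i + size] for i in range(0, len(enc), size)]
    return [db.append(c) for db, c in zip(dbs, chunks)]

dbs = [ZeroDB() for _ in range(4)]
offsets = disperse(b"file payload written through 0-db-fs", dbs, b"secret")
```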
This concept scales forever, and you can bring any file system on top of it:
- Any backup system
- An FTP server
- IPFS and Hypercore distributed file sharing protocols
![](img/quantum_safe_storage_scale.jpg)


# Zero Knowledge Proof Storage System
The Quantum Safe Storage System is zero knowledge proof compliant. The storage system is split into two components: the actual storage devices used to store the data (ZDBs) and the Quantum Safe Storage Engine.
![](img/qss_system.jpg)
The zero knowledge proof compliance comes from the fact that all of the physical storage nodes (TFNodes) can prove that they store a valid part of the data that the Quantum Safe Storage Engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all of the QSSE storage devices have a valid part of the original information. The storage devices however have no idea what the original stored data is, as they only have a part (description) of the original data and have no access to the original data part or the complete original data objects.
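The validation idea can be sketched as a hash-based challenge-response. This is a hedged illustration of the concept, not the actual QSSE protocol:

```python
import hashlib
import os

# A node stores only a fragment description, never the whole object.
fragment = b"equation: 2b + a - c = 2"

def prove(stored_fragment: bytes, nonce: bytes) -> str:
    """Only a node that really holds the fragment can produce this digest,
    yet the fragment alone reveals nothing about the original object."""
    return hashlib.sha256(nonce + stored_fragment).hexdigest()

nonce = os.urandom(16)                 # fresh challenge from the engine
answer = prove(fragment, nonce)        # the node's response
expected = prove(fragment, nonce)      # the engine checks against its records
```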


<!-- ![](img/qsss_intro_.jpg) -->
# Quantum Safe Storage System
![](img/qsss_intro.png)
Our storage architecture follows the true peer2peer design of the TF grid. Any participating node only stores small incomplete parts of objects (files, photos, movies, databases etc.) by offering a slice of the present (local) storage devices. Managing the storage and retrieval of all of these distributed fragments is done by software that creates development or end-user interfaces for this storage algorithm. We call this '**dispersed storage**'.
![](img/qsss_intro_0_.jpg)


# S3 Service
If you would like an S3 interface, you can deploy this on top of our eVDC; it works very well together with our [Quantum Safe File System](qss_filesystem.md).
A good opensource solution delivering an S3 interface is [min.io](https://min.io/).
Thanks to our Quantum Safe Storage Layer, you can build fast, robust and reliable storage and archiving solutions.
A typical setup would look like this:
![](img/storage_architecture_1.jpg)
To deploy MinIO using Helm 3, you can consult [this guide](https://forum.threefold.io/t/minio-operator-with-helm-3/4294).
<!-- TODO: link to manual on cloud how to deploy minio, using helm (3.0 release) -->