manually added knowledge_base
@@ -0,0 +1,3 @@

# zstor filesystem (zstor) Policy

Describe how it works...

@@ -0,0 +1,68 @@

# System requirements

A system that makes it easy to provision storage capacity on the TF Grid (a minimal interface sketch follows the list):

- user can create X storage nodes in random or specific locations
- user can list their storage nodes
- user can check node status/info in some shape or form in a monitoring solution
- external authentication/payment system using the Threefold Connect app
- user can delete their storage nodes
- user can provision more storage nodes
- user can increase the total size of storage solutions
- user can install the quantum safe filesystem on any Linux based system, physical or virtual
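
A sketch of what the storage-node lifecycle above could look like as a client-side interface. All names here are hypothetical illustrations, not the actual js-sdk API:

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StorageNode:
    node_id: str
    size_gb: int
    location: Optional[str]  # None means "pick a random location"
    status: str = "deploying"

@dataclass
class StoragePool:
    """Hypothetical client-side view of a user's storage nodes."""
    nodes: dict[str, StorageNode] = field(default_factory=dict)

    def create_nodes(self, count: int, size_gb: int,
                     location: Optional[str] = None) -> list[StorageNode]:
        # "user can create X storage nodes in random or specific locations"
        created = [StorageNode(str(uuid.uuid4()), size_gb, location)
                   for _ in range(count)]
        for node in created:
            self.nodes[node.node_id] = node
        return created

    def list_nodes(self) -> list[StorageNode]:
        # "user can list their storage nodes"
        return list(self.nodes.values())

    def delete_node(self, node_id: str) -> None:
        # "user can delete their storage nodes"
        self.nodes.pop(node_id, None)

    def grow_node(self, node_id: str, extra_gb: int) -> None:
        # "user can increase the total size of storage solutions"
        self.nodes[node_id].size_gb += extra_gb
```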

# Non-functional requirements

- How many expected concurrent users: not applicable; each user will have their own local binary and software install.
- How many users on the system: 10000-100000
- Data store: FUSE filesystem plus local and grid based ZDBs
- How critical is the system? It needs to be alive all the time.
- What do we know about the external payment system?
  - Threefold Connect: use a QR code for payments and validate on the blockchain
- Life cycle of the storage nodes? How does the user keep their nodes alive? The local binary / application has a wallet from which it can pay for the existing and new storage devices. This wallet needs to be kept topped up.
- When the user is asked to sign the deployment of 20 storage nodes:
  - will the user sign every single reservation, or should the system itself sign for the user and show the QR code only for payments?
- Payments should be made to a specific user wallet, either via a background service that extends the user pools or via /extend in the bot conversation? To be resolved.
- Configuration and all metadata should be recoverable from a single hash / private key. With this information you are able to regain access to your stored data from anywhere (see the sketch after this list).
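
A sketch of the single-secret recovery idea from the last bullet: derive both the metadata location and an encryption key deterministically from one seed, so holding the seed is enough to find and decrypt the configuration from anywhere. The derivation scheme and the XOR keystream cipher are illustrative assumptions, not the actual zstor format:

```python
import hashlib
import json

def derive_keys(seed: bytes) -> tuple[bytes, bytes]:
    # Two independent keys from one secret seed (illustrative derivation).
    metadata_key = hashlib.sha256(b"metadata-location" + seed).digest()
    encryption_key = hashlib.sha256(b"config-encryption" + seed).digest()
    return metadata_key, encryption_key

def store_config(kv_store: dict, seed: bytes, config: dict) -> None:
    metadata_key, encryption_key = derive_keys(seed)
    blob = json.dumps(config).encode()
    # A real system would use AES-GCM or similar; a SHAKE-256 XOR keystream
    # only keeps this sketch dependency-free.
    keystream = hashlib.shake_256(encryption_key).digest(len(blob))
    cipher = bytes(a ^ b for a, b in zip(blob, keystream))
    kv_store[metadata_key.hex()] = cipher

def load_config(kv_store: dict, seed: bytes) -> dict:
    # Anyone holding the seed can recompute the keys and recover the config.
    metadata_key, encryption_key = derive_keys(seed)
    cipher = kv_store[metadata_key.hex()]
    keystream = hashlib.shake_256(encryption_key).digest(len(cipher))
    return json.loads(bytes(a ^ b for a, b in zip(cipher, keystream)))
```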

# Components mapping / SALs

- Entities: User, Storage Node
- ReservationBuilder: builds the reservation for the user to sign (note the QR code data size limit is 3KB)
  - we need to define how many nodes we can deploy at a time; the payload shouldn't exceed 3KB for the QR code. If it exceeds the limit, should we split the reservations? (see the splitting sketch after this list)
- UserInfo: user info is loaded from the Threefold login system
- Blockchain Node (role, configurations)
- Interface to Threefold Connect (authentication + payment): /identify + generate payments
- User notifications / top-up
- Monitoring: monitoring + redeployment of the solutions if they go down. When redeploying, who owns the reservation to delete and redeploy? This can be fixed with a delete signers field, but to deploy we need the user identity, or should we inform the user in Telegram and ask them to /redeploy?
- Logging
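
On the open question above about splitting reservations: a sketch of greedy packing so that each QR payload stays under the 3KB limit. The payload shape is an assumption for illustration:

```python
import json

QR_LIMIT_BYTES = 3 * 1024  # QR code data size limit noted above

def split_reservations(reservations: list[dict]) -> list[str]:
    """Pack reservations into as few QR payloads as possible, each <= 3KB."""
    batches: list[str] = []
    current: list[dict] = []
    for res in reservations:
        candidate = json.dumps({"reservations": current + [res]})
        if current and len(candidate.encode()) > QR_LIMIT_BYTES:
            # Close the current batch and start a new one with this reservation.
            batches.append(json.dumps({"reservations": current}))
            current = [res]
        else:
            current.append(res)
    if current:
        batches.append(json.dumps({"reservations": current}))
    return batches

# Each returned string becomes one QR code for the user to scan and sign.
```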

# Tech stack

- [JS-SDK](https://github.com/threefoldtech/js-sdk) (?)
- [0-db](https://github.com/threefoldtech/0-db)
- [0-db-fs](https://github.com/threefoldtech/0-db-fs)
- [0-stor_v2](https://github.com/threefoldtech/0-stor_v2)
- [quantum_storage](https://github.com/threefoldtech/quantum-storage)

# Blockers

Idea from the blockchain jukebox brainstorm:

## Payments

- QR code contains threebot://signandpay/#https://tf.grid/api/a6254a4a-bdf4-11eb-8529-0242ac130003 (can also be a universal link)
- App gets the URL
- URL returns the data:
  - { DataToSign: {RESERVATIONDETAILS}, Payment: {PAYMENTDETAILS}, CallbackUrl: {CALLBACKURL} }
- App signs the reservation, makes the payment, and calls the callback URL with { SignedData: {SIGNEDRESERVATION}, Payment: {FINISHED_PAYMENTDETAILS} } (see the sketch after this list)
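
A sketch of that exchange with placeholder data. The field names follow the bullets above; the signing and payment callables are assumed to be provided by the app:

```python
import json
import urllib.request

# What the URL behind the QR code could return (placeholder values).
payload = {
    "DataToSign": {"reservation": "..."},           # RESERVATIONDETAILS
    "Payment": {"amount": "10.0", "asset": "TFT"},  # PAYMENTDETAILS
    "CallbackUrl": "https://tf.grid/api/a6254a4a-bdf4-11eb-8529-0242ac130003",
}

def app_flow(payload: dict, sign, pay) -> None:
    """What the app does after fetching the payload: sign, pay, call back."""
    signed = sign(payload["DataToSign"])  # user signs the reservation
    receipt = pay(payload["Payment"])     # user completes the payment
    body = json.dumps({"SignedData": signed, "Payment": receipt}).encode()
    req = urllib.request.Request(payload["CallbackUrl"], data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)           # report completion to the grid
```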

Full flow:

- User logs in using the normal login flow
- User scans the QR code
- User confirms the reservation and payment in the app

@@ -0,0 +1,3 @@

# Specs zstor filesystem

- [Quantum Safe File System](quantum_safe_filesystem_2_6)

@@ -0,0 +1,15 @@

# zstor filesystem 2.6

## Requirements

- redundancy/uptime
  - data older than 20 min can never be lost (average exposure is 15/2 = 7.5 min, because we push every 15 min)
  - if a datacenter or node goes down while we are within the storage policy, the storage stays available
- reliability
  - data cannot suffer hidden corruption; on bitrot the FS will automatically recover
- self healing
  - when the data policy drops below the required level, the system should re-silver (i.e. make sure the policy is intact again); see the sketch after this list
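
A sketch of the re-silver check: given an erasure-coding policy of `expected` total shards of which `required` suffice to rebuild, anything between the two levels still serves reads but should trigger repair. The policy numbers are examples, not the actual defaults:

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    expected_shards: int   # e.g. 8 shards written in total
    required_shards: int   # e.g. any 4 suffice to rebuild the data

def check_and_resilver(policy: StoragePolicy, healthy_shards: int) -> str:
    if healthy_shards >= policy.expected_shards:
        return "healthy"
    if healthy_shards >= policy.required_shards:
        # Data is still recoverable: rebuild the missing shards on new
        # namespaces so the policy is intact again.
        return "degraded: re-silver now"
    return "lost: not enough shards left to rebuild"

# Example: 8 expected / 4 required, 6 healthy -> degraded, re-silver.
print(check_and_resilver(StoragePolicy(8, 4), 6))
```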

## NEW

- 100% redundancy

@@ -0,0 +1,37 @@

## zstor Architecture

```mermaid
graph TD
    subgraph TFGridLoc2
        ZDB5
        ZDB6
        ZDB7
        ZDB8
        ETCD3
    end
    subgraph TFGridLoc1
        ZDB1
        ZDB2
        ZDB3
        ZDB4
        ETCD1
        ETCD2
        KubernetesController --> ETCD1
        KubernetesController --> ETCD2
        KubernetesController --> ETCD3
    end
    subgraph eVDC
        PlanetaryFS --> ETCD1 & ETCD2 & ETCD3
        PlanetaryFS --> MetadataStor
        PlanetaryFS --> ReadWriteCache
        MetadataStor --> LocalZDB
        ReadWriteCache --> LocalZDB
        LocalZDB & PlanetaryFS --> ZeroStor
        ZeroStor --> ZDB1 & ZDB2 & ZDB3 & ZDB4 & ZDB5 & ZDB6 & ZDB7 & ZDB8
    end
```
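
The same topology written out as data, for reference: a hypothetical layout description, not the actual zstor configuration format:

```python
# 4 ZDB shard backends per grid location, metadata in a 3-node ETCD
# cluster spanning both locations (addresses are placeholders).
backend_layout = {
    "metadata_cluster": ["etcd1:2379", "etcd2:2379", "etcd3:2379"],
    "shard_groups": {
        "TFGridLoc1": ["zdb1:9900", "zdb2:9900", "zdb3:9900", "zdb4:9900"],
        "TFGridLoc2": ["zdb5:9900", "zdb6:9900", "zdb7:9900", "zdb8:9900"],
    },
}
```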
@@ -0,0 +1,40 @@

## zstor Sequence Diagram

```mermaid
sequenceDiagram
    participant user as user
    participant fs as 0-fs
    participant lzdb as local 0-db
    participant zstor as 0-stor
    participant etcd as ETCD
    participant zdbs as backend 0-dbs
    participant mon as Monitor

    alt Writing data
        user->>fs: write data to files
        fs->>lzdb: write data blocks
        opt Datafile is full
            lzdb->>zstor: encode and chunk data file
            zstor->>zdbs: write encoded datafile chunks to the different backends
            zstor->>etcd: write metadata about encoded file to metadata storage
        end
    else Reading data
        user->>fs: read data from file
        fs->>lzdb: read data blocks
        opt Datafile is missing
            lzdb->>zstor: request retrieval of data file
            zstor->>etcd: load file encoding and storage metadata
            zstor->>zdbs: read encoded datafile chunks from multiple backends and rebuild the original datafile
            zstor->>lzdb: replace the missing datafile
        end
    end

    loop Monitor action
        mon->>lzdb: delete local data files which are full and encoded, and have not been accessed for some time
        mon->>zdbs: monitor health of used namespaces
        opt Namespace is lost or corrupted
            mon->>zstor: check storage configuration
            mon->>zdbs: rebuild missing shard on new namespace from storage config
        end
    end
```
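
A sketch of the "Datafile is full" branch above: the full local data file is chunked, one chunk goes to each backend 0-db, and the reassembly recipe is written to the metadata store. Plain splitting stands in for the real erasure coding, and the dict-based stores are placeholders:

```python
import hashlib

def encode_and_store(datafile: bytes, backends: list[dict], metadata: dict) -> str:
    """Split a full local data file into chunks, write one chunk per backend
    0-db, and store the reassembly recipe in the metadata store (ETCD)."""
    shards = len(backends)
    chunk_size = -(-len(datafile) // shards)  # ceiling division
    file_id = hashlib.sha256(datafile).hexdigest()
    recipe = []
    for i, backend in enumerate(backends):
        chunk = datafile[i * chunk_size:(i + 1) * chunk_size]
        key = f"{file_id}:{i}"
        backend[key] = chunk       # stand-in for a 0-db SET
        recipe.append({"shard": i, "key": key})
    metadata[file_id] = recipe     # stand-in for the ETCD metadata write
    return file_id

def retrieve(file_id: str, backends: list[dict], metadata: dict) -> bytes:
    """Rebuild a missing data file from its chunks using the stored recipe."""
    return b"".join(backends[e["shard"]][e["key"]] for e in metadata[file_id])
```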