merge main_commsteam to main with fixed conflicts

This commit is contained in:
mik-tf 2024-04-10 18:54:49 +00:00
commit 793341573b
45 changed files with 463 additions and 398 deletions

View File

@ -5,7 +5,7 @@
### TF Nodes (or 3Nodes)
The network of nodes which make up the cloud, each node provides compute, storage and network capacity.
The network of nodes which make up the cloud. Each node provides compute, storage and network capacity.
### TF Routers
@ -17,7 +17,7 @@ These TF Routers are not only compatible with Wi-Fi but also with 4G and 5G netw
### Web3 & Decentralized AI Compatibility
The TFGrid is the ideal platform for hosting any Web3 and AI workloads. Our Zero-OS operating system already supports integrated GPUs, ensuring optimal performance for decentralized AI applications.
The TFGrid is the ideal platform for hosting any web3 and AI workloads. Our Zero-OS operating system already supports integrated GPUs, ensuring optimal performance for decentralized AI applications.
> Any workload (web2/3 and AI) can run on TFGrid.

View File

@ -1,6 +1,10 @@
# energy efficient
# Energy Efficient
Below are some of the ways in which ThreeFold achieves energy efficiency as compared to traditional models.
![alt text](energy_efficient.png)
> Depending the usecase the ThreeFold approach can lead to 10x energy savings.
In addition, a decentralized peer-to-peer infrastructure which finds the shortest path between end points is by nature energy-efficient. Data needs to travel a much shorter distance.
> Depending on the use case the ThreeFold approach can lead to 10x energy savings.

View File

@ -1,4 +1,4 @@
## we forgot to use hardware well
## Hardware Is No Longer Used Efficiently
The IT world fails to harness the full potential of computer hardware.
@ -10,4 +10,4 @@ The original Commodore 64, with only 64 KB of memory, was a remarkably capable m
This highlights a regression in our ability to fully utilize computer hardware.
At Threefold, we are committed to bridging this gap by optimizing our approach to hardware utilization, thereby unlocking its full potential.
At Threefold, we are committed to bridging this gap by optimizing our approach to hardware utilization, thereby unlocking its full potential. 

View File

@ -1,4 +1,4 @@
## FList: a new way how to deal with OS Images
## FList: A New Way Of Dealing With OS Images
!!wiki.include page:flist_innovation_short

View File

@ -1,28 +1,28 @@
### Why?
### The Problem
The current method of deploying workloads in the cloud using Docker containers and virtual machine images has inherent issues. These images consume significant storage space, result in slow and bandwidth-intensive transfers to the internet's edge, drive up costs, introduce complexity, and pose security risks due to difficulties in tracking their contents over time.
For instance, a complete Ubuntu image can easily be 2 GB in size, comprising millions of files. In contrast, the Flist for a full Ubuntu image is less than 2 MB (1000 times smaller), containing only the necessary files required to launch an application.
### What?
### Introducing Flist
A new image format that separates the image data (comprising files and subfile parts) from the metadata describing the image structure.
An Flists format uniquely encompasses comprehensive file descriptions along with all relevant metadata such as size, modification and creation timestamps, and POSIX attributes. Additionally, it incorporates a fingerprint for each component, ensuring deterministic behavior—a crucial feature for security-focused use cases.
An Flist's format uniquely encompasses comprehensive file descriptions along with all relevant metadata such as size, modification and creation timestamps, and POSIX attributes. Additionally, it incorporates a fingerprint for each component, ensuring deterministic behavior—a crucial feature for security-focused use cases.
Flists provide the flexibility to manage metadata and data as separate entities, offering a versatile approach to handling various build and delivery scenarios.
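To make the separation of metadata and data concrete, here is a minimal illustrative sketch in Python of what an Flist-style entry could contain. The field names, the hashing scheme and the helper function are assumptions made for illustration only; they do not reflect the actual Flist specification.
```python
import hashlib
from dataclasses import dataclass
from pathlib import Path

@dataclass
class FlistEntry:
    """One file description: metadata plus a content fingerprint, but no payload."""
    path: str          # location of the file inside the image
    size: int          # size in bytes
    mode: int          # POSIX permission bits
    mtime: float       # modification timestamp
    fingerprint: str   # hash of the content, enabling deterministic verification

def describe_tree(root: str) -> list[FlistEntry]:
    """Walk a directory and produce Flist-style entries (metadata only)."""
    entries = []
    for p in Path(root).rglob("*"):
        if p.is_file():
            st = p.stat()
            digest = hashlib.blake2b(p.read_bytes(), digest_size=16).hexdigest()
            entries.append(FlistEntry(str(p.relative_to(root)), st.st_size,
                                      st.st_mode, st.st_mtime, digest))
    return entries
```
In such a scheme the list of entries is all that needs to be shipped up front; the file payloads themselves can be fetched lazily by fingerprint from a content store the first time a workload touches them, which is what keeps the image description in the megabyte range.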
### Benefits
### The Benefits
- **Rapid Deployment:** Zero-OS enables containers and virtual machines to launch up to 100 times faster, especially in decentralized scenarios.
- **Enhanced Security:** Zero-OS prevents tampering with images, ensuring higher security levels.
- **Reduced Storage and Bandwidth:** Zero-OS significantly reduces storage and bandwidth requirements, potentially achieving up to a 100-fold improvement.
- **Deterministic Deployments:** Engineers can precisely define deployments beforehand, ensuring predictable outcomes without changes during deployment.
- **100% compatible:** with existing standards, docker, virtual machines... The same format is useful for VM's as well as any container technology.
- **Rapid deployment:** Zero-OS enables containers and virtual machines to launch up to 100 times faster, especially in decentralized scenarios.
- **Enhanced security:** Zero-OS prevents tampering with images, ensuring higher security levels.
- **Reduced storage and bandwidth:** Zero-OS significantly reduces storage and bandwidth requirements, potentially achieving up to a 100-fold improvement.
- **Deterministic deployments:** engineers can precisely define deployments beforehand, ensuring predictable outcomes without changes during deployment.
- **100% compatible:** works with existing standards such as Docker and virtual machines. The same format is useful for VMs as well as any container technology.
### Status?
### Status
- Usable for years, see Zero-OS.
Has been usable for years; see Zero-OS.

View File

@ -1,22 +1,25 @@
### Why?
### The Problem
Existing blockchain, internet, and P2P content delivery and storage systems suffer from sluggish performance and are too expensive. Content retrieval is often slow, and the overhead for ensuring redundancy is excessive. We require innovative approaches to facilitate efficient information sharing among users.
Content delivery frequently represents the most significant expense for social networks. Running a basic social video network for 10 million users currently costs approximately $2 million per month using traditional cloud providers. We have the potential to reduce this cost by several orders of magnitude.
### What?
### Introducing FungiStor
FungiStor is a peer-to-peer (P2P) content delivery layer designed to store and distribute an extensive range of objects, including images, videos, files, and more. It has the capability to handle trillions of objects and files efficiently. FungiStor serves as an excellent solution for content delivery networks (CDNs), significantly reducing costs for organizations seeking to stream or deliver substantial data volumes to their user base.
Furthermore, FungiStor will act as the backend infrastructure for the Flists within our own system. However, it is versatile and can be utilized by anyone in need of a global-level content delivery system for files, objects, and images.
Furthermore, FungiStor will act as the backend infrastructure for the Flists within our own system. It is versatile and can be utilized by anyone in need of a global-level content delivery system for files, objects, and images.
### Benefits?
### The Benefits
- **Global Scalability, Sub-50ms Lookups:** FungiStor scales worldwide with ultra-fast data retrieval under 50 milliseconds.
- **Localized Content Delivery:** Prioritizes local data access for optimized speed and efficiency.
- **Quantum-Safe Security:** Incorporates robust quantum security measures.
- **Interoperability:** Works seamlessly with IPFS, Torrent, and more.
- **Cost Efficiency:** Offers significant cost savings, potentially 10 to 100 times less than conventional solutions.
- **Global scalability, sub-50ms lookups:** FungiStor scales worldwide with ultra-fast data retrieval under 50 milliseconds.
- **Localized content delivery:** prioritizes local data access for optimized speed and efficiency.
- **Quantum-safe security:** incorporates robust quantum security measures.
- **Interoperability:** works seamlessly with IPFS, Torrent, and more.
- **Cost efficiency:** offers significant cost savings, potentially 10 to 100 times less than conventional solutions.
### Status
Planned for the end of 2024.

View File

@ -5,9 +5,9 @@
- [Mycelium: a new network layer for the internet](mycelium_innovation.md)
- [Zero-OS: a minimalistic more efficient server operating system](zos_innovation)
- [Zero-OS: a minimalistic and more efficient server operating system](zos_innovation.md)
- [Quantum Safe Storage](zstor_innovation.md)
- [Quantum Safe Filesystem](qsfs_innovation.md)
- [FList: a new way how to deal with OS Images](flist_innovation.md)
- [FList: a new way to deal with OS Images](flist_innovation.md)
- [FungiStor](fungistor_innovation.md)
- [Network Wall](network_wall_innovation.md)

View File

@ -1,25 +1,25 @@
### Why?
### The Problem
The current centralized state of the internet poses significant security risks, with compromised routers and growing cyber threats (trillions of USD per year now), making everyone vulnerable to hacking. Industry responses involve disabling original features, hindering true peer-to-peer connectivity and personal server capabilities. Workarounds and system hacks have become the norm.
**Our Internet is seriously broken, we need new ways how to communicate**
**Our Internet is seriously broken. We need new ways to communicate**
### What?
### Introducing Mycelium
Mycelium is an overlay network layer designed to enhance the existing internet infrastructure while remaining compatible with all current applications. It empowers true peer-to-peer communication. By installing a Network Agent on your device, you gain the ability to securely connect with any other participant on this network. Mycelium intelligently reroutes traffic to maintain connectivity, taking your location and that of your peer into consideration.
### Benefits?
### The Benefits
- **Continuous Connectivity:** Mycelium ensures uninterrupted connectivity by dynamically rerouting traffic through available connections (friends, satellites, 4/5G, fiber).
- **End-to-End Encryption:** Robust encryption stops man-in-the-middle attacks, guaranteeing secure communication.
- **Proof of authenticity ([POA](p2p:poa.md))**: make sure we know who we communicate with
- **Optimized Routing:** Mycelium finds the shortest path between network participants, reducing latency and keeping traffic localized.
- **Universal Server Capability:** Empowers individuals to act as servers, a foundational element for any peer-to-peer system.
- **Continuous connectivity:** Mycelium ensures uninterrupted connectivity by dynamically rerouting traffic through available connections (friends, satellites, 4/5G, fiber).
- **End-to-end encryption:** robust encryption stops man-in-the-middle attacks, guaranteeing secure communication.
- **Proof of authenticity ([POA](p2p:poa.md))**: ensures that we know who we are communicating with
- **Optimized routing:** Mycelium finds the shortest path between network participants, reducing latency and keeping traffic localized.
- **Universal server capability:** empowers individuals to act as servers, a foundational element for any peer-to-peer system.
- **Full Compatibility:** Mycelium seamlessly integrates with the current internet, supporting any application.
- **Impressive Speed:** Achieves 1 Gbps per Network Agent, ensuring rapid data transfer.
- **Impressive speed:** achieves 1 Gbps per Network Agent, ensuring rapid data transfer.
### Status?
### Status
- In beta and usable from TFGrid 3.13, its our 3e generation approach to networking and took us years to do. We are looking forward to your feedback.
In beta and usable from TFGrid 3.13. It's our third-generation approach to networking and it took us years to build. We are looking forward to your feedback.

View File

@ -1,10 +1,10 @@
### Why?
## The Problem
Traditional firewalls are increasingly ineffective in addressing modern security challenges. They struggle to mitigate emerging threats, particularly against backdoors and man-in-the-middle attacks. Backdoors can render firewalls obsolete as attackers find ways to bypass them. New, innovative approaches to cybersecurity are necessary to address these evolving security problems.
Traditional firewalls are increasingly ineffective at addressing modern security challenges. They struggle to mitigate emerging threats, particularly against backdoors and man-in-the-middle attacks. Backdoors can render firewalls obsolete as attackers find ways to bypass them. New and innovative approaches to cybersecurity are necessary to address these evolving security problems.
## What?
## Introducing NetworkWall
Imagine a scenario where you deploy applications within secure, liquid-cooled physical containers or smaller PODs that operate without relying on traditional TCP/IP or Ethernet protocols. By avoiding the use of standard low-level protocols, the existing backdoors are unable to communicate with the external world.
@ -12,9 +12,12 @@ Incoming traffic is intercepted at the application level and securely transporte
It's important to note that this solution is primarily intended for commercial use cases, but its existence is valuable knowledge in the realm of cybersecurity and network security.
## Benefits
## The Benefits
- **Enhanced Security and Privacy:** The solution offers significantly improved security and privacy measures, mitigating potential risks and vulnerabilities.
- **Ultra-Fast Connectivity:** Within the POD/Container, the connectivity is exceptionally fast, ensuring rapid data transfer and application performance.
- **Robust Data and Application-Aware Proxies:** Secure proxies between the Internet and the protected backend application provide an additional layer of security, safeguarding data and ensuring application-level awareness.
- **Seamless Integration:** The solution is designed for ease of integration within existing environments, minimizing disruptions and complexities during implementation.
- **Enhanced security and privacy:** the solution offers significantly improved security and privacy measures, mitigating potential risks and vulnerabilities.
- **Ultra-fast connectivity:** within the POD/Container, connectivity is exceptionally fast, ensuring rapid data transfer and application performance.
- **Robust data and application aware proxies:** secure proxies between the Internet and the protected backend application provide an additional layer of security, safeguarding data and ensuring application-level awareness.
- **Seamless integration:** the solution is designed for ease of integration within existing environments, minimizing disruptions and complexities during implementation.
## Status
To be completed

View File

@ -1,14 +1,11 @@
### Why?
### The Problem
There is a growing need for more accessible and user-friendly solutions to store and manage large volumes of data efficiently.
While Zero-Stor addresses numerous storage challenges effectively, it may not be accessible or user-friendly for typical developers or system administrators. QSFS has been developed to bridge this gap and provide a more approachable storage solution.
### What?
### Introducing QSFS
A FUSE-based filesystem utilizing Zero-Stor as its backend. Metadata is safeguarded to prevent loss, inheriting Zero-Stor's benefits and simplifying usage for developers and system administrators.
@ -26,4 +23,8 @@ This filesystem can be mounted under various storage-aware applications, such as
- Provides a user-friendly interface for seamless integration with a wide range of applications.
- Offers considerable scalability capabilities, although not unlimited in scale.
- Achieves reasonable performance data transfer rates of up to 50 MB/sec, particularly for larger files.
- Can scale to about 2 million files per filesystem.
- Can scale to about 2 million files per filesystem.
### Status
To be completed

View File

@ -11,13 +11,13 @@ This unique operating system doesn't require installation on hard disks or SSDs;
### The Benefits
- **Compatible with existing workloads:** our primary goal is to ensure that Zero-OS is compatibile with over 99% of the workloads commonly hosted in centralized cloud environments today. This includes support for Docker containers, virtual machines, Kubernetes, and more.
- **Compatibility with existing workloads:** our primary goal is to ensure Zero-OS compatibility with over 99% of the workloads commonly hosted in centralized cloud environments today. This includes support for Docker containers, virtual machines, Kubernetes, and more.
- **Reduced attack surface:** Zero-OS boasts a smaller hacking surface, enhancing security by minimizing potential vulnerabilities.
- **Stateless design:** its statelessness simplifies deployment and updates, making it easier to maintain while ensuring that it's always up to date.
- **Stateless design:** its statelessness simplifies deployment and updates, making it easier to maintain while ensuring it's always up to date.
- **Autonomous operation:** whether you have one instance or a billion, Zero-OS operates autonomously, streamlining management and maintaining consistency across all instances.
- **Rapid deployment:** with Zero-OS, you can deploy 1000 virtual machines in just 2 minutes, ensuring agility and efficiency in scaling up resources.
- **Unique security features:** Zero-OS offers support for distinctive security features to enhance protection and fortify your infrastructure.
- **Lower cost and easier maintenance:** Zero-OS significantly reduces the operational expenses associated with cloud infrastructure by automating most processes. This results in minimal operational costs and eliminates the need for extensive engineering efforts.
- **Lower cost and easier maintenance:** Zero-OS significantly reduces the operational expenses associated with cloud infrastructure by automating most processes. This results in minimal operational costs and eliminates the need for extensive engineering efforts.
- **Ready for a decentralized world:** Zero-OS empowers individuals to become hosts for required Internet capacity (storage, network, compute, gpu), allowing them to be rewarded for providing computing resources and internet connectivity. This aligns with the vision of a truly decentralized and distributed computing ecosystem.
### Status

View File

@ -1,40 +1,43 @@
## Zero-Stor : a quantum safe backend storage system.
## Zero-Stor: A Quantum Safe Backend Storage System
### Why?
### The Problem
Traditional backend storage systems have their roots in centralized environments, focusing on low-latency and closed security setups. However, these characteristics make them less suitable for use in decentralized cloud contexts.
Newer-generation storage systems like protocol-driven or blockchain-based solutions may face scalability and performance limitations and may not fulfill certain critical requirements that we consider essential.
Newer generation storage systems such as protocol-driven or blockchain-based solutions may face scalability and performance limitations and may not fulfill certain critical requirements that we consider essential.
### What?
### Introducing Zero-Stor
A redesigned storage system which can scale to planet level, is super secure private and fast enough for more usecases. Its designed to operate in a decentralized context. Data can never be lost of corrupted.
A redesigned storage system which can scale to planetary level. It is super secure, private and fast enough for most use cases. It is designed to operate in a decentralized context, and data can never be lost or corrupted.
This storage system is a backend storage system, cannot be used by end users, its meant to be integrated with a front end storage system like e.g. S3 or a filesystem (see next section).
This storage system is:
- A backend storage system
- It cannot be used by end users
- It's meant to be integrated with a front-end storage system such as S3 or a filesystem (see next section).
### Benefits?
### The Benefits
- **Data Resilience:** Ensures data is never lost or corrupted.
- **Planetary Scalability:** Capable of scaling to a global level.
- **Cost-Efficient:** Offers exceptional cost efficiency.
- **Versatility:** Suitable for various use cases, including archiving, backup, files, and CDNs.
- **Low Overhead:** Requires only a 20% overhead for building a storage network where any four nodes can be lost simultaneously, compared to a 400% overhead in traditional storage systems.
- **Security and Privacy:** Provides robust security, even impervious to quantum computers.
- **Data Sovereignty:** Users have complete control over data placement.
- **Empowering Front-End Applications:** Can be integrated into various front-end storage applications, such as blockchains, archives, or S3.
- **CDN Support:** Functions effectively as a backend for CDN applications, facilitating content delivery.
- **Sustainability:** Uses 10 times less energy compared to traditional storage systems, contributing to sustainability efforts.
- **Locality Aware:** Data can be delivered to where the users are ideal for sovereign usecases.
- **Data resilience:** ensures data is never lost or corrupted.
- **Planetary scalability:** capable of scaling to a global level.
- **Cost-efficient:** offers exceptional cost efficiency.
- **Versatility:** suitable for various use cases, including archiving, backup, files, and CDNs.
- **Low overhead:** requires only a 20% overhead for building a storage network where any four nodes can be lost simultaneously, compared to a 400% overhead in traditional storage systems.
- **Security and privacy:** provides robust security and is even impervious to quantum computers.
- **Data sovereignty:** users have complete control over data placement.
- **Empowering front-end applications:** can be integrated into various front-end storage applications, such as blockchains, archives, or S3.
- **CDN support:** functions effectively as a backend for CDN applications, facilitating content delivery.
- **Sustainability:** uses 10 times less energy compared to traditional storage systems, contributing to sustainability efforts.
- **Locality aware:** data can be delivered to where the users are, which is ideal for sovereign use cases.
### Status?
### Status
- Zero-Stor has been in beta for over four years, with continuous development and improvement.
- A notable deployment in Switzerland, with over 50 petabytes of storage capacity, served as a substantial test environment, although it's no longer active.
- Within the current TFGrid network, there's an impressive capacity of over 20 petabytes available for use.
- Lacking some monitoring, documentation ... will be added in TFGrid 3.15
- Within the current TFGrid network, there is an impressive capacity of over 20 petabytes available for use.
- Some monitoring and documentation are still lacking; these will be added in TFGrid 3.15
- Previous releases have been successfully utilized by major government organizations on a massive scale (hundreds of petabytes), providing strong evidence of the concept's viability and effectiveness.

View File

@ -1,35 +1,30 @@
# The Internet Is Broken
**THE THREE LAYERS OF THE INTERNET**
**The Three Layers Of The Internet**
![](internet_3layers.png)
The Internet is made up out of 3 layers
The Internet is made up of 3 layers:
- compute, storage: this is where the applications are being served from
- today: highly centralized and running from large datacenters (see below)
- network: ability for information to travel
- can be as wireless, cables (fiber) and satelite links, ...
- right now the information needs to travel very far, for most countries there is few local information
- very few companies own +80% of the network capacity
- applications:
- today hosted in huge datacenters using the compute and storage capacity as provided
- too centralized and because of that also vulnerable
1. Compute & Storage: this is where applications are being served from. Currently this system is highly centralized and runs from large data centers (see below).
The information travels mainly over large fiber backbone links.
2. Network: this is the ability for information to travel and it can be wireless, via cables (fiber) or satellite links etc. Currently information needs to travel very far and for most countries very little information is stored locally. A handful of companies own more than 80% of the current Internet's network capacity.
3. Applications: currently applications are hosted in huge data centers using the compute and storage as provided. This system is too centralized and therefore very vulnerable.
Digital information mainly travels over large fiber backbone links as pictured here.
![](global_net.png)
The Internet as we know it is far away from the original intent, if 2 people in e.g. Zanzibar (an Island in Africa) use Zoom with each other then the information will travel to Europe in a large datacenter where the Zoom servers are being hosted.
The Internet as we know it has significantly diverged from its original intent. If two people in, e.g., Zanzibar (an island in Africa) use Zoom with each other, the information will travel from Zanzibar to a large European data center where the Zoom servers are hosted, and back again.
This leads to very inefficient behavior, slower performance, less reliability and higher costs than necessary.
![](network_path.png)
We became products.
Another important aspect is the lack of autonomy and sovereignty within this Internet. We have become the products. All of our data is hosted in large data centers owned by a few large corporations.
![alt text](we_are_products.png)
- All our data is hosted in large datacenters owned by few large corporations.
- We exist many times, and each time a full infrastructure has been built to deliver the applications from.
We also exist many times on the Internet across many applications, and each time a full infrastructure has been built to deliver those applications. This system is unsustainable and inefficient.

View File

@ -7,18 +7,18 @@ We are more than just Container or VM technology.
Default features:
- compatible with Docker
- compatible with any VM (Virtual Machine)
- compatible with any Linux workload
- integrated unique storage & network primitives
- Compatible with Docker
- Compatible with any VM (Virtual Machine)
- Compatible with any Linux workload
- Integrated unique storage & network primitives
We have following unique advantages:
We have the following unique advantages:
- no need to work with images, we work with our unique ZOS FS
- every container runs in a dedicated virtual machine providing more security
- the containers talk to each other over a private network (mycelium)
- the containers can use a web gatewat to allow users on the internet connect to the applications as running in their secure containers
- can use core-x to manage the workload
- No need to work with images, we work with our unique ZOS FS
- Every container runs in a dedicated virtual machine providing more security
- The containers talk to each other over a private network (Mycelium)
- The containers can use a web gateway to allow internet users to connect to the applications which are running in their secure containers
- Can use core-x to manage the workload
For more information see [ZeroOS](zos.md)

View File

@ -5,9 +5,9 @@
This tool allows you to manage your ZMachine over web remotely.
ZMachine process manager
ZMachine process manager:
- Provide a web interface and a REST API to control your processes
- Allow to watch the logs of your processes
- Or use it as a web terminal (access over https to your terminal)!
- Provides a web interface and a REST API to control your processes
- Allows you to watch the logs of your processes
- You can use it as a web terminal (access over https to your terminal)

View File

@ -1,25 +1,25 @@
# Mycelium our Planetary Network
# Mycelium: Our Planetary Network
![](img/planet_net_.jpg)
> TODO: need to upgrade image, also digital twin needs to be named '3bot'
> TODO: Need to update this image, also digital twin needs to be named '3bot'
The planetary network is an overlay network which lives on top of the existing internet or other peer2peer networks created. In this network, everyone is connected to everyone. End-to-end encryption between users of an app and the app running behind the network wall.
The planetary network is an overlay network which lives on top of the existing Internet or other peer-to-peer networks created. In this network, everyone is connected to everyone. There is end-to-end encryption between users of an app and the app running behind the network wall.
Each user end network point is strongly authenticated and uniquely identified, independent of the network carrier used. There is no need for a centralized firewall or VPN solutions, as there is a circle based networking security in place.
Each user end network point is strongly authenticated and uniquely identified, independent of the network carrier used. There is no need for a centralized firewall or VPN solutions, as there is a circle-based networking security in place.
### Key Benefits
Benefits :
- It finds the shortest possible paths between peers
- There's full security through end-to-end encrypted messaging
- It allows for peer2peer links like meshed wireless
- It can survive broken internet links and re-route when needed
- There is full security through end-to-end encrypted messaging
- It allows for peer-to-peer links, like meshed wireless
- It can survive broken Internet links and re-route when needed
- It resolves the shortage of IPv4 addresses
Whereas current computer networks depend heavily on very centralized design and configuration, this networking concept breaks this mold by making use of a global-spanning tree to form a scalable IPv6 encrypted mesh network. This is a peer-to-peer implementation of a networking protocol.
Whereas current computer networks depend heavily on very centralized design and configuration, this networking concept breaks this mould by making use of a global spanning tree to form a scalable IPv6 encrypted mesh network. This is a peer-to-peer implementation of a networking protocol.
The following table illustrates high-level differences between traditional networks like the internet, and the planetary threefold network:
The following table illustrates the high-level differences between traditional networks like today's Internet, and the Planetary Network created by ThreeFold:
| Characteristic | Traditional | Mycelium |
| --------------------------------------------------------------- | ----------- | ----------------- |
@ -32,18 +32,17 @@ The following table illustrates high-level differences between traditional netwo
## What are the problems solved here?
The internet as we know it today doesnt conform to a well-defined topology. This has largely happened over time - as the internet has grown, more and more networks have been “bolted together. The lack of defined topology gives us some unavoidable problems:
The Internet as we know it today doesn't conform to a well-defined topology. This has largely happened over time - as the Internet has grown, more and more networks have been “bolted together”. The lack of a defined topology gives us some unavoidable problems:
- The routing tables that hold a “map” of the internet are huge and inefficient
- There isnt really any way for a computer to know where it is located on the internet relative to anything else
- Its difficult to examine where a packet will go on its journey from source to destination without actually sending it
- Its very difficult to install reliable networks into locations that change often or are non-static, i.e. wireless mesh networks
- The routing tables that hold a “map” of the Internet are huge and inefficient
- There isn't really any way for a computer to know where it is located on the Internet relative to anything else
- It is difficult to examine where a packet will go on its journey, from source to destination, without actually sending it
- It is very difficult to install reliable networks into locations that change often or are non-static, i.e. wireless mesh networks
These problems have been partially mitigated (but not really solved) through centralization - rather than your computers at home holding a copy of the global routing table, your ISP does it for you. Your computers and network devices are configured just to “send it upstream” and to let your ISP decide where it goes from there, but this does leave you entirely at the mercy of your ISP who can redirect your traffic anywhere they like and to inspect, manipulate or intercept it.
These problems have been partially mitigated (but not really solved) through centralization - rather than your computers at home holding a copy of the global routing table, your ISP does it for you. Your computers and network devices are configured just to “send it upstream” and to let your ISP decide where it goes from there, but this does leave you entirely at the mercy of your ISP, who can redirect your traffic anywhere they like and inspect, manipulate, or intercept it.
In addition, wireless meshing requires you to know a lot about the network around you, which would not typically be the case when you have outsourced this knowledge to your ISP. Many existing wireless mesh routing schemes are not scalable or efficient, and do not bridge well with existing networks.
![](img/planetary_net.jpg)
The planetary network is a continuation and implementation of the [Planetary Network](https://Planetary Network-network.github.io/about.html) network initiative. This technology is in beta but has been proven to work already quite well.
The Planetary Network is a continuation and implementation of the [Yggdrasil](https://yggdrasil-network.github.io/about.html) network initiative. This technology is in beta but has already been proven to work quite well.

View File

@ -1,23 +1,23 @@
# ThreeFold Network Technology Overview
Decentralized networking platform allowing any compute and storage workload to be connected together on a private (overlay) network and exposed to the existing internet network. The Peer2Peer network platform allows any workload to be connected over secure encrypted networks which will look for the shortest path between the nodes.
ThreeFold's decentralized networking platform allows any compute and storage workload to be connected together on a private (overlay) network and exposed to the existing Internet network. The peer-to-peer network platform allows any workload to be connected over secure encrypted networks, which will look for the shortest path between nodes.
### Secure mesh overlay network (peer2peer)
### Secure Mesh Overlay Network (Peer-to-Peer)
Z_NET is the foundation of any architecture running on the TF Grid. It can be seen as a virtual private datacenter and the network allows all of the *N* containers to connect to all of the *(N-1)* other containers. Any network connection is a secure network connection between your containers, it creates peer 2 peer network between containers.
ZNet is the foundation of any architecture running on the TF Grid. It can be seen as a virtual private data center, and the network allows all of the *N* containers to connect to all of the *(N-1)* other containers. Any network connection is a secure network connection between your containers; it creates a peer-to-peer network between containers.
![alt text](net1.png)
No connection is made with the internet. The ZNet is a single tenant network and by default not connected to the public internet. Everything stays private. For connecting to the public internet, a Web Gateway is included in the product to allows for public access if and when required.
No connection is made with the Internet. The ZNet is a single tenant network and by default not connected to the public Internet. Everything stays private. For connecting to the public Internet, a Web Gateway is included in the product to allow for public access, if and when required.
### Redundancy
As integrated with [WebGW](webgw):
As integrated with [Web Gateway (WebGW)](webgw):
![alt text](net2.png)
- Any app can get (securely) connected to the internet by any chosen IP address made available by ThreeFold network farmers through [WebGW](webgw)
- Any app can get (securely) connected to the Internet by any chosen IP address made available by ThreeFold network farmers through [WebGW](webgw)
- An app can be connected to multiple web gateways at once; the DNS round-robin principle will provide load balancing and redundancy
- An easy clustering mechanism where web gateways and nodes can be lost and the public service will still be up and running
- Easy maintenance. When containers are moved or re-created, the same end user connection can be reused as that connection is terminated on the Web Gateway. The moved or newly created Web Gateway will recreate the socket to the Web Gateway and receive inbound traffic.

View File

@ -1,42 +1,38 @@
# TF Grid Web Gateway
# TFGrid WebGW
The Web Gateway is a mechanism to connect the private networks to the open Internet, in such a way that there is no direct connection between internet and the secure workloads running in the ZMachines.
The Web Gateway is a mechanism to connect private networks to the open Internet in such a way that there is no direct connection between the Internet and the secure workloads running in the ZMachines.
![](img/webgateway.jpg)
### Key Benefits
- Separation between where compute workloads are and where services are exposed
- Redundant
- Each app can be exposed on multiple webgateways at once
- Support for many interfaces...
- Redundancy: Each app can be exposed on multiple web gateways at once
- Support for many interfaces
- Helps resolve the shortage of IPv4 addresses
### Implementation
Some 3nodes supports gateway functionality (configured by the farmers). A 3node with gateway configuration can then accept gateway workloads and then forward traffic to ZMachines that only have Planetary Network (planetary network) or Ipv6 addresses.
Some 3Nodes support gateway functionality (this is configured by the farmers). A 3Node with gateway configuration can then accept gateway workloads and forward traffic to ZMachines that only have Planetary Network or IPv6 addresses.
The gateway workloads consists of a name (prefix) that need to be reserved on the block chain first. Then the list of backend IPs. There are other flags that can be set to control automatic TLS (please check terraform documentations for the exact details of a reservation).
The gateway workload consists of a name (prefix) that first needs to be reserved on the blockchain, plus a list of backend IPs. There are other flags that can be set to control automatic TLS (please check the Terraform documentation for the exact details of a reservation).
Once the 3node receives this workloads, the network configure proxy for this name and the Planetary Network IPs.
Once the 3Node receives this workload, the network configures a proxy for this name and the Planetary Network IPs.
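As a rough illustration of what such a reservation carries, the sketch below shows the three pieces mentioned above (reserved name, backend IPs, a TLS-related flag) as a plain Python structure. The field names and values are hypothetical and do not reflect the actual Terraform schema; always refer to the official Terraform documentation for the real format.
```python
# Hypothetical illustration only; field names do not match the real Terraform schema.
gateway_workload = {
    "name": "myapp",                   # prefix, reserved on the blockchain first
    "backends": [                      # ZMachine addresses traffic is forwarded to
        "http://[planetary-or-ipv6-address-1]:8080",
        "http://[planetary-or-ipv6-address-2]:8080",
    ],
    "tls_passthrough": False,          # example flag controlling automatic TLS handling
}
```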
### Security
ZMachines have to have a Planetary Network IP or any other IPv6 (also IPv4 are accepted), it means that any person who is connected to the Planetary Network, can also reach the ZMachine without the need for a proxy.
ZMachines have to have a Planetary Network IP or any other IPv6 (IPv4 is also accepted). This means that any person connected to the Planetary Network can also reach the ZMachine without the need for a proxy.
So it's up to the ZMachine owner/maintainer to make sure it is secured and only have the required ports open.
So it's up to the ZMachine owner/maintainer to make sure it is secured and that only the required ports are open.
### Redundant Network Connection
![](img/redundant_net.jpg)
### Unlimited Scale
![](img/webgw_scaling.jpg)
The network architecture is a pure scale-out network system. It can scale to unlimited size; there is simply no bottleneck. Network "supply" is created by network farmers, and network "demand" is done by TF Grid users.
The network architecture is a pure scale-out network system, it can scale to unlimited size, there is simply no bottleneck. Network "supply" is created by network farmers, and network "demand" is done by TF Grid users. Supply and demand scale independently, for supply there can be unlimited network, farmers providing the web gateways on their own 3nodes, and unlimited compute farmers providing 3nodes for compute and storage. The demand side is driven by developers creating software that runs on the grid, system integrators creating solutions for enterprises. This demand side is exponentially growing for data processing and storage use cases.
Supply and demand scale independently. For supply, there can be unlimited network farmers providing web gateways on their own 3Nodes, and unlimited compute farmers providing 3Nodes for compute and storage. The demand side is driven by developers creating software that runs on the grid, system integrators creating solutions for enterprises, and so on. Globally, there is exponentially growing demand for data processing and storage use cases.

Binary file not shown (added; size: 170 KiB)

Binary file not shown (added; size: 137 KiB)

Binary file not shown (added; size: 293 KiB)

Binary file not shown (added; size: 138 KiB)

View File

@ -4,53 +4,53 @@
> TODO: need to upgrade image, also digital twin needs to be named '3bot'
The owner of the NFT can upload the data using one of our supported interfaces
The owner of the NFT can upload the data using one of our supported interfaces:
- http upload (everything possible on https://nft.storage/ is also possible on our system)
- filesystem
- HTTP upload (everything possible on https://nft.storage/ is also possible on our system)
- Filesystem
Every person in the world can retrieve the NFT (if allowed) and the data will be verified when doing so. The data is available everywhere in the world using multiple interfaces again (IPFS, HTTP(S), ...). Caching happens on global level. No special software or account on threefold is needed to do this.
Anyone in the world can retrieve the NFT (if allowed) and the data will be verified when doing so. The data is available anywhere in the world using multiple interfaces again (IPFS, HTTP(S) etc.). Caching happens on a global level. No special software or account on ThreeFold is needed to do this.
The NFT system uses a super reliable storage system underneath which is sustainable for the planet (green) and ultra secure and private. The NFT owner also owns the data.
The NFT system operates on top of a very reliable storage system which is sustainable for the planet and ultra secure and private. The NFT owner also owns the data.
## Benefits
## The Benefits
#### Persistence = owned by the data user (as represented by digital twin)
![](img/nft_storage.jpg)
Is not based on a shared-all architecture.
The system is not based on a shared-all architecture.
Whoever stores the data has full control over
Whoever stores the data has full control over:
- where data is stored (specific locations)
- redundancy policy used
- how long should the data be kept
- CDN policy (where should data be available and how long)
- Where data is stored (specific locations)
- The redundancy policy which is used
- How long the data is kept
- CDN policy (where the data is available and for how long)
#### Reliability
- data cannot be corrupted
- data cannot be lost
- each time data is fetched back hash (fingerprint) is checked, if issues autorecovery happens
- all data is encrypted and compressed (unique per storage owner)
- data owner chooses the level of redundancy
- Data cannot be corrupted
- Data cannot be lost
- Each time data is fetched back, the hash (fingerprint) is checked. If there are any issues, autorecovery occurs (see the sketch after this list)
- All data is encrypted and compressed (unique per storage owner)
- Data owner chooses the level of redundancy
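A minimal sketch, assuming the fingerprint is a plain content hash, of the verify-on-fetch behaviour described in the list above; the real system uses its own fingerprinting and regeneration machinery, so `fetch` and `repair` are stand-ins supplied by the caller.
```python
import hashlib
from typing import Callable

def fetch_verified(fetch: Callable[[], bytes],
                   repair: Callable[[], bytes],
                   expected_fingerprint: str) -> bytes:
    """Fetch data, check its fingerprint, and trigger autorecovery on a mismatch."""
    data = fetch()
    if hashlib.sha256(data).hexdigest() == expected_fingerprint:
        return data
    # Fingerprint mismatch: rebuild the object from redundant fragments.
    data = repair()
    if hashlib.sha256(data).hexdigest() != expected_fingerprint:
        raise IOError("autorecovery failed: data still does not match fingerprint")
    return data
```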
#### Lookup
- multi URL & storage network support (see further the interfaces section)
- Multi URL & storage network support (see more in the interfaces section)
- IPFS, HyperDrive URL schema
- unique DNS schema (with long key which is globally unique)
- Unique DNS schema (with long key which is globally unique)
#### CDN support (with caching)
#### CDN Support
Each file (movie, image) stored is available on many places worldwide.
Each file (movie, image etc.) stored is available in many locations worldwide.
Each file gets a unique url pointing to the data which can be retrieved on all locations.
Each file gets a unique url pointing to the data which can be retrieved from all these locations.
Caching happens on each endpoint.
Caching happens at each endpoint.
#### Self Healing & Auto Correcting Storage Interface
@ -58,9 +58,9 @@ Any corruption e.g. bitrot gets automatically detected and corrected.
In case of a HD crash or storage node crash the data will automatically be expanded again to fit the chosen redundancy policy.
#### Storage Algoritm = Uses Quantum Safe Storage System as base
#### The Storage Algorithm Uses the Quantum Safe Storage System As Its Base
Not even a quantum computer can hack data as stored on our QSSS.
Not even a quantum computer can hack data stored on our QSSS.
The QSSS is a super innovative storage system which works at planetary scale and has many benefits compared to shared and/or replicated storage systems.
@ -74,15 +74,15 @@ Storage uses upto 10x less energy compared to classic replicated system.
The stored data is available over multiple interfaces at once.
| interface | |
| Interface | |
| -------------------------- | ----------------------- |
| IPFS | ![](img/ipfs.jpg) |
| http(s) on top of Digital Twin | ![](img/http.jpg) |
| syncthing | ![](img/syncthing.jpg) |
| filesystem | ![](img/filesystem.jpg) |
| HTTP(S) on top of Digital Twin | ![](img/http.jpg) |
| Syncthing | ![](img/syncthing.jpg) |
| Filesystem | ![](img/filesystem.jpg) |
This allows ultimate flexibility from enduser perspective.
This allows ultimate flexibility from the end user perspective.
The object (video,image) can easily be embedded in any website or other representation which supports http.
The object (video, image etc.) can easily be embedded in any website or other representation which supports http.

View File

@ -1,42 +1,42 @@
# Quantum Safe Storage Algorithm
The Quantum Safe Storage Algorithm is the heart of the Storage engine. The storage engine takes the original data objects and creates data part descriptions that it stores over many virtual storage devices (ZDB/s)
The Quantum Safe Storage Algorithm is the heart of the storage engine. The storage engine takes the original data objects and creates data part descriptions that it stores over many virtual storage devices (ZDBs).
Data gets stored over multiple ZDBs in such a way that data can never be lost.
Unique features:
- data always append, can never be lost
- even a quantum computer cannot decrypt the data
- is spread over multiple sites, sites can be lost, data will still be available
- protects for datarot.
- Data is always appended and can never be lost
- Even a quantum computer cannot decrypt the data
- Data is spread over multiple sites; even if some sites are lost, the data will still be available
- Protects from datarot
## Why
## The Problem
Today we produce more data than ever before. We could not continue to make full copies of data to make sure it is stored reliably. This will simply not scale. We need to move from securing the whole dataset to securing all the objects that make up a dataset.
Today we produce more data than ever before. We cannot continue to make full copies of data to make sure it is stored reliably. This will simply not scale. We need to move from securing the whole dataset to securing all the objects that make up a dataset.
ThreeFold is using space technology to store data (fragments) over multiple devices (physical storage devices in TFNodes). The solution does not distribute and store parts of an object (file, photo, movie...) but describes the part of an object. This could be visualized by thinking of it as equations.
ThreeFold is using space technology to store data fragments over multiple devices (physical storage devices in TFNodes). The solution does not distribute and store parts of an object (file, photo, movie etc.) but describes the part of an object. This can be visualized by thinking of it as equations.
## How is it done today
## How Data Is Stored Today
![alt text](storage_today.png)
In most distributed systems as used on the Internet or in blockchain land today the data will get replicated (sometimes after sharding, which means distributed based on the content of the file and spread out over the world).
In most distributed systems, as used on the Internet or in blockchain today, the data will get replicated (sometimes after sharding, which means distributed based on the content of the file and spread out over the world).
This leads to a lot of overhead and minimal control over where the data is.
In well-optimized systems the overhead will be 400%, but in some it can be orders of magnitude higher to reach a reasonable redundancy level.
## The Quantum Safe storage System Works Differently
## The Quantum Safe Storage System Works Differently
![alt text](qsss_overview.png)
ThreeFold has developed a new storage algoritm which is more efficient, ultra reliable and allows you full control over where you want your data to be stored.
ThreeFold has developed a new storage algorithm which is more efficient, ultra reliable and gives you full control over where your data is stored.
ThreeFold's approach is different, lets try to visualize by means of simple analogy with equations.
ThreeFold's approach is different. Let's try to visualize this new approach with a simple analogy using equations.
Let a,b,c,d.... be the parts of that original object. You could create endless unique equations using these parts. A simple example: let's assume we have 3 parts of original objects that have the following values:
Let a,b,c,d.... be the parts of the original object. You could create endless unique equations using these parts. A simple example: let's assume we have 3 parts of original objects that have the following values:
```
a=1
@ -44,7 +44,7 @@ b=2
c=3
```
(and for reference the part of real-world objects is not a simple number like `1` but a unique digital number describing the part, like the binary code for it `110101011101011101010111101110111100001010101111011.....`).
(and for reference the part of the real-world objects is not a simple number like `1` but a unique digital number describing the part, like the binary code for it `110101011101011101010111101110111100001010101111011.....`).
With these numbers we could create endless amounts of equations:
@ -56,11 +56,11 @@ With these numbers we could create endless amounts of equations:
4: 2b+a-c=2
5: 5c-b-a=12
......
etc.
```
Mathematically we only need 3 to describe the content (=value) of the fragments. But creating more adds reliability. Now store those equations distributed (one equation per physical storage device) and forget the original object. So we no longer have access to the values of a, b, c and see, and we just remember the locations of all the equations created with the original data fragments.
Mathematically we only need 3 to describe the content (value) of the fragments. But creating more adds reliability. Now store those equations distributed (one equation per physical storage device) and forget the original object. So we no longer have access to the values of a, b, c and we just remember the locations of all the equations created with the original data fragments.
Mathematically we need three equations (any 3 of the total) to recover the original values for a, b or c. So we send a request to retrieve 3 of the many equations, and the first 3 to arrive are good enough to recalculate the original values. Three randomly retrieved equations are:
@ -77,36 +77,36 @@ And this is a mathematical system we could solve:
Now that we know `a=1` we could solve the rest `c=a+2=3` and `b=c-a=2`. And we have from 3 random equations regenerated the original fragments and could now recreate the original object.
The redundancy and reliability in such system comes in the form of creating (more than needed) equations and storing them. As shown these equations in any random order could recreate the original fragments and therefore redundancy comes in at a much lower overhead.
The redundancy and reliability in this system results from creating equations (more than needed) and storing them. As shown these equations in any random order can recreate the original fragments and therefore redundancy comes in at a much lower overhead.
In our system we don't don this with 3 parts but with thousands.
In our system we don't do this with 3 parts but with thousands.
### Example of 16/4
![](img/quantumsafe_storage_algo.jpg)
Each object is fragmented into 16 parts. So we have 16 original fragments for which we need 16 equations to mathematically describe them. Now let's make 20 equations and store them dispersedly on 20 devices. To recreate the original object we only need 16 equations, the first 16 that we find and collect which allows us to recover the fragment and in the end the original object. We could lose any 4 of those original 20 equations.
Each object is fragmented into 16 parts. So we have 16 original fragments for which we need 16 equations to mathematically describe them. Now let's make 20 equations and store them dispersedly on 20 devices. To recreate the original object we only need 16 equations. The first 16 that we find and collect allows us to recover the fragment and in the end the original object. We could lose any 4 of those original 20 equations.
The likelihood of losing 4 independent, dispersed storage devices at the same time is very low. Since we have continuous monitoring of all of the stored equations, we can create additional equations immediately when one of them goes missing, giving us auto-regeneration of lost data and a self-repairing storage system.
> The overhead in this example is 4 out of 20, which is a mere **20%** instead of **400%**.
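The equation analogy maps directly onto erasure coding. The sketch below, assuming NumPy is available, encodes 3 fragments into 5 equations using a Vandermonde coefficient matrix (any 3 rows of which are independent) and then reconstructs the originals from a random subset of 3. The real system works on thousands of fragments and over finite fields rather than floating point, so this is only an illustration of the principle.
```python
import numpy as np

fragments = np.array([1.0, 2.0, 3.0])       # a, b, c from the example above

# Vandermonde rows [1, i, i^2] guarantee that any 3 of the 5 equations are independent.
coeffs = np.vander(np.arange(1, 6), N=3, increasing=True).astype(float)
stored = coeffs @ fragments                  # the 5 "equation results" we disperse

# Lose any two equations, keep three at random, and solve for the fragments.
keep = np.random.choice(5, size=3, replace=False)
recovered = np.linalg.solve(coeffs[keep], stored[keep])

print(np.allclose(recovered, fragments))     # True: a, b and c are recovered
```
In the 16/4 example the coefficient matrix would simply have 20 rows and 16 columns, and any 16 of the 20 stored results would be enough to solve for the original fragments.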
## Can be used for content delivery.
## Content Delivery
This system can be used as backend for content delivery networks.
This system can be used as backend for content delivery networks.
e.g. Content distribution Policy could be a 10/50 distribution which means, the content of a movie would be distributed over 60 locations from which we can loose 50 at the same time.
E.g. a content distribution policy could be a 10/50 distribution, which means the content of a movie would be distributed over 60 locations, of which we can lose 50 at the same time.
If someone now wants to download the data, the first 10 locations who answer fastest will provide enough of the data parts to allow the data to be rebuild.
If someone now wants to download the data, the first 10 locations to answer will provide enough of the data parts to rebuild the data.
The overhead here is more, compared to previous example, but stil order of magnitude lower compared to other cdn systems.
The overhead here is higher compared to the previous example, but still orders of magnitude lower than in other CDN systems.
## The Quantum Safe Storage System is capable to avoid Datarot
## The Quantum Safe Storage System Can Avoid Datarot
Datarot is the fact that data storage degrades over time and becomes unreadable, on e.g. a harddisk.
Datarot is the phenomenon where stored data degrades over time and becomes unreadable, e.g. on a hard disk.
The storage system provided by ThreeFold intercepts this silent data corruption, making that it can pass by unnotified.
The storage system provided by ThreeFold intercepts this silent data corruption, ensuring that data does not rot.
> see also https://en.wikipedia.org/wiki/Data_degradation
> See also https://en.wikipedia.org/wiki/Data_degradation

View File

@ -1,44 +1,42 @@
i![](img/qsss_intro.png)
# Quantum Safe Filesystem
![](img/qsss_intro.png)
A redundant filesystem that can store PBs (millions of gigabytes) of information.
Unique features:
- Unlimited scalable (many petabytes) filesystem
- Unlimited scalability (many petabytes)
- Quantum Safe:
- On the TFGrid, no farmer knows what the data is about
- Even a quantum computer cannot decrypt
- On the TFGrid no farmer knows what the data is
- Even a quantum computer cannot decrypt the data
- Data can't be lost
- Protection from datarot; data will autorepair
- Data is kept for ever (data does not get deleted)
- Data is kept forever (data does not get deleted)
- Data is dispersed over multiple sites
- Sites can go down, data not lost
- Even if the sites go down the data will not be lost
- Up to 10x more efficient than storing on classic storage cloud systems
- Can be mounted as filesystem on any OS or any deployment system (OSX, Linux, Windows, Docker, Kubernetes, TFGrid, ...)
- Can be mounted as filesystem on any OS or any deployment system (OSX, Linux, Windows, Docker, Kubernetes, TFGrid etc.)
- Compatible with ± all data workloads (not high performance data driven workloads like a database)
- Self-healing: when a node or disk is lost, the storage system can get back to the original redundancy level
- Helps with compliance to regulations like GDPR (as the hosting facility has no view on what is stored, information is encrypted and incomplete)
- Hybrid: can be installed onsite, public, private, ...
- Helps with compliance for regulations like GDPR (as the hosting facility has no view on what is stored: information is encrypted and incomplete)
- Hybrid: can be installed onsite, public and private
- Read-write caching on encoding node (the front end)
![](img/planet_fs.jpg)
## Mount Any Files in your Storage Infrastructure
## Mount Any Files In Your Storage Infrastructure
The QSFS is a mechanism to mount any file system (in any format) on the grid, in a quantum-secure way.
The QSFS is a mechanism to mount any file system (in any format) on the grid, in a quantum secure way.
This storage layer relies on 3 primitives of the ThreeFold technology:
- [0-db](https://github.com/threefoldtech/0-db) is the storage engine.
It is an always append database, which stores objects in an immutable format. It allows keeping the history out-of-the-box, good performance on disk, low overhead, easy data structure and easy backup (linear copy and immutable files).
It is an always append database, which stores objects in an immutable format. It allows history to be kept out-of-the-box, good performance on disk, low overhead, easy data structure and easy backup (linear copy and immutable files).
- [0-stor-v2](https://github.com/threefoldtech/0-stor_v2) is used to disperse the data into chunks by performing 'forward-looking error-correcting code' (FLECC) encoding on it and sending the fragments to safe locations.
It takes files in any format as input, encrypts the file with AES based on a user-defined key, then FLECC-encodes the file and spreads out the result
to multiple 0-DBs. The number of generated chunks is configurable to make it more or less robust against data loss through unavailable fragments. Even if some 0-DBs are unreachable, you can still retrieve the original data, and missing 0-DBs can even be rebuilt to have full consistency. It's an essential element of the operational backup.
to multiple 0-DBs. The number of generated chunks is configurable to make it more or less robust against data loss through unavailable fragments. Even if some 0-DBs are unreachable, you can still retrieve the original data, and missing 0-DBs can even be rebuilt to have full consistency. It is an essential element of the operational backup (a simplified sketch of this write path follows after this list).
- [0-db-fs](https://github.com/threefoldtech/0-db-fs) is the filesystem driver which uses 0-DB as a primary storage engine. It manages the storage of directories and metadata in a dedicated namespace and file payloads in another dedicated namespace.
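To make the write path above concrete, here is a deliberately simplified sketch. It assumes the `cryptography` Python package, a fixed chunk size and plain Python dicts standing in for 0-db namespaces; the real 0-stor-v2 pipeline additionally applies FLECC encoding so that fragments can be lost without losing data.

```python
# Illustrative sketch of "encrypt with a user key, then disperse the chunks".
# Not the actual 0-stor-v2 implementation: no erasure coding, placeholder stores.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def store_file(payload: bytes, key: bytes, stores: list, chunk_size: int = 64):
    """Encrypt a payload and disperse the ciphertext chunks over several stores."""
    nonce = os.urandom(12)
    sealed = AESGCM(key).encrypt(nonce, payload, None)       # AES with a user-defined key
    chunks = [sealed[i:i + chunk_size] for i in range(0, len(sealed), chunk_size)]
    index = []                                                # metadata: chunk -> store
    for i, chunk in enumerate(chunks):
        store_id = i % len(stores)                            # round-robin dispersal
        stores[store_id][i] = chunk
        index.append(store_id)
    return nonce, index

def load_file(nonce: bytes, index: list, key: bytes, stores: list) -> bytes:
    """Collect the chunks back from their stores and decrypt."""
    sealed = b"".join(stores[store_id][i] for i, store_id in enumerate(index))
    return AESGCM(key).decrypt(nonce, sealed, None)

stores = [dict(), dict(), dict()]                             # stand-ins for three 0-db namespaces
key = AESGCM.generate_key(bit_length=256)
nonce, index = store_file(b"some file payload " * 20, key, stores)
assert load_file(nonce, index, key, stores) == b"some file payload " * 20
```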
@ -51,7 +49,6 @@ This concept scales forever, and you can bring any file system on top of it:
- any backup system
- an ftp-server
- IPFS and Hypercore distributed file sharing protocols
- ...
![](img/quantum_safe_storage_scale.jpg)

View File

@ -1,11 +1,11 @@
# Zero Knowledge Proof Storage system.
# Zero Knowledge Proof Storage System
The quantum save storage system is zero knowledge proof compliant. The storage system is made up / split into 2 components: The actual storage devices use to store the data (ZDB's) and the Quantum Safe Storage engine.
The Quantum Safe Storage System is zero-knowledge proof compliant. The storage system is split into 2 components: the actual storage devices used to store the data (ZDBs) and the Quantum Safe Storage engine.
![](img/qss_system.jpg)
The zero proof knowledge compliancy comes from the fact the all the physical storage nodes (tf_nodes) can proof that they store a valid part of what data the quantum safe storage engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all the QSSE storage devices have a valid part of the original information. The storage devices however have no idea what the original stored data is as they only have a part (description) of the origina data and have no access to the original data part or the complete origal data objects.
The zero-knowledge proof compliance comes from the fact that all of the physical storage nodes (TFNodes) can prove that they store a valid part of the data that the quantum safe storage engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all of the QSSE storage devices hold a valid part of the original information. The storage devices, however, have no idea what the original stored data is, as they only hold a part (description) of the original data and have no access to the original data parts or the complete original data objects.
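As a purely illustrative sketch (not the actual QSSE protocol), a simple hash-based spot check shows how a device can demonstrate that its fragment is intact without the verifier ever holding the complete object:

```python
# Toy illustration: at upload time the engine precomputes a few one-time
# challenges per fragment, hands the fragment to a device and keeps only the
# challenges; later it spot-checks the device with an unused challenge.
import hashlib, os

def precompute_challenges(fragment: bytes, count: int = 4):
    """Run once by the engine before handing the fragment to a device."""
    challenges = []
    for _ in range(count):
        nonce = os.urandom(16)
        expected = hashlib.sha256(nonce + fragment).hexdigest()
        challenges.append((nonce, expected))
    return challenges

def device_respond(fragment: bytes, nonce: bytes) -> str:
    """Run by the storage device: prove it still holds its fragment."""
    return hashlib.sha256(nonce + fragment).hexdigest()

fragment = b"an encrypted, incomplete slice of some object"
challenges = precompute_challenges(fragment)

nonce, expected = challenges.pop()                            # engine picks an unused challenge
assert device_respond(fragment, nonce) == expected            # device still has the data
assert device_respond(b"tampered data", nonce) != expected    # a corrupted copy fails the check
```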

View File

@ -1,10 +1,10 @@
<!-- ![](img/qsss_intro_.jpg) -->
i![](img/qsss_intro.png)
# Quantum Safe Storage System
Our storage architecture follows the true peer2peer design of the TF grid. Any participating node only stores small incomplete parts of objects (files, photos, movies, databases...) by offering a slice of the present (local) storage devices. Managing the storage and retrieval of all of these distributed fragments is done by a software that creates development or end-user interfaces for this storage algorithm. We call this '**dispersed storage**'.
![](img/qsss_intro.png)
Our storage architecture follows the true peer2peer design of the TF grid. Any participating node only stores small incomplete parts of objects (files, photos, movies, databases etc.) by offering a slice of the present (local) storage devices. Managing the storage and retrieval of all of these distributed fragments is done by a software that creates development or end-user interfaces for this storage algorithm. We call this '**dispersed storage**'.
![](img/qsss_intro_0_.jpg)

View File

@ -1,14 +1,15 @@
# S3 Service
If you like an S3 interface you can deploy this on top of our eVDC, it works very well together with our [quantumsafe_filesystem](qss_filesystem.md).
A good opensource solution delivering an S3 solution is [min.io](https://min.io/).
Thanks to our quantum safe storage layer, you could build fast, robust and reliable storage and archiving solutions.
A typical setup would look like:
![](img/storage_architecture_1.jpg)
To deploy MinIO using Helm 3, you can consult [this guide](https://forum.threefold.io/t/minio-operator-with-helm-3/4294).
# S3 Service
If you would like an S3 interface, you can deploy one on top of our eVDC; it works very well together with our [Quantum Safe File System](qss_filesystem.md).
A good open-source solution for delivering S3 is [min.io](https://min.io/).
Thanks to our Quantum Safe Storage Layer, you can build fast, robust and reliable storage and archiving solutions.
A typical setup would look like this:
![](img/storage_architecture_1.jpg)
<!-- TODO: link to manual on cloud how to deploy minio, using helm (3.0 release) -->
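As an illustration, interacting with such a MinIO deployment from code goes through standard S3-style calls. The sketch below uses the MinIO Python SDK; the endpoint, credentials and bucket name are placeholders to be replaced with the values of your own deployment.

```python
# Minimal sketch of talking to a MinIO deployment on the eVDC through its S3 API.
from minio import Minio

client = Minio(
    "minio.example.com:9000",        # placeholder endpoint of your MinIO service
    access_key="YOUR_ACCESS_KEY",    # placeholder credentials
    secret_key="YOUR_SECRET_KEY",
    secure=True,
)

bucket = "backups"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload and retrieve an object through the standard S3-style calls
client.fput_object(bucket, "archive-2024.tar.gz", "/tmp/archive-2024.tar.gz")
client.fget_object(bucket, "archive-2024.tar.gz", "/tmp/restored.tar.gz")
```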

View File

@ -2,7 +2,9 @@
![](img/iac_overview.jpg)
IAC = DevOps is a process framework that ensures collaboration between Development and Operations Team to deploy code to production environment faster in a repeatable and automated way. ... In simple terms, DevOps can be defined as an alignment between development and IT operations with better communication and collaboration.
IAC = DevOps is a process framework that ensures collaboration between the Development and Operations teams to deploy code to the production environment faster, in a repeatable and automated way.
In simple terms, DevOps can be defined as an alignment between development and IT operations with better communication and collaboration.
![](img/smartcontract_iac.png)

View File

@ -4,20 +4,20 @@
ThreeFold has developed a highly efficient infrastructure layer for a new internet.
Providing Internet & Cloud Capacity is as easy as buying or building a node and connecting it to the internet.
Providing Internet & Cloud capacity is as easy as buying or building a node and connecting it to the internet.
![](3node_simple.png)
A lot of capacity has been deployed in the world, ThreeFold farmers buy a computer and they connect it to the internet, as such they use our Operating system to provide Internet capacity to the world.
A lot of capacity has already been deployed around the world: ThreeFold farmers buy a computer, connect it to the internet and run our operating system to provide Internet capacity to the world.
There are multiple ways how people can interactive without our platform (as developer or IT expert = sysadmin):
There are multiple ways in which people can interact with our platform (as a developer or an IT expert, i.e. a sysadmin):
![](img/architecture_usage.png)
All technology is developed by ThreeFold and is opensource, this technology is being used for the ThreeFold grid see https://www.threefold.io which is a deployment of a new internet which is green, safe and owned by all of us.
All technology is developed by ThreeFold and is open source. This technology is used for the ThreeFold Grid (see https://www.threefold.io), the deployment of a new internet which is green, safe and owned by all of us.
This document explains how we are a missing layer for the full web2, web3 and blockchain world.
This document explains how ThreeFold is the missing layer for the full web2, web3 and blockchain world.
This leads to a system which is highly scalable.

View File

@ -1,38 +1,38 @@
![](img/zos22.png)
# Zero-OS
![](img/zos22.png)
![](img/zero_os_overview.jpg)
## ZOS compute storage overview
## ZOS Compute & Storage Overview
![](img/zos_overview_compute_storage.jpg)
## ZOS network overview
## ZOS Network Overview
![](img/zos_network_overview.jpg)
### Imagine an operating system with the following benefits
### Imagine An Operating System With The Following Benefits
- upto 10x more efficient for certain workloads (e.g. storage)
- no install required
- all files are deduped for the VM's, containers and the ZOS itself, no more data duplicated filesystems
- the hacking footprint is super small, which leads to much safer systems
- every file is fingerprinted and gets checked at launch time of an application
- there is no shell or server interface on the operating system
- the networks are end2end encrypted between all Nodes
- there is the possibility to completely disconnect the compute/storage from the network service part which means hackers have a lot less chance to get to the data
- a smart contract for IT layer allows groups of people to deploy IT workloads with concensus and full control
- all workloads which can run on linux can run on Zero-OS but in a much more controlled, private and safe way
- Up to 10x more efficient for certain workloads (e.g. storage)
- No install required
- All files are deduplicated for the VMs, containers and ZOS itself; no more duplicated filesystem data
- The hacking footprint is very small which leads to much safer systems
- Every file is fingerprinted and gets checked at launch time of an application
- There is no shell or server interface on the operating system
- The networks are end2end encrypted between all Nodes
- It is possible to completely disconnect the compute/storage from the network service part which means hackers have a lot less chance to access the data
- A smart contract for the IT layer allows groups of people to deploy IT workloads with consensus and full control
- All workloads which can run on linux can run on Zero-OS but in a much more controlled, private and safe way
> ThreeFold has created an operating system from scratch, we used the Linux kernel and its components and then build further on it, we have been able to achieve all the above benefits.
> ThreeFold has created an operating system from scratch. We used the Linux kernel and its components and then built further on it. We have been able to achieve all of the above benefits.
## The requirements for our TFGrid based on Zero OS are:
## The Requirements For Our TFGrid Based On Zero OS Are:
- **Autonomy**: TF Grid needs to create compute, storage and networking capacity everywhere. We could not rely on a remote (or a local) maintenance of the operating system by owners or operating system administrators
- **Simplicity**: An operating system should be simple, able to exist anywhere, for anyone, and be good for the planet
- **Stateless**: In a grid (peer2peer) set up, the sum of the components is providing a stable basis for single elements to fail and not bring the whole system down. Therefore, it is necessary for single elements to be stateless, and the state needs to be stored within the grid.
- **Autonomy**: TF Grid needs to create compute, storage and networking capacity everywhere. We could not rely on a remote (or a local) maintenance of the operating system by owners or operating system administrators.
- **Simplicity**: An operating system should be simple, able to exist anywhere for anyone, and be good for the planet.
- **Stateless**: In a grid (peer-to-peer) set up, the sum of the components provides a stable basis for single elements to fail and not bring the whole system down. Therefore, it is necessary for single elements to be stateless, and the state needs to be stored within the grid.

View File

@ -4,32 +4,32 @@
GEP stands for Grid Enhancement Proposal.
A GEP is a document (can be on forum) providing information to the ThreeFold community, or describing a new feature for the TFGrid or its processes or any other change as is managed by the TF DAO.
A GEP is a document (can be on the forum) providing information to the ThreeFold community, describing a new feature or processes for the TFGrid and any other change that is managed by the TF DAO.
## Requirements before a GEP can be voted
## Requirements Before A GEP Can Be Voted
- The why and how need to be sufficiently defined and specified
- The GEP always needs to be in line with planet/people first values
- There needs to be a forum post linked to it which gave people the time to discuss the GEP
- The GEP should provide a concise technical specification of the feature and a rationale for the feature if relevant.
- The GEP always needs to be in line with the values planet first and people first
- There needs to be a forum post linked to it which gives the community time to discuss the GEP
- The GEP should provide a concise technical specification of the new feature and a rationale for the feature if relevant
## Process
- A GEP gets registered in TFChain.
- Community has to approve a GEP
- Guardians can block a GEP and ask for a re-vote and explain why re-vote is needed
- e.g. if mistake in process would have been made
- e.g. if something would happen which puts any of the entities related to TF in danger, e.g. not in line with T&C (legally)
- e.g. if values are not followed (planet/people first)
- e.g. if a change violates our security (e.g. introduction of non opensource in our TF Stack)
- e.g. if a change has potential to damage someone in our community
- Guardians and or TFCoop Team will implement
- A GEP gets registered in the TFChain
- The community has to approve the GEP
- Guardians can block a GEP and ask for a re-vote and explain why the re-vote is needed. For example if:
- A mistake was made in the process
- Something happens which puts any of the entities related to TF in danger, e.g. something not legally in line with the T&C
- Values are not followed (planet/people first)
- A change violates our security (e.g. introduction of non opensource in our TF Stack)
- A change has the potential to damage someone in our community
- Guardians and/or the TFCoop team will then implement the GEP
## Voting Power
- 1 TFT, 1 Vote (NEW for TFGrid 3.14 and still needs to be voted by a GEP at March 21 2024)
- 1 TFT = 1 Vote (NEW for TFGrid 3.14 and still needs to be voted by a GEP on March 21st 2024)
*some inspiration comes from https://www.python.org/dev/peps/pep-0001*
*Some inspiration comes from https://www.python.org/dev/peps/pep-0001*

View File

@ -1,8 +1,7 @@
![alt text](governance.png)
# Governance
Governance is important to us, there is a wish from TF DMCC to outsource the operations and promotion to a cooperative and the community for which we suggest a combination of GEP, Guardians and a Cooperative
Community governance is important to us. TF DMCC is working to outsource the operations and promotion of the TF Grid with a community-led / cooperative approach. To achieve this, we are releasing a series of Grid Enhancement Proposals (GEPs), searching for a group of Guardians, and establishing the ThreeFold Cooperative.
## Planned for 3.14 (our next release)
@ -12,5 +11,9 @@ Governance is important to us, there is a wish from TF DMCC to outsource the ope
## Treasury
This section needs to be filled in by the team.

View File

@ -5,25 +5,24 @@
TFGrid 3.13 requires 9 guardians to start with.
Requirements
Requirements:
* Good knowledge how to use Linux to allow you to deploy
and upgrade your validator starting from code.
* At least 4h time available per week (will be more at start)
* Willingness to participate in the forum of TF and coordinate with Coders or TFTech who is main contributor of code.
* You feel aligned with our values of planet and people first
* Willingness to look at open issues at least 5 times a week (there is chat and email notification) unless during the holiday period.
* Choose a backup which can help you when needed (sick, holidays, …).
* Complete your candidacy on our forum on … Fill in the motivation, your profile, …
* Get at least 3 people from the community to endorse your skills and motivation.
* Good knowledge of how to use Linux to be able to deploy and upgrade your validator starting from code
* At least 4 hours of time available per week (this will be more at start)
* Willingness to participate in the TF forum and coordinate with coders or TF9 who is the main contributor of code
* You are aligned with our values of planet and people first
* Willingness to look at open issues on Gitea (our work tool) at least 5 times a week (there are chat and email notifications), except during the holiday period
* Choose a backup who can help you when needed (e.g. when sick or on holidays)
* Complete your application on our forum (fill in your motivation and profile)
* Get at least 3 people from the community to endorse your skills and motivation
What do you get in return
What you will get in return:
* Eternal recognition from your Regional Internet community (-:
* TBDk USD per month in TFT as provided by the *TFCOOP
* The TFT will come from the *TFCOOP Treasury
## They will host a validator
## Guardians will host a validator
- see [validator](validator.md)
See documentation on [validators](validator.md) for more information.

View File

@ -2,11 +2,11 @@
# ThreeFold Cooperative
ThreeFold Dubai would like to hand over the day2day operation of the TFGrid to a cooperative.
ThreeFold Dubai would like to hand over the day to day operation of the TFGrid to a cooperative.
- Cooperative are very trustworthy decentralized structures which allow upto millions of people to be part of a common goal.
- Cooperative members vote for their directors which might be the Cooperative Founders
- Untill this is done we keep on operating from ThreeFold DMCC
- Cooperatives are trustworthy decentralized structures which allow millions of people to be part of a common goal
- Cooperative members vote for their directors
- Until this is done we will continue to operate from ThreeFold DMCC
## Cooperative Founder
@ -14,36 +14,37 @@ TF Dubai is looking for 9 ThreeFold Cooperative Founders to setup the COOP Struc
## Cooperative Director
Are voted by the members, but at start are the same as the 9 Founders.
The cooperative directors are chosen by the members by vote. When the coop is first created the 9 founders will act as de facto directors until the first vote.
They are like the board of a Cooperative and need to structure how the Cooperative Members will vote and be part of the governance.
The Cooperative will have a team which will do the day2day for the TFGrid.
The directors are like the board of a cooperative and help to structure how the cooperative members will vote and be part of the governance.
The Cooperative is funded by utilization of the grid (40% at start).
The cooperative will have a team which will carry out the day to day operations for the TFGrid.
Cooperative Directors and the team are renumerated for their contributions.
The cooperative is funded by the utilization of the grid (40% at start).
## Cooperative Startup
Cooperative Directors and the team are remunerated for their contributions.
## Starting The Cooperative
- ThreeFold Dubai grants 2 million TFT to TF COOP (TF DMCC and others can grant more)
- TFCoop Founders will look for initial funding (sell the TFT, look for extra)
- TFCoop Founders will setup the cooperative in chosen jurisdiction (might be NL)
- TFCoop Founders will work with TFTech for technical implementation of membership (as NFT), ...
- TFCoop founders will look for initial funding (sell the TFT, look for extra)
- TFCoop founders will set up the cooperative in a chosen jurisdiction (might be NL)
- TFCoop founders will work with TF9 for the technical implementation of membership (as NFT etc.)
## Link to TFGrid DAO
> [The DAO on the TFGrid](tfdao.md) stays intact as is working today.
> [The DAO on the TFGrid](tfdao.md) will stay intact, working as it does today.
## TFCoop Functions
- Promotion & Communication
- Operate the TFGrid Marketplace ( which is regulated marketplace selling/buying capacity on TFGrid)
- Operate the Tokens (exchanges, ...)
- Operate the tools (forum, websites, ...)
- Promotion and communication
- Operate the TFGrid Marketplace (a regulated marketplace for buying and selling capacity on the TFGrid)
- Operate the Tokens (exchanges etc.)
- Operate the tools (forum, websites, manuals etc.)
- Collaborate with the Guardians for the operations of the TFChain and supporting tools
- Define & operate the benefits for the Members
- Distribute the TFT Fees for Utilization.
- Define and operate the benefits for the members
- Distribute the TFT fees for utilization.
## Status
> At start of our commercial operation (2024) we operate our Cooperative functions though our company in dubai called ThreeFold DMCC, somewhere in 2024 the cooperative will be established and all functions in relation to TFGrid transfered.
> At the start of our commercial operation (2024) we operate our cooperative functions through our company in Dubai called ThreeFold DMCC. At some point in 2024 the cooperative will be established and all functions in relation to the TFGrid will be transferred.

View File

@ -1,23 +1,23 @@
# Validators
Validators & staking as is happening in TFGrid 3.14.
Validators & staking on TFGrid 3.14.
- 9 [guardians](tfgrid3:guardians.md) are needed
- they run a full TF Validator Stack (technically)
- anyone can stake TFT on that stack
- They will run a full TF Validator Stack (technically)
- Anyone can stake TFT on that stack
reward
Reward:
- 10 percent of revenue over TFGrid in TFT or INCA
- this percentage can change over time and will be voted by GEP
- a validator gets a monthly income to support his/her work as well in e.g. INCA or TFT
- 10% of revenue over TFGrid in TFT or INCA
- This percentage can change over time and will be voted by GEP
- A validator receives a monthly income, e.g. in INCA or TFT, to support his/her work
## what is deployed on a validator
## What Is Deployed On A Validator
- TFChain Node (our blockchain node)
- TFHub (let people go from docker to tfgrid ZOS flists)
- TFHub (lets people go from docker to tfgrid ZOS flists)
- TFBootstrap (how to install new node)
- Explorer (has all stats)
- Validator Code (keep the grid clean & healthy)
- Validator Code (keeps the grid clean & healthy)
- Monitoring Software
- Bridges (needs to be migrated carefully, might take some time)

View File

@ -2,7 +2,7 @@
# TFGrid 3.14
This knowledge base tries to bring together information which is relevant for our new version of the Grid.
This knowledge base brings together all of the information which is relevant for our new version of the Grid.
> Use the forum to discuss the content or edit it directly in our git system; see [https://forum.threefold.io](https://forum.threefold.io/c/dao/rfc/81)

View File

@ -1,19 +1,19 @@
# ThreeFold Messaging
## Principles
## Key Principles
- We don't sell anything, we just show people what you can do with the project and where we are headed
- We are clear about the status of the project including its size and features
- We don't sell anything - we just show people what they can do with the project and the overarching aim of where it is heading
- We clearly communicate the current status of the project including its size and features
## Narrative
## The Narrative
> Decentralized Autonomous Cloud
- Cloud means any service
- Cloud means any service or workload
- Decentralized means everyone is a participant. Everyone can deliver and/or use cloud services
- Autonomous means cloud service providers don't need to be experts, the software is capable to run 100% autonomously
- Autonomous means cloud service providers don't need to be experts. The software is capable of running 100% autonomously.

View File

@ -2,7 +2,7 @@
# Promotion
TF Coop will organize the promotion activities for the future but for now we urgently need more people stepping up.
The TF Coop will organize promotion activities in the future but at the moment we urgently need more people to step up and help with promoting the project.
## Current Team

View File

@ -1,4 +1,3 @@
<h1> TFGrid 3.14 Farming Updates </h1>
<h2>Table of Contents</h2>
@ -101,4 +100,4 @@ This GEP can be done once voting system has been changed and GEP 1, 2 and 3 are
We highly encourage everyone to discuss this exciting phase of the ThreeFold adventure on the forum.
Please share your thoughts and feedback on the ongoing forum post [here](https://forum.threefold.io/t/feedback-on-farming-logic-as-suggested-for-tfgrid-3-14/4275).
Please share your thoughts and feedback on the ongoing forum post [here](https://forum.threefold.io/t/feedback-on-farming-logic-as-suggested-for-tfgrid-3-14/4275).

View File

@ -2,9 +2,9 @@
# Farming Reward TFGrid 3.14
The amount of ThreeFold_Token earned by farmers is relative to the amount of compute, storage or network capacity they provide to the ThreeFold Grid.
The amount of ThreeFold tokens earned by farmers is relative to the amount of compute, storage or network capacity they provide to the ThreeFold Grid.
### 1. Proof-of-Capacity
### 1. Proof Of Capacity
The reward is in line with capacity provided by the farmer based on:
@ -12,20 +12,26 @@ The reward is in line with capacity provided by the farmer based on:
* Memory Capacity (RAM)
* Storage Capacity (SSD/HDD)
- There will be a maximum 1 Billion TFT farmed (new from 3.14)
- Once we reach the 1 billion TFT farming will stop and all reward will be utilization based and rewards based on achievements based on location, quality, ... (the cooperative and GEP's will define this future)
<br>
- A maximum of 1 Billion TFT can be farmed (new from 3.14)
- Once we reach 1 billion TFT, farming will stop and all rewards will be based on utilization and achievements (e.g. location and quality of the node). The cooperative and GEP's will define this in the future.
<br>
> See simulator in [https://dashboard.grid.tf/#/farms/simulator](https://dashboard.grid.tf/#/farms/simulator/)
More information [in our manual](https://manual.grid.tf/knowledge_base/farming/farming_toc.html)
### 2. Proof-of-Utilization
### 2. Proof Of Utilization
- 50% of utilized capacity which comes over portal from *TFCOOP will be given to farmer
- 40% of utilized capacity goes to *TFCOOP (which is a decentralized org and all income will be used to the benefits of the community)
- 50% of utilized capacity which comes over the portal from *TFCOOP will be given to the farmer
- 40% of utilized capacity goes to *TFCOOP
- 10% of utilized capacity goes to Stakers on the Validators (see the worked example below)
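As a worked example, using a hypothetical payment amount of 1000 TFT, the split listed above works out as follows:

```python
# Illustrative arithmetic only; the payment amount is hypothetical.
def split_utilization(payment_tft: float) -> dict:
    """Apply the 50/40/10 proof-of-utilization split to one payment."""
    return {
        "farmer": payment_tft * 0.50,
        "tfcoop": payment_tft * 0.40,
        "stakers": payment_tft * 0.10,
    }

print(split_utilization(1000))  # {'farmer': 500.0, 'tfcoop': 400.0, 'stakers': 100.0}
```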
> DISCLAIMER: ThreeFold Dubai organizes the farming (proof of capacity) process. This process is the result of the execution of code written by open source developers (zero-os and minting code) and a group of people - who checks this process voluntarily. No claims can be made or damages asked for to any person or group related to ThreeFold Dubai like but not limited to the different councils.
<br>
> DISCLAIMER: ThreeFold Dubai organizes this process. This process is the result of the execution of code written by open source developers (zero-os and minting code) and a group of people who check this process voluntarily. No claims can be made or damages sought against any person or group related to ThreeFold Dubai, including but not limited to the different councils. This process changes for TFGrid 3.X by means of the TF DAO.

View File

@ -1,11 +1,11 @@
# Tokenomics TFGrid 3
## Principles
## Key Principles
- keep it all as simple as possible
- max 1 billion TFT (used to be 4 billion)
- Keep it as simple as possible
- Maximum 1 billion TFT (used to be 4 billion)
## More info
- [farming reward](farming_reward.md)
- [token overview](token_overview.md)
- [Farming rewards](farming_reward.md)
- [Token overview](token_overview.md)

View File

@ -1,4 +1,4 @@
# Overview Threefoldtoken on the stellar network
# Overview of Threefold Tokens on the Stellar Network
> Status 13 March 2024:

View File

@ -1,55 +1,108 @@
# Who is behind the project ?
# About Us
## who created the project
## People
A group of passionate people who want to build a new foundational layer for an better working internet, more like the Internet was originally intended. They operate from TFTech (now TF9) and ThreeFold Dubai.
We are a group of passionate people who want to build a new foundational layer for a better working internet, more like how the Internet was originally intended to be.
The project was started by some Internet & Cloud veterans who now want to hand over to a much more decentralized environment.
More than 1000 farmers made this project possible, we are supper grateful for all their support.
More than 1000 farmers made this project possible and we are super grateful for all their support.
## the project mission
## The Project's Mission
- we help others to create any AI, web2 or web3 solution
- TFGrid is a foundational layer which can be used by any AI/web2/3 project
- enable everyone to build on top of TFGrid who needs cloud capacity (network, gpu, cpu, storage, ...)
ThreeFold Grid (TFGrid) is a foundational layer which can be used by any web2/3 project. It empowers individuals requiring cloud resources such as network, GPU, CPU, and storage to leverage its capabilities for their projects. We help others to create any web2 or web3 solution on our grid.
## the project purpose
## The Project's Purpose
- deliver a new infrastructure layer to build a new internet on top off
- this infra layer is sovereign, more scalable, peer2peer, co-owned, ...
- project delivers network, compute, storage constructs to work with anyone who needs it for their own usecases
- [see tech high level description](tech:key_innovations.md)
The purpose is to deliver a new infrastructure layer to build a new internet on top of. This layer is sovereign, more scalable, peer-to-peer and co-owned. The project delivers network, compute and storage to anyone who needs it for their own use cases.
[See the high level tech description.](tech:key_innovations.md)
## who can benefit from TFGrid capabilities most?
## Who Benefits The Most From The TFGrid's Capabilities?
- developers for CI/CD
- countries to deploy their own internet
- DePIN movement
- CI/CD developers
- Countries: to deploy their own internet
- The DePIN movement
- Social Media Apps
- ...
## what has project achieved
## The Project's Key Achievements To Date
- grid
- country
- community
- ...
- The ThreeFold Grid: connected across ±60 countries by independent people and organizations called ThreeFold farmers, live and usable in its third generation technology
- Partnerships with the governments of Tanzania, to deploy physical infrastructure and introduce coding academies and innovation hubs across the country, and Zanzibar, to deploy physical infrastructure locally and introduce a digital free zone
- An extensive community of 1000+ farmers and several thousand other token holders, grid users, and supporters/advocates of the project
- Our strategic partnerships with key DePIN players: more will be announced soon
## how biased is the project
- not biased, we are tech platform
- keep [values simple](values:planet_people_first) but strong
- project intent is to clearly want to explain what the capabilities are of our tech and let everone decide for themselves what to do with it
## Values
- Our core [values](values:planet_people_first) are simple and strong
- We are fundamentally a tech platform and are therefore not biased
- The project's intent is to clearly explain our tech's capabilities and let everyone decide for themselves what to do with it
## Messaging
- project is here to make it easy for everyone to play their role (see above)
- project believe very much in an internet where everyone can communicate freely, owns their own data, ...
- project wants to help others to really become decentralized (which is not the case today)
- project wants to help governments to have their own Internet
- project wants to help datacenter providers to build datacenters in other ways
- The project believes in an internet where everyone can communicate freely and own their own data
- The project wants to help others to become truly decentralized (which is not the case with the current Internet)
- The project wants to help countries to have their own sovereign Internet
- The project wants to help data center providers to build improved data centers
## Ecosystem
> see [ecosystem doc](ecosystem.md)
## Participants
### Farmers
Farmers connect computers to our new internet to provide GPU, Storage and Compute capacity and make it available over Mycelium to all participants.
### Network Bridgers
- Make a bridge between the old and new internet
- Provide bandwidth to the TFGrid
### Guardians
- Protect the network
- Provide governance
- Keep all relevant services up and running to allow the TFGrid to function
### Farming Pools (new for a TBD version)
- Help farmers to be more effective
- Training & Support
- Improve uptime and provide authenticity
### TF Coop
- Our governance layer (DAO tooling can be extended)
- Everyone is part of the Cooperative
- The Coop directors will build this organization to streamline our expansion
- Gives everyone a voice and environment to operate from
### TFNode Suppliers
- Build & Sell TFNodes to our participants.
- Minimal service/support needs to be delivered
### Service / Solution Providers
- They create & provide solutions and/or services on top of TFGrid
- Customers of these solutions pay in INCA (providing value to our mutual credit currency)
- Each solution has T&C
- Each solution needs to be supported
### Technology Providers
- Create opensource technology which can be used in the TFGrid
- Grants might be available to reward Technology Providers
- TF9 (previously TFTech) is an example of a Technology Provider
### DePIN Partners
- Our team is in active discussions in search of a party (company, project, community) who will support our launch into the DePIN world
- This party (or parties) will:
- Actively promote our project in the DePIN space.
- Explain our tokens and integrate with other currencies
- Integrate with the rest of the DePIN ecosystem
- Organize community growth
- Organize grants for the rest of the community
- Grow value for the whole ecosystem and let it "FLOW"

View File

@ -1,24 +1,24 @@
![](img/ppp.png)
We want to be at the forefront of a growing movement, more and more organizations are being pushed by their stakeholders to prioritize sustainability and planet positive policies.
We want to be at the forefront of a growing movement where more and more organizations are being pushed by their stakeholders to prioritize sustainability and planet positive policies.
We are not swimming against the stream; we are part of a group of leaders of change, making the zeitgeist of the 21st century real.
## Core Values
Anything we do needs to improve our planet's situation (climate change, regenerative, respect resources, …) and help the people around us.
Anything we do needs to improve our planet's situation (help limit climate change, be regenerative and respect resources) and help the people around us.
As a result of doing so, we as investors of time and money will have created most value and will get the benefits from our efforts.
By following these core values, we as investors of time and money will have created the most value possible and will benefit from our efforts.
## Tools
There are some practical tools which help us to achieve above values.
There are some practical tools which will help us to achieve the above values: opensource, simplicity and authenticity.
## OpenSource
OpenSource has been an incredible tool for us, it allowed us to grow and even exit more than 7 companies.
OpenSource has been an incredible tool for us, it has allowed us to grow and even exit more than 7 companies.
Open-source software offers transparency, allowing for enhanced security. It fosters innovation and rapid development by leveraging global contributions. The collaborative nature reduces costs and accelerates problem-solving.