development to main #47

Merged
mik-tf merged 47 commits from development into main 2024-05-03 12:30:46 +00:00
2351 changed files with 247 additions and 67649 deletions
Showing only changes of commit 60b6c802ed


@ -247,7 +247,7 @@ Contract cost/hour = CU cost/hour + SU cost/hour
### Applying the Dedicated Node Discount
There's a default `50%` discount for renting a node, this discount is not related to the staking discount. For more information on dedicated node discounts, please [read this section](../../../documentation/dashboard/deploy/node_finder.md#dedicated-nodes).
There's a default `50%` discount for renting a node; this discount is not related to the staking discount. For more information on dedicated node discounts, please [read this section](../../../documentation/dashboard/deploy/dedicated_machines.md).
```
Cost with 50% discount = 35.72532 * 0.5
                       = 17.86266
```


@ -19,21 +19,21 @@
| Cloud Units | Description | mUSD | mTFT |
| ----------------- | ------------------------------------------------ | ------------------ | ------------------ |
| Compute Unit (CU) | typically 2 vcpu, 4 GB mem, 50 GB storage | {{#include ../../../values/CU_MUSD_HOUR.md}}/hour | {{#include ../../../values/CU_MTFT_HOUR.md}}/hour |
| Storage Unit (SU) | typically 1 TB of netto usable storage (*) | {{#include ../../../values/SU_MUSD_HOUR.md}}/hour | {{#include ../../../values/SU_MTFT_HOUR.md}}/hour |
| Network Unit (NU) | 1 GB transfer, bandwidth as used by TFGrid users | {{#include ../../../values/NU_MUSD_HOUR.md}}/hour | {{#include ../../../values/NU_MTFT_HOUR.md}}/hour |
| Compute Unit (CU) | typically 2 vcpu, 4 GB mem, 50 GB storage | !!wiki.include page:'manual:cu_musd_hour.md'/hour | !!wiki.include page:'manual:cu_mtft_hour.md'/hour |
| Storage Unit (SU) | typically 1 TB of net usable storage (*) | !!wiki.include page:'manual:su_musd_hour.md'/hour | !!wiki.include page:'manual:su_mtft_hour.md'/hour |
| Network Unit (NU) | 1 GB transfer, bandwidth as used by TFGrid users | !!wiki.include page:'manual:nu_musd_hour.md'/hour | !!wiki.include page:'manual:nu_mtft_hour.md'/hour |
<br>
| Network Addressing | Description | mUSD | mTFT |
| ------------------ | ------------------------------------------ | --------------------- | --------------------- |
| IPv4 Address | Public Ip Address as used by a TFGrid user | {{#include ../../../values/IP_MUSD_HOUR.md}}/hour | {{#include ../../../values/IP_MTFT_HOUR.md}}/hour |
| Unique Name | Usable as name on webgateways | {{#include ../../../values/NAME_MUSD_HOUR.md}} | {{#include ../../../values/NAME_MTFT_HOUR.md}}/hour |
| Unique Domain Name | Usable as dns name on webgateways | {{#include ../../../values/DNAME_MUSD_HOUR.md}}/hour | {{#include ../../../values/DNAME_MTFT_HOUR.md}}/hour |
| IPv4 Address | Public IP address as used by a TFGrid user | !!wiki.include page:'manual:ip_musd_hour.md'/hour | !!wiki.include page:'manual:ip_mtft_hour.md'/hour |
| Unique Name | Usable as a name on web gateways | !!wiki.include page:'manual:name_musd_hour.md'/hour | !!wiki.include page:'manual:name_mtft_hour.md'/hour |
| Unique Domain Name | Usable as a DNS name on web gateways | !!wiki.include page:'manual:dname_musd_hour.md'/hour | !!wiki.include page:'manual:dname_mtft_hour.md'/hour |
- mUSD = 1/1000 of USD, mTFT = 1/1000 of TFT
- TFT pricing pegged to USD (pricing changes in line with TFT/USD rate)
- The current TFT to USD price is {{#include ../../../values/tft_value.md}} USD
- The current TFT to USD price is !!wiki.include page:'manual:tft_value.md' USD
- pricing is calculated per hour on TFGrid 3.0
> Please check our [Cloud Pricing for utilization sheet](https://docs.google.com/spreadsheets/d/1E6MpGs15h1_flyT5AtyKp1TixH1ILuGo5tzHdmjeYdQ/edit#gid=2014089775) for more details.
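As a quick sketch of the cost formula above (Contract cost/hour = CU cost/hour + SU cost/hour), the snippet below shows how the pieces combine. The unit prices are placeholders standing in for the values normally pulled in by the pricing include files; the TFT rate is likewise illustrative.

```python
# Sketch of: Contract cost/hour = CU cost/hour + SU cost/hour.
# CU_MUSD_HOUR and SU_MUSD_HOUR are placeholder prices (mUSD per unit per hour);
# the real values come from the pricing include files above.

CU_MUSD_HOUR = 10.0   # placeholder: mUSD per Compute Unit per hour
SU_MUSD_HOUR = 8.0    # placeholder: mUSD per Storage Unit per hour

def contract_cost_musd_per_hour(cu: float, su: float) -> float:
    """Hourly contract cost in mUSD for a workload using `cu` CUs and `su` SUs."""
    return cu * CU_MUSD_HOUR + su * SU_MUSD_HOUR

def musd_to_mtft(musd: float, tft_usd_price: float) -> float:
    """Convert mUSD to mTFT; pricing is pegged to USD, so the TFT amount
    changes with the TFT/USD rate."""
    return musd / tft_usd_price

cost = contract_cost_musd_per_hour(cu=2, su=1)
print(cost)                      # 28.0 with the placeholder prices
print(musd_to_mtft(cost, 0.5))   # 56.0 mTFT at an illustrative 0.5 USD/TFT
```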


@ -14,7 +14,7 @@
## Resource Units Overview
The ThreeFold Zero-OS and TFChain software translates resource units (CRU, MRU, HRU, SRU) into cloud units (CU, SU) for farming reward purposes.
Resource units are used to measure and convert capacity on the hardware level into cloud units: CU & SU.


@ -41,12 +41,3 @@ You can access the ThreeFold Dashboard on different TF Chain networks.
- Regarding browser support, we currently support only the Google Chrome browser (and thus the Brave browser), with more browsers to be supported soon.
- Deploys one thing at a time.
- Might take some time to deploy a solution like Peertube, so you should wait a bit until it's fully running.
## Dashboard Backups
If the main Dashboard URLs are not working for any reason, the following URLs can be used. Those Dashboard URLs are fully independent of the main Dashboard URLs shown above.
- [https://dashboard.02.dev.grid.tf](https://dashboard.02.dev.grid.tf) for Dev net
- [https://dashboard.02.qa.grid.tf](https://dashboard.02.qa.grid.tf) for QA net
- [https://dashboard.02.test.grid.tf](https://dashboard.02.test.grid.tf) for Test net
- [https://dashboard.02.grid.tf](https://dashboard.02.grid.tf) for Main net


@ -18,7 +18,6 @@ Easily deploy your favourite applications on the ThreeFold grid with a click of
- [ownCloud](../solutions/owncloud.md)
- [Peertube](../solutions/peertube.md)
- [Presearch](../solutions/presearch.md)
- [Static Website](../solutions/static_website.md)
- [Subsquid](../solutions/subsquid.md)
- [Taiga](../solutions/taiga.md)
- [Umbrel](../solutions/umbrel.md)


@ -5,11 +5,12 @@ Here you will find everything related to deployments on the ThreeFold grid. This
- Checking the cost of a deployment using [Pricing Calculator](./pricing_calculator.md)
- Finding a node to deploy on using the [Node Finder](./node_finder.md)
- Deploying your desired workload from [Virtual Machines](../solutions/vm_intro.md), [Orchestrators](./orchestrators.md), or [Applications](./applications.md)
- Renting your own node on the ThreeFold grid from [Dedicated Machines](./dedicated_machines.md)
- Consulting [Your Contracts](./your_contracts.md) on the TFGrid
- Finding or publishing Flists from [Images](./images.md)
- Updating or generating your SSH key from [SSH Keys](./ssh_keys.md)
![](../img/dashboard_deploy.png)
![](../img/sidebar_2.png)
***
@ -19,6 +20,7 @@ Here you will find everything related to deployments on the ThreeFold grid. This
- [Node Finder](./node_finder.md)
- [Virtual Machines](../solutions/vm_intro.md)
- [Orchestrators](./orchestrators.md)
- [Dedicated Machines](./dedicated_machines.md)
- [Applications](./applications.md)
- [Your Contracts](./your_contracts.md)
- [Images](./images.md)


@ -2,119 +2,39 @@
<h2>Table of Contents</h2>
- [Overview](#overview)
- [Filters](#filters)
- [Node Details](#node-details)
- [Gateway Nodes](#gateway-nodes)
- [Dedicated Nodes](#dedicated-nodes)
- [Reservation](#reservation)
- [Billing \& Pricing](#billing--pricing)
- [Discounts](#discounts)
- [GPU Nodes](#gpu-nodes)
- [Nodes](#nodes)
- [GPU Support](#gpu-support)
- [GPU Support Links](#gpu-support-links)
***
## Overview
## Nodes
The Node Finder page provides a more detailed view for the nodes available on the ThreeFold grid with detailed information and statistics about nodes.
The Node Finder page provides a more detailed view of the nodes available on the ThreeFold grid, with detailed information and statistics about any of the available nodes.
![](../img/dashboard_node_finder.png)
![](../img/nodes.png)
## Filters
You can get a node with the desired specifications using the filters available in the nodes page.
You can use the filters to narrow your search and find a node with the desired specifications.
![](../img/nodes_filters.png)
![](../img/dashboard_node_finder_filters_1.png)
You can see all of the node details by clicking on a node record.
![](../img/dashboard_node_finder_filters_2.png)
![](../img/nodes_details.png)
You can use the toggle buttons to filter your search.
## GPU Support
- Dedicated nodes
- Gateways nodes
- GPU nodes
- Rentable nodes
![GPU support](../img/gpu_filter.png)
You can choose a location for your node, with filters such as region and country. This can be highly useful for edge cloud projects.
- A new filter for GPU supported node is now available on the Nodes page.
- GPU count
- Filtering capabilities based on the model / device
Filtering nodes by their status (up, down, standby) can also improve your search.
The details page shows the card information and its status (`reserved` or `available`). The ID needed during deployments is easily accessible, with a copy-to-clipboard button.
If your deployment has some minimum requirements, you can easily filter relevant nodes with the different resource filters.
![GPU details](../img/gpu_details.png)
## Node Details
Here's an example of how it looks when the GPU is reserved:
You can see all of the node details when you click on its row.
![GPU details](../img/gpu_details_reserved.png)
![](../img/dashboard_node_finder_node_view.png)
Note that the network speed test displayed in the Node Finder is updated every 6 hours.
## Gateway Nodes
To see only gateway nodes, enable **Gateways** in the filters.
![](../img/dashboard_node_finder_gateways.png)
## Dedicated Nodes
Dedicated machines are 3Nodes that can be reserved and rented entirely by one user. The user can thus reserve an entire node and use it exclusively to deploy solutions. This feature is ideal for users who want to host heavy deployments with the benefits of high reliability and cost effectiveness.
To see only dedicated nodes, enable **Dedicated Nodes** in the filters.
![](../img/dashboard_node_finder_dedicated.png)
### Reservation
When you have decided which node to reserve, you can easily rent it from the Node Finder page.
To reserve a node, simply click on `Reserve` on the node row.
![](../img/dashboard_node_finder_dedicated_reserve.png)
To unreserve a node, simply click on `Unreserve` on the node row.
![](../img/dashboard_node_finder_dedicated_unreserve.png)
Note that once you've rented a dedicated node that has a GPU, you can deploy GPU workloads.
### Billing & Pricing
- Once a node is rented, there is a fixed charge billed to the tenant regardless of deployed workloads.
- Any subsequent node contract deployed on a node with an active rent contract (where the same user creates the node contracts) can be excluded from billing (apart from public IP and network usage).
- Billing rates are calculated hourly on the TFGrid.
- While some of the documentation mentions a monthly price, the chain expresses pricing per hour. The monthly price shown within the manual is offered as a convenience to users, as it provides a simple way to estimate costs.
### Discounts
- Discounts received for renting a node on TFGrid internet capacity:
  - 50% for a dedicated node (TF Pricing policies)
  - A second-level discount of up to 60% based on the balance level; see [Discount Levels](../../../knowledge_base/cloud/pricing/staking_discount_levels.md)
- Discounts are calculated every time the grid bills by checking the available TFT balance on the user wallet and seeing if it is sufficient to receive a discount. As a result, if the user balance drops below the threshold of a given discount, the deployment price increases.
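The balance check described above can be sketched as follows. The tier thresholds and percentages here are hypothetical placeholders; the real levels are defined on TFChain and documented in the Discount Levels section linked above.

```python
# Illustrative sketch of how a staking discount could be picked at billing time.
# TIERS is made up for the example: (months of cost the balance must cover, discount).
TIERS = [(36, 0.60), (12, 0.40), (6, 0.30), (3, 0.20)]  # hypothetical values

def staking_discount(balance_tft: float, monthly_cost_tft: float) -> float:
    """Return the discount fraction the user qualifies for, given the TFT
    balance in the wallet and the monthly cost of the deployment."""
    if monthly_cost_tft <= 0:
        return 0.0
    months_covered = balance_tft / monthly_cost_tft
    for months_required, discount in TIERS:
        if months_covered >= months_required:
            return discount
    return 0.0  # balance below every threshold: no discount

# If the balance drops below a threshold, the effective price increases:
print(staking_discount(balance_tft=1200, monthly_cost_tft=100))  # 0.4
print(staking_discount(balance_tft=250, monthly_cost_tft=100))   # 0.0
```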
## GPU Nodes
To see only nodes with GPU, enable **GPU Node** in the filters.
![](../img/dashboard_node_finder_gpu.png)
This will filter the nodes and only show nodes with a GPU. You can see information such as the model of the GPU and a GPU score.
![](../img/dashboard_node_finder_gpu2.png)
You can click on a given GPU node and see the GPU details.
![](../img/dashboard_node_finder_gpu3.png)
The ID needed during deployments is easily accessible, with a button to copy it to the clipboard.
### GPU Support
To use a GPU on the TFGrid, users need to rent a dedicated node. Once they have rented a dedicated node equipped with a GPU, users can deploy workloads on their dedicated GPU node.
### GPU Support Links
The ThreeFold Manual covers many ways to use a GPU node on the TFGrid. Read [this section](../../system_administrators/gpu/gpu_toc.md) to learn more.
The TF Dashboard is where nodes are reserved. The farmer can set the extra fees on the form, and the user can reserve the node and get its details (cost including the extra fees, GPU information).


@ -0,0 +1,3 @@
dashboard_tc.png
dashboard_portal_terms_conditions.png
profile_manager1.png


@ -43,7 +43,7 @@ Deploy a new full virtual machine on the Threefold Grid
- `Mycelium` to enable Mycelium on the virtual machine
- `Wireguard Access` to add a wireguard access to the Virtual Machine
- `GPU` flag to add GPU to the Virtual machine
- To deploy a Full VM with GPU, you first need to [rent a dedicated node](../../dashboard/deploy/node_finder.md#dedicated-nodes)
- To deploy a Full VM with GPU, you first need to [rent a dedicated node](../../dashboard/deploy/dedicated_machines.md)
- `Dedicated` flag to retrieve only dedicated nodes
- `Certified` flag to retrieve only certified nodes
- Choose the location of the node


@ -63,6 +63,7 @@ If you're not sure and just want the easiest, most affordable option, skip the p
* **Recommended**: {cpu: 4, memory: 16gb, diskSize: 1000gb }
* Or choose a **Custom** plan
* If you want to reserve a public IPv4 address, click on Network then select **Public IPv4**
* If you want a [dedicated](../deploy/dedicated_machines.md) and/or a certified node, select the corresponding option
* Choose the location of the node
* `Country`
* `Farm Name`


@ -1,53 +0,0 @@
<h1> Static Website </h1>
<h2>Table of Contents </h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deployment](#deployment)
---
## Introduction
Static Website is an application where a user provides a GitHub repository URL for the files to be automatically served online using Caddy.
## Prerequisites
- Make sure you have a [wallet](../wallet_connector.md)
- From the sidebar click on **Applications**
- Click on **Static Website**
## Deployment
![ ](./img/solutions_staticwebsite.png)
- Enter an instance name
- Enter a GitHub repository URL that needs to be cloned
- Enter the title for the cloned repository
- Select a capacity package:
- **Small**: {cpu: 1, memory: 2 , diskSize: 50 }
- **Medium**: {cpu: 2, memory: 4, diskSize: 100 }
- **Large**: {cpu: 4, memory: 16, diskSize: 250 }
- Or choose a **Custom** plan
- `Dedicated` flag to retrieve only dedicated nodes
- `Certified` flag to retrieve only certified nodes
- Choose the location of the node
- `Region`
- `Country`
- `Farm Name`
- Choose the node to deploy on
- Note: You can select a specific node with manual selection
- `Custom Domain` flag allows the user to use a custom domain
- Choose a gateway node to deploy your static website
Once this is done, you can see a list of all of your deployed instances:
![ ](./img/staticwebsite_list.png)
Click on the button **Visit** under **Actions** to go to your static website!


@ -8,7 +8,6 @@ The TFChain DAO (i.e. Decentralized Autonomous Organization) feature integrates
- [Prerequisites to Vote](#prerequisites-to-vote)
- [How to Vote for a Proposal](#how-to-vote-for-a-proposal)
- [The Goal of the Threefold DAO](#the-goal-of-the-threefold-dao)
- [Voting Weight](#voting-weight)
***
@ -40,17 +39,3 @@ To vote, you need to log into your Threefold Dashboard account, go to **TF DAO**
The goal of the DAO voting system is to gather the thoughts and will of the Threefold community and build projects that are aligned with the ethos of the project.
We encourage anyone to share their ideas. Who knows? Your sudden spark of genius might lead to an accepted proposal on the Threefold DAO!
## Voting Weight
The DAO votes are weighted as follows:
- Get all linked farms to the account
- Get all nodes per farm
- Get compute and storage units per node (CU and SU)
- Compute the weight of a farm:
```
2 * (sum of CU of all nodes) + (sum of SU of all nodes)
```
Voting weights are tracked per farm to keep it easy and traceable. Thus, if an account has multiple farms, the vote will be registered per farm.
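The steps above can be sketched as a small function; a minimal illustration of the stated weighting rule, not the on-chain implementation:

```python
# Sketch of the farm voting-weight rule above:
#   weight(farm) = 2 * (sum of CU of all nodes) + (sum of SU of all nodes)

def farm_weight(nodes: list) -> int:
    """Compute a farm's DAO voting weight from its nodes' cloud units (CU, SU)."""
    total_cu = sum(n["cu"] for n in nodes)
    total_su = sum(n["su"] for n in nodes)
    return 2 * total_cu + total_su

# An account with several farms gets one registered vote per farm:
farms = {
    "farm_a": [{"cu": 4, "su": 2}, {"cu": 2, "su": 1}],  # CU total 6, SU total 3
    "farm_b": [{"cu": 1, "su": 5}],
}
weights = {name: farm_weight(nodes) for name, nodes in farms.items()}
print(weights)  # {'farm_a': 15, 'farm_b': 7}
```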


@ -4,16 +4,13 @@
- [Introduction](#introduction)
- [Supported Networks](#supported-networks)
- [Create a Wallet](#create-a-wallet)
- [Import a Wallet](#import-a-wallet)
- [Process](#process)
***
## Introduction
To interact with TFChain, users can connect their TFChain wallet to the wallet connector available on the ThreeFold Dashboard.
You can create a new wallet or import an existing wallet.
To interact with TFChain, users need to set up a wallet connector.
## Supported Networks
@ -30,36 +27,16 @@ Currently, we're supporting four different networks:
![ ](./img/profile_manager1.png)
## Create a Wallet
## Process
To create a new wallet, open the ThreeFold Dashboard on the desired network, click on `Create Account`, enter the following information and click `Connect`.
Start entering the following information required to create your new profile.
- `Mnemonics`: The secret words of your Polkadot account. Click on the **Create Account** button to generate yours.
- `Email`: Enter a valid email address.
- `Password`: Choose a password and confirm it. This will be used to access your account.
![ ](./img/profile_manager2.png)
![](./img/dashboard_walletconnector_window.png)
- `Mnemonics` are the secret words of your Polkadot account. Click on the **Create Account** button to generate yours.
- `Password` is used to access your account
- `Confirm Password`
You will be asked to accept ThreeFold's Terms and Conditions:
After you finish typing your credentials, click on **Connect**. Once your profile gets activated, you should find your **Twin ID** and **Address** generated under your **_Mnemonics_** for verification. Also, your **Account Balance** will be available at the top right corner under your profile name.
![](./img/dashboard_terms_conditions.png)
Once you've set your credentials, clicked on **Connect** and accepted the terms and conditions, your profile will be activated.
Upon activation, you will find your **Twin ID**, **Address** and wallet current **balance** generated under your **Mnemonics**.
![](./img/dashboard_walletconnector_info.png)
Your current and locked balances will also be available at the top right corner of the dashboard. Here's an example of the balances you can find for your wallet. Some TFT is locked during utilization as the TFGrid bills you for your workloads and traffic.
![](./img/dashboard_balances.png)
## Import a Wallet
You can import an existing wallet by entering in `Mnemonics` the associated seed phrase or HEX secret of the existing wallet.
- To import a wallet created with the TF Dashboard, use the seed phrase provided when you created the account.
- To import a wallet or a farm created on the TF Connect app, use the TFChain HEX secret.
- From the menu, open **Wallet** -> **Wallet name** -> **Info symbol (i)**, and then reveal and copy the **TFChain Secret**.
When you import a new wallet, you can choose a new password and email address; only the mnemonics are needed to import an existing wallet on the dashboard.
![ ](./img/profile_manager3.png)


@ -88,4 +88,3 @@ For complementary information on the technology developed by ThreeFold, refer to
- [TFGrid Stacks](./grid_deployment/tfgrid_stacks.md)
- [Full VM Grid Deployment](./grid_deployment/grid_deployment_full_vm.md)
- [Grid Snapshots](./grid_deployment/snapshots.md)
- [Deploy the Dashboard](./grid_deployment/deploy_dashboard.md)


@ -2,8 +2,8 @@
<h2> Table of Contents </h2>
- [Zero-OS Hub](./flist_hub/zos_hub.md)
- [Generate an API Token](./flist_hub/api_token.md)
- [Zero-OS Hub](manual:zos_hub.md)
- [Generate an API Token](api_token.md)
- [Convert Docker Image Into Flist](./flist_hub/convert_docker_image.md)
- [Supported Flists](./grid3_supported_flists.md)
- [Flist Case Studies](./flist_case_studies/flist_case_studies.md)


@ -11,8 +11,8 @@
- [Upload your Existing Flist to Reduce Bandwidth](#upload-your-existing-flist-to-reduce-bandwidth)
- [Authenticate via 3Bot](#authenticate-via-3bot)
- [Get and Update Information Through the API](#get-and-update-information-through-the-api)
- [Public API Endpoints (No Authentication Required)](#public-api-endpoints-no-authentication-required)
- [Restricted API Endpoints (Authentication Required)](#restricted-api-endpoints-authentication-required)
- [Public API Endpoints - No Authentication Required](#public-api-endpoints---no-authentication-required)
- [Restricted API Endpoints - Authentication Required](#restricted-api-endpoints---authentication-required)
- [API Request Templates and Examples](#api-request-templates-and-examples)
***
@ -71,7 +71,7 @@ If your `jwt` contains memberof, you can choose which user you want to use by sp
See example below.
### Public API Endpoints (No Authentication Required)
### Public API Endpoints - No Authentication Required
- `/api/flist` (**GET**)
- Returns a json array with all repository/flists found
- `/api/repositories` (**GET**)
@ -84,7 +84,7 @@ See example below.
- `/api/flist/<repository>/<flist>` (**GET**)
- Returns json object with flist dumps (full file list)
### Restricted API Endpoints (Authentication Required)
### Restricted API Endpoints - Authentication Required
- `/api/flist/me` (**GET**)
- Returns json object with some basic information about yourself (authenticated user)
- `/api/flist/me/<flist>` (**GET**, **DELETE**)


@ -1,127 +0,0 @@
<h1>Deploy the Dashboard</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Create an SSH Tunnel](#create-an-ssh-tunnel)
- [Editor SSH Remote Connection](#editor-ssh-remote-connection)
- [Set the VM](#set-the-vm)
- [Build the Dashboard](#build-the-dashboard)
- [Dashboard Public Access](#dashboard-public-access)
- [Questions and Feedback](#questions-and-feedback)
***
## Introduction
We show how to deploy the Dashboard (devnet) on a full VM. To do so, we set an SSH tunnel and use the VSCodium Remote Explorer function. We will then be able to use a source-code editor to explore the code and see changes on a local browser.
We also show how to provide public access to the Dashboard by setting a gateway domain for your full VM deployment. Note that this method is not production-ready and should only be used to test the Dashboard.
## Prerequisites
- TFChain account with TFT
- [Deploy full VM with WireGuard connection](../../system_administrators/getstarted/ssh_guide/ssh_wireguard.md)
- [Make sure you can connect via SSH on the terminal](../../system_administrators/getstarted/ssh_guide/ssh_openssh.md)
In this guide, we use WireGuard, but you can use other connection methods, such as [Mycelium](../../system_administrators/mycelium/mycelium_toc.md).
## Create an SSH Tunnel
- Open a terminal and create an SSH tunnel
```
ssh -4 -L 5173:127.0.0.1:5173 root@10.20.4.2
```
Simply leave this window open and follow the next steps.
If you use an IPv6 address, e.g. with Mycelium, set `-6` in the line above instead of `-4`.
## Editor SSH Remote Connection
You can connect via SSH through the source-code editor to a VM on the grid. In this example, WireGuard is set.
- Add the SSH Remote extension to [VSCodium](https://vscodium.com/)
- Add a new SSH remote connection
- Set the following (adjust with your own username and host)
```
Host 10.20.4.2
HostName 10.20.4.2
User root
```
- Click on `Connect to host`
## Set the VM
We set the VM to be able to build the Dashboard.
```
apt update && apt install build-essential python3 -y
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
nvm install 18
npm install -g yarn
```
## Build the Dashboard
We now build the Dashboard.
Clone the repository, then install, build and run the Dashboard. Note that here it is called `playground`:
```
git clone https://github.com/threefoldtech/tfgrid-sdk-ts
cd tfgrid-sdk-ts/
yarn install
make build
make run project=playground
```
You can then access the dev net Dashboard on your local browser.
To stop running the Dashboard, simply enter `Ctrl-C` in the terminal window.
## Dashboard Public Access
> Note: This method is not production-ready. Use only for testing purposes.
Once you've tested the Dashboard with the SSH tunnel, you can explore how to access it from the public Internet. For this, we will create a gateway domain and bind the host to `0.0.0.0`.
On the Full VM page, [add a domain](../../dashboard/solutions/add_domain.md) to access your deployment from the public Internet.
- Under `Actions`, click on `Manage Domains`
- Go to `Add New Domain`
- Choose a gateway domain under `Select domain`
- Set the port 5173
- Click on `Add`
To run the Dashboard from the added domain, use this instead of the previous `make run` line:
```
cd packages/playground
yarn dev --host 0.0.0.0
```
You can then access the Dashboard from the domain you just created.
## Questions and Feedback
If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.


@ -2,11 +2,8 @@
The whole TFGrid source code is open source, and instances of the grid can be deployed by anyone thanks to the distribution of daily grid snapshots of the complete ThreeFold Grid stacks.
This section also covers the steps to deploy the Dashboard locally. This can be useful when testing the grid or contributing to the open-source project.
## Table of Contents
- [TFGrid Stacks](./tfgrid_stacks.md)
- [Full VM Grid Deployment](./grid_deployment_full_vm.md)
- [Grid Snapshots](./snapshots.md)
- [Deploy the Dashboard](./deploy_dashboard.md)


@ -4,7 +4,6 @@
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy All 3 Network Instances](#deploy-all-3-network-instances)
- [DNS Settings](#dns-settings)
- [DNS Verification](#dns-verification)
- [Prepare the VM](#prepare-the-vm)
@ -18,11 +17,9 @@
## Introduction
We present the steps to deploy an instance of the TFGrid on a full VM.
We present the steps to deploy a network instance of the TFGrid on a full VM.
For this guide, we will be deploying a mainnet instance. While the steps are similar for testnet and devnet, you will have to adjust your deployment depending on which network you use. Details are provided when needed.
We also provide information to deploy the 3 different network instances.
For this guide, we will be deploying a mainnet instance. While the steps are similar for testnet and devnet, you will have to adjust your deployment depending on which network you use.
## Prerequisites
@ -36,30 +33,17 @@ For this guide, you will need to deploy a full VM on the ThreeFold Grid with at
After deploying the full VM, take note of the IPv4 and IPv6 addresses to properly set the DNS records and then SSH into the VM.
It is recommended to deploy on a machine with modern hardware and NVME storage disk.
## Deploy All 3 Network Instances
To deploy the 3 network instances, mainnet, testnet and devnet, you need to follow the same process for each network on a separate machine, or at least on a different VM.
This means that you can either deploy each network instance on 3 different machines, or you can also deploy 3 different VMs on the same machine, e.g. a dedicated node. Then, each VM will run a different network instance. In this case, you will certainly need a machine with NVME storage disk and modern hardware.
## DNS Settings
You need to set an A record for the IPv4 address and an AAAA record for the IPv6 address with a wildcard subdomain.
The following table explicitly shows how to set the A and AAAA records for your domain for all 3 networks. Note that both `testnet` and `devnet` have a subdomain. The last two lines are for mainnet since no subdomain is needed in this case.
The following table explicitly shows how to set the A and AAAA records for your domain.
| Type | Host | Value |
| ---- | ---- | -------------- |
| A | \*.dev | <devnet_ipv4_address> |
| AAAA | \*.dev | <devnet_ipv6_address> |
| A | \*.test | <testnet_ipv4_address> |
| AAAA | \*.test | <testnet_ipv6_address> |
| A | \* | <mainnet_ipv4_address> |
| AAAA | \* | <mainnet_ipv6_address> |
| A | \* | <ipv4_address> |
| AAAA | \* | <ipv6_address> |
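As a sketch, the wildcard records above could look like this in BIND zone-file syntax; `example.com` and the addresses are placeholders for your own domain and the VM's IPs:

```
; hypothetical zone-file entries for a mainnet instance on example.com
*.example.com.   3600  IN  A     203.0.113.10
*.example.com.   3600  IN  AAAA  2001:db8::10
```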
As stated above, each network instance must be on its own VM or machine to work properly. Make sure to adjust the DNS records accordingly.
### DNS Verification
@ -67,17 +51,12 @@ You can use tools such as [DNSChecker](https://dnschecker.org/) or [dig](https:/
## Prepare the VM
We show the steps to prepare the VM to run the network instance.
If you are deploying on testnet or devnet, simply replace `mainnet` by the proper network in the following lines.
- Download the ThreeFold Tech `grid_deployment` repository
```
git clone https://github.com/threefoldtech/grid_deployment
cd grid_deployment/docker-compose/mainnet
```
- Generate a TFChain node key with `subkey`
- Note: If you deploy the 3 network instances, you can use the same node key for all 3 networks. But it is recommended to use 3 different keys to facilitate management.
```
echo .subkey_mainnet >> .gitignore
../subkey generate-node-key > .nodekey_mainnet
@ -101,7 +80,7 @@ If you are deploying on testnet or devnet, simply replace `mainnet` by the prope
- **GRID_PROXY_MNEMONIC**="word1 word2 ... word24"
- Write the seed phrase of an account on mainnet with at least 10 TFT in the wallet and a registered twin ID\*
> \*Note: If you've created an account using the ThreeFold Dashboard on a given network, the twin ID is automatically registered for this network.
> \*Note: If you've created an account using the ThreeFold Dashboard on mainnet, the twin ID is automatically registered.
## Set the Firewall
@ -131,18 +110,16 @@ This will take some time since you are downloading the whole mainnet grid snapsh
Once you've deployed the grid stack online, you can access the different grid services at the usual subdomains:
```
dashboard.example.com
metrics.example.com
tfchain.example.com
graphql.example.com
relay.example.com
gridproxy.example.com
activation.example.com
stats.example.com
dashboard.your.domain
metrics.your.domain
tfchain.your.domain
graphql.your.domain
relay.your.domain
gridproxy.your.domain
activation.your.domain
stats.your.domain
```
In the case of testnet and devnet, links will also have the given subdomain, such as `dashboard.test.example.com` for a `testnet` instance.
## Manual Commands
Once you've run the install script, you can manually deploy the grid stack with the following command:


@ -4,10 +4,6 @@
- [Introduction](#introduction)
- [Services](#services)
- [ThreeFold Public Snapshots](#threefold-public-snapshots)
- [Requirements](#requirements)
- [Files for Each Net](#files-for-each-net)
- [Deploy All 3 Network Instances](#deploy-all-3-network-instances)
- [Deploy a Snapshot Backend](#deploy-a-snapshot-backend)
- [Deploy the Services with Scripts](#deploy-the-services-with-scripts)
- [Create the Snapshots](#create-the-snapshots)
- [Start All the Services](#start-all-the-services)
@ -55,65 +51,6 @@ ThreeFold hosts all available snapshots at: [https://bknd.snapshot.grid.tf/](htt
rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/processor-devnet-latest.tar.gz .
```
## Requirements
To run your own snapshot backend, you need the following:
- Configuration
  - A working Docker environment
  - A node key for the TFChain public RPC node, generated with `subkey generate-node-key`
- Hardware
  - A minimum of 8 modern CPU cores
  - A minimum of 32 GB of RAM
  - A minimum of 1 TB of SSD storage (NVMe-based storage strongly preferred), ideally more, as the chain keeps growing in size
  - A minimum of 2 TB of HDD storage (to store and share the snapshots)
Devnet, QAnet and testnet can do with a SATA SSD setup. Mainnet requires NVMe-based SSDs due to the data size.
**Note**: If a deployment does not have enough disk input/output operations per second (IOPS) available, you might see the processor container restarting regularly and grid_proxy errors about processor database timeouts.
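On Linux, a rough self-check against these minimums can be scripted. This is only a sketch (thresholds taken from this section; storage and IOPS checks omitted):

```shell
# Rough hardware self-check against the minimums listed above (Linux only).
CORES=$(nproc)
MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
MEM_GB=$(( MEM_KB / 1024 / 1024 ))
[ "$CORES" -ge 8 ]   || echo "warning: only ${CORES} CPU cores (min 8 recommended)"
[ "$MEM_GB" -ge 32 ] || echo "warning: only ${MEM_GB} GB RAM (min 32 recommended)"
```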
### Files for Each Net
Each folder contains the required deployment files for its net. Make sure to work in the folder that has the name of the network you want to create snapshots for.
What each file does:
- `.env` - contains environment variables maintained by ThreeFold Tech
- `.gitignore` - lists files for Git to ignore once the repo has been cloned, so that working in this repo does not leave uncommitted changes
- `.secrets.env-example` - is where you have to add all your unique environment variables
- `create_snapshot.sh` - script to create a snapshot (used by cron)
- `docker-compose.yml` - has all the required Docker Compose configuration to deploy a working grid stack
- `open_logs_tmux.sh` - opens all the Docker logs in tmux sessions
- `typesBundle.json` - contains data for the GraphQL indexer and is not to be touched
- `startall.sh` - starts all the (already deployed) containers
- `stopall.sh` - stops all the (already deployed) containers
### Deploy All 3 Network Instances
To deploy the 3 network instances (mainnet, testnet and devnet), follow the same process for each network on a separate machine, or at least in a different VM.
This means you can either deploy each network instance on 3 different machines, or deploy 3 different VMs on the same machine, e.g. a dedicated node, with each VM running a different network instance. In the latter case, you will certainly need a machine with NVMe storage and modern hardware.
## Deploy a Snapshot Backend
Here's how to deploy a snapshot backend of a given network.
- Go to the corresponding network folder (e.g. `mainnet`).
```sh
cd mainnet
cp .secrets.env-example .secrets.env
```
- Open `.secrets.env` and add your generated subkey node-key.
- Check that all environment variables are correct.
```
docker compose --env-file .secrets.env --env-file .env config
```
- Deploy the snapshot backend. Depending on the available disk IOPS, it can take up to a week to sync from block 0.
```sh
docker compose --env-file .secrets.env --env-file .env up -d
```
## Deploy the Services with Scripts
You can deploy the 3 individual services using known methods such as [Docker](../../system_administrators/computer_it_basics/docker_basics.md). To facilitate the process, scripts are provided that run the necessary docker commands.
@ -150,7 +87,7 @@ You can set a cron job to execute a script running rsync to create the snapshots
```
- Here is an example of a cron job where we execute the script every day at 1 AM and send the logs to `/var/log/snapshots/snapshots-cron.log`.
```sh
0 1 * * * sh /root/code/grid_deployment/grid-snapshots/mainnet/create_snapshot.sh > /var/log/snapshots/snapshots-cron.log 2>&1
0 1 * * * sh /opt/snapshots/create-snapshot.sh > /var/log/snapshots/snapshots-cron.log 2>&1
```
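The actual script ships with the repo; purely as an illustration of the idea (paths and file naming here are assumptions, not the real script), a snapshot script boils down to:

```shell
# Illustrative sketch of a snapshot script (paths and names are assumptions).
SNAP_DIR="${SNAP_DIR:-/storage/snapshots}"
NET="mainnet"
STAMP=$(date -u +%Y-%m-%d)
SNAPSHOT="processor-${NET}-${STAMP}.tar.gz"
echo "would archive the processor data into ${SNAP_DIR}/${SNAPSHOT}"
# tar -czf "${SNAP_DIR}/${SNAPSHOT}" /path/to/processor-data
# ln -sf "${SNAPSHOT}" "${SNAP_DIR}/processor-${NET}-latest.tar.gz"
```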
### Start All the Services

View File

@ -1,28 +0,0 @@
<h1> Zero-OS </h1>
<h2> Table of Contents </h2>
- [Manual](./manual/manual.md)
- [Workload Types](./manual/workload_types.md)
- [Internal Modules](./internals/internals.md)
- [Identity](./internals/identity/index.md)
- [Node ID Generation](./internals/identity/identity.md)
- [Node Upgrade](./internals/identity/upgrade.md)
- [Node](./internals/node/index.md)
- [Storage](./internals/storage/index.md)
- [Network](./internals/network/index.md)
- [Introduction](./internals/network/introduction.md)
- [Definitions](./internals/network/definitions.md)
- [Mesh](./internals/network/mesh.md)
- [Setup](./internals/network/setup_farm_network.md)
- [Flist](./internals/flist/index.md)
- [Container](./internals/container/index.md)
- [VM](./internals/vmd/index.md)
- [Provision](./internals/provision/index.md)
- [Capacity](./internals/capacity.md)
- [Performance Monitor Package](./performance/performance.md)
- [Public IPs Validation Task](./performance/publicips.md)
- [CPUBenchmark](./performance/cpubench.md)
- [IPerf](./performance/iperf.md)
- [Health Check](./performance/healthcheck.md)
- [API](./manual/api.md)

View File

@ -66,7 +66,7 @@ After preparing the postgres database you can `go run` the main file in `cmds/pr
The server options
| Option | Description |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------- |
|---|---|
| -address | Server ip address (default `":443"`) |
| -ca | certificate authority used to generate certificate (default `"https://acme-staging-v02.api.letsencrypt.org/directory"`) |
| -cert-cache-dir | path to store generated certs in (default `"/tmp/certs"`) |

View File

@ -1,95 +0,0 @@
<h1> ThreeFold Chain </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Twins](#twins)
- [Farms](#farms)
- [Nodes](#nodes)
- [Node Contract](#node-contract)
- [Rent Contract](#rent-contract)
- [Name Contract](#name-contract)
- [Contract billing](#contract-billing)
- [Contract locking](#contract-locking)
- [Contract grace period](#contract-grace-period)
- [DAO](#dao)
- [Farming Policies](#farming-policies)
- [Node Connection price](#node-connection-price)
- [Node Certifiers](#node-certifiers)
***
## Introduction
ThreeFold Chain (TFChain) is the base layer for everything that interacts with the grid. Nodes, farms and users are registered on the chain. It plays the central role in achieving decentralised consensus between a user and a node to deploy a certain workload: a contract can be created on the chain that is essentially an agreement between a node and a user.
## Twins
A twin is the central identity object used for every entity that lives on the grid. A twin optionally has an IPv6 planetary network address, which can be used for communication between twins regardless of their location. A twin is coupled to a private/public keypair on chain. This keypair can hold TFT on TFChain.
## Farms
A farm must be created before a node can be booted. Every farm needs to have a unique name and is linked to the twin that creates the farm. Once a farm is created, a unique ID is generated. This ID can be provided to the boot image of a node.
## Nodes
When a node is booted for the first time, it registers itself on the chain and a unique identity is generated for this Node.
## Node Contract
A node contract is a contract between a user and a node to deploy a certain workload. The contract is specified as follows:
```
{
"contract_id": auto generated,
"node_id": unique id of the node,
"deployment_data": some additional deployment data
"deployment_hash": hash of the deployment definition signed by the user
"public_ips": number of public ips to attach to the deployment contract
}
```
We don't save the raw workload definition on the chain, only a hash of the definition. After the contract is created, the user must send the raw deployment to the node specified in the contract. The user can find where to send this data by looking up the node's twin and contacting that twin over the planetary network.
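To make the hash idea concrete — note this is only an illustration; the actual hash algorithm and encoding used by TFChain may differ:

```shell
# Illustration only: the chain stores a hash of the deployment, never the
# raw definition. TFChain's actual hash algorithm/encoding may differ.
DEPLOYMENT='{"vm":"example","cpu":2,"memory":4096}'
HASH=$(printf '%s' "$DEPLOYMENT" | sha256sum | cut -d' ' -f1)
echo "deployment_hash=${HASH}"
```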
## Rent Contract
A rent contract is also a contract between a user and a node, but instead of reserving part of the node's capacity, the full capacity is rented. Once a rent contract is created on a node by a user, only this user can deploy node contracts on this specific node. A discount of 50% is given if the user wishes to rent the full capacity of a node by creating a rent contract. All node contracts deployed on a node where a user has a rent contract are free of charge, except for the public IPs, which can be added on a node contract.
## Name Contract
A name contract is a contract that specifies a unique name to be used on the grid's web gateways. Once a name contract is created, this name can be used as an entrypoint for an application on the grid.
## Contract billing
Every contract is billed on the chain every hour. The amount due is deducted from the user's wallet every 24 hours, or when the user cancels the contract. The total amount accrued in those 24 hours is sent to the following destinations:
- 10% goes to the ThreeFold foundation
- 5% goes to a staking pool wallet (to be implemented in a later phase)
- 50% goes to a certified sales channel
- 35% of the TFT gets burned
See [pricing](../../../knowledge_base/cloud/pricing/pricing.md) for more information on how the cost for a contract is calculated.
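For a concrete (made-up) number, the split works out as:

```shell
# Split an example billed amount per the percentages above (amount is made up).
BILLED=10000   # example: 10000 units accrued over 24 hours
FOUNDATION=$(( BILLED * 10 / 100 ))
STAKING=$((    BILLED *  5 / 100 ))
SALES=$((      BILLED * 50 / 100 ))
BURNED=$((     BILLED * 35 / 100 ))
echo "foundation=${FOUNDATION} staking=${STAKING} sales=${SALES} burned=${BURNED}"
```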
## Contract locking
To avoid overloading the chain with transfer and similar events, the amount due for a contract is locked every hour; after 24 hours, the total is unlocked and deducted in one go. This lock is saved on a user's account; if the user has multiple contracts, the locked amounts are stacked.
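Numerically, the scheme amounts to stacking 24 hourly locks and settling them in a single deduction; a toy illustration:

```shell
# Toy illustration of hourly locking with a single 24-hour settlement.
HOURLY_COST=100   # made-up hourly contract cost
LOCKED=0
for hour in $(seq 1 24); do
  LOCKED=$(( LOCKED + HOURLY_COST ))   # one lock per billed hour
done
echo "single deduction after 24 hours: ${LOCKED}"
```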
## Contract grace period
When the owner of a contract runs out of funds in their wallet to pay for the deployment, the contract goes into a Grace Period state. The deployment, whatever it might be, will be inaccessible to the user during this period. When the wallet is funded with TFT again, the contract goes back to the normal operating state. If the grace period runs out (by default 2 weeks), the user's deployment and data will be deleted from the node.
## DAO
See [DAO](../../dashboard/tfchain/tf_dao.md) for more information on the DAO on TF Chain.
## Farming Policies
See [farming_policies](farming_policies.md) for more information on the farming policies on TF Chain.
## Node Connection price
A connection price is set for every new node that boots on the grid. This connection price influences the amount of TFT farmed in a period. The connection price set on a node is permanent. The DAO can propose to increase or decrease the connection price. At the time of writing, the connection price is set to $0.08. When the DAO proposes a new connection price and the vote passes, new nodes will register at the new connection price.
## Node Certifiers
Node certifiers are entities who are allowed to set a node's certification level to `Certified`. The DAO can propose to add or remove entities that can certify nodes. This is useful for allowing approved resellers of ThreeFold nodes to mark nodes as Certified. A certified node farms 25% more tokens than a `Diy` node.
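The 25% certification boost, in numbers (the reward amount is made up):

```shell
# A certified node farms 25% more than a DIY node (example reward is made up).
DIY_REWARD=1000
CERTIFIED_REWARD=$(( DIY_REWARD * 125 / 100 ))
echo "DIY=${DIY_REWARD} Certified=${CERTIFIED_REWARD}"
```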

View File

@ -1,90 +0,0 @@
# ThreeFold Developers
This section covers all practical tutorials on how to develop and build on the ThreeFold Grid.
For complementary information on the technology developed by ThreeFold, refer to the [Technology](../../knowledge_base/technology/technology_toc.md) section.
<h2> Table of Contents </h2>
- [Javascript Client](./javascript/grid3_javascript_readme.md)
- [Installation](./javascript/grid3_javascript_installation.md)
- [Loading Client](./javascript/grid3_javascript_loadclient.md)
- [Deploy a VM](./javascript/grid3_javascript_vm.md)
- [Capacity Planning](./javascript/grid3_javascript_capacity_planning.md)
- [Deploy Multiple VMs](./javascript/grid3_javascript_vms.md)
- [Deploy CapRover](./javascript/grid3_javascript_caprover.md)
- [Gateways](./javascript/grid3_javascript_vm_gateways.md)
- [Deploy a Kubernetes Cluster](./javascript/grid3_javascript_kubernetes.md)
- [Deploy a ZDB](./javascript/grid3_javascript_zdb.md)
- [Deploy ZDBs for QSFS](./javascript/grid3_javascript_qsfs_zdbs.md)
- [QSFS](./javascript/grid3_javascript_qsfs.md)
- [Key Value Store](./javascript/grid3_javascript_kvstore.md)
- [VM with Wireguard and Gateway](./javascript/grid3_wireguard_gateway.md)
- [GPU Support](./javascript/grid3_javascript_gpu_support.md)
- [Go Client](./go/grid3_go_readme.md)
- [Installation](./go/grid3_go_installation.md)
- [Loading Client](./go/grid3_go_load_client.md)
- [Deploy a VM](./go/grid3_go_vm.md)
- [Deploy Multiple VMs](./go/grid3_go_vms.md)
- [Deploy Gateways](./go/grid3_go_gateways.md)
- [Deploy Kubernetes](./go/grid3_go_kubernetes.md)
- [Deploy a QSFS](./go/grid3_go_qsfs.md)
- [GPU Support](./go/grid3_go_gpu.md)
- [TFCMD](./tfcmd/tfcmd.md)
- [Getting Started](./tfcmd/tfcmd_basics.md)
- [Deploy a VM](./tfcmd/tfcmd_vm.md)
- [Deploy Kubernetes](./tfcmd/tfcmd_kubernetes.md)
- [Deploy ZDB](./tfcmd/tfcmd_zdbs.md)
- [Gateway FQDN](./tfcmd/tfcmd_gateway_fqdn.md)
- [Gateway Name](./tfcmd/tfcmd_gateway_name.md)
- [Contracts](./tfcmd/tfcmd_contracts.md)
- [TFROBOT](./tfrobot/tfrobot.md)
- [Installation](./tfrobot/tfrobot_installation.md)
- [Configuration File](./tfrobot/tfrobot_config.md)
- [Deployment](./tfrobot/tfrobot_deploy.md)
- [Commands and Flags](./tfrobot/tfrobot_commands_flags.md)
- [Supported Configurations](./tfrobot/tfrobot_configurations.md)
- [ThreeFold Chain](./tfchain/tfchain.md)
- [Introduction](./tfchain/introduction.md)
- [Farming Policies](./tfchain/farming_policies.md)
- [External Service Contract](./tfchain/tfchain_external_service_contract.md)
- [Solution Provider](./tfchain/tfchain_solution_provider.md)
- [Grid Proxy](./proxy/proxy_readme.md)
- [Introducing Grid Proxy](./proxy/proxy.md)
- [Setup](./proxy/setup.md)
- [DB Testing](./proxy/db_testing.md)
- [Commands](./proxy/commands.md)
- [Contributions](./proxy/contributions.md)
- [Explorer](./proxy/explorer.md)
- [Database](./proxy/database.md)
- [Production](./proxy/production.md)
- [Release](./proxy/release.md)
- [Flist](./flist/flist.md)
- [ThreeFold Hub Intro](./flist/flist_hub/zos_hub.md)
- [Generate an API Token](./flist/flist_hub/api_token.md)
- [Convert Docker Image Into Flist](./flist/flist_hub/convert_docker_image.md)
- [Supported Flists](./flist/grid3_supported_flists.md)
- [Flist Case Studies](./flist/flist_case_studies/flist_case_studies.md)
- [Case Study: Debian 12](./flist/flist_case_studies/flist_debian_case_study.md)
- [Case Study: Nextcloud AIO](./flist/flist_case_studies/flist_nextcloud_case_study.md)
- [Internals](./internals/internals.md)
- [Reliable Message Bus (RMB)](./internals/rmb/rmb_toc.md)
- [Introduction to RMB](./internals/rmb/rmb_intro.md)
- [RMB Specs](./internals/rmb/rmb_specs.md)
- [RMB Peer](./internals/rmb/uml/peer.md)
- [RMB Relay](./internals/rmb/uml/relay.md)
- [ZOS](./internals/zos/index.md)
- [Manual](./internals/zos/manual/manual.md)
- [Workload Types](./internals/zos/manual/workload_types.md)
- [Internal Modules](./internals/zos/internals/internals.md)
- [Capacity](./internals/zos/internals/capacity.md)
- [Performance Monitor Package](./internals/zos/performance/performance.md)
- [Public IPs Validation Task](./internals/zos/performance/publicips.md)
- [CPUBenchmark](./internals/zos/performance/cpubench.md)
- [IPerf](./internals/zos/performance/iperf.md)
- [Health Check](./internals/zos/performance/healthcheck.md)
- [API](./internals/zos/manual/api.md)
- [Grid Deployment](./grid_deployment/grid_deployment.md)
- [TFGrid Stacks](./grid_deployment/tfgrid_stacks.md)
- [Full VM Grid Deployment](./grid_deployment/grid_deployment_full_vm.md)
- [Grid Snapshots](./grid_deployment/snapshots.md)


View File

@ -1,14 +0,0 @@
<h1> ThreeFold Documentation </h1>
This section contains all the practical information for farmers, developers and system administrators of the ThreeFold Grid.
For complementary information on ThreeFold, refer to the [ThreeFold Knowledge Base](../knowledge_base/knowledge_base.md).
<h2>Table of Contents</h2>
- [Dashboard](./dashboard/dashboard.md)
- [Developers](./developers/developers.md)
- [Farmers](./farmers/farmers.md)
- [System Administrators](./system_administrators/system_administrators.md)
- [ThreeFold Token](./threefold_token/threefold_token.md)
- [FAQ](./faq/faq.md)

Some files were not shown because too many files have changed in this diff.