manual developer section update

This commit is contained in:
mik-tf 2024-04-15 20:58:40 +00:00
parent 2c0f849383
commit ebee66d3cd
100 changed files with 7 additions and 4635 deletions

View File

@ -115,7 +115,7 @@
  - [Case Study: Debian 12](manual/documentation/developers/flist/flist_case_studies/flist_debian_case_study.md)
  - [Case Study: Nextcloud AIO](manual/documentation/developers/flist/flist_case_studies/flist_nextcloud_case_study.md)
  - [Internals](manual/documentation/developers/internals/internals.md)
- - [Reliable Message Bus (RMB)](manual/documentation/developers/internals/rmb/rmb_toc.md)
+ - [Reliable Message Bus - RMB](manual/documentation/developers/internals/rmb/rmb_toc.md)
  - [Introduction to RMB](manual/documentation/developers/internals/rmb/rmb_intro.md)
  - [RMB Specs](manual/documentation/developers/internals/rmb/rmb_specs.md)
  - [RMB Peer](manual/documentation/developers/internals/rmb/uml/peer.md)

View File

@ -2,8 +2,8 @@
  <h2> Table of Contents </h2>
- - [Zero-OS Hub](./flist_hub/zos_hub.md)
- - [Generate an API Token](./flist_hub/api_token.md)
+ - [Zero-OS Hub](manual:zos_hub.md)
+ - [Generate an API Token](api_token.md)
  - [Convert Docker Image Into Flist](./flist_hub/convert_docker_image.md)
  - [Supported Flists](./grid3_supported_flists.md)
  - [Flist Case Studies](./flist_case_studies/flist_case_studies.md)

View File

@ -11,8 +11,8 @@
  - [Upload your Existing Flist to Reduce Bandwidth](#upload-your-existing-flist-to-reduce-bandwidth)
  - [Authenticate via 3Bot](#authenticate-via-3bot)
  - [Get and Update Information Through the API](#get-and-update-information-through-the-api)
- - [Public API Endpoints (No Authentication Required)](#public-api-endpoints-no-authentication-required)
- - [Restricted API Endpoints (Authentication Required)](#restricted-api-endpoints-authentication-required)
+ - [Public API Endpoints - No Authentication Required](#public-api-endpoints---no-authentication-required)
+ - [Restricted API Endpoints - Authentication Required](#restricted-api-endpoints---authentication-required)
  - [API Request Templates and Examples](#api-request-templates-and-examples)
  ***
@ -71,7 +71,7 @@ If your `jwt` contains memberof, you can choose which user you want to use by sp
  See example below.
- ### Public API Endpoints (No Authentication Required)
+ ### Public API Endpoints - No Authentication Required
  - `/api/flist` (**GET**)
    - Returns a json array with all repository/flists found
  - `/api/repositories` (**GET**)
@ -84,7 +84,7 @@ See example below.
  - `/api/flist/<repository>/<flist>` (**GET**)
    - Returns json object with flist dumps (full file list)
- ### Restricted API Endpoints (Authentication Required)
+ ### Restricted API Endpoints - Authentication Required
  - `/api/flist/me` (**GET**)
    - Returns json object with some basic information about yourself (authenticated user)
  - `/api/flist/me/<flist>` (**GET**, **DELETE**)

View File

@ -1,110 +0,0 @@
<h1> Capacity Planning </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
It's almost the same as [deploying a single VM](../javascript/grid3_javascript_vm.md); the only difference is that you can automate the choice of the node to deploy on using code. We now support `FilterOptions` to filter nodes based on specific criteria, e.g. the node resources (CRU, SRU, HRU, MRU), being part of a specific farm, being located in a certain country, or being a gateway or not.
## Example
```ts
FilterOptions: {
  accessNodeV4?: boolean;
  accessNodeV6?: boolean;
  city?: string;
  country?: string;
  cru?: number;
  hru?: number;
  mru?: number;
  sru?: number;
  farmId?: number;
  farmName?: string;
  gateway?: boolean;
  publicIPs?: boolean;
  certified?: boolean;
  dedicated?: boolean;
  availableFor?: number;
  page?: number;
}
```
```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "dynamictest";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "dynamicDisk";
disk.size = 8;
disk.mountpoint = "/testdisk";
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 2, // GB
sru: 9,
country: "Belgium",
availableFor: grid3.twinId,
};
// create vm node Object
const vm = new MachineModel();
vm.name = "testvm";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
vm.disks = [disk];
vm.public_ip = false;
vm.planetary = true;
vm.cpu = 1;
vm.memory = 1024 * 2;
vm.rootfs_size = 0;
vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm.entrypoint = "/sbin/zinit init";
vm.env = {
SSH_KEY: config.ssh_key,
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "dynamicVMS";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "test deploying VMs via ts grid3 client";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
In this example, note the filter criteria used for the VM:
```typescript
const vmQueryOptions: FilterOptions = {
  cru: 1,
  mru: 2, // GB
  sru: 9,
  country: "Belgium",
  availableFor: grid3.twinId,
};
```
Here we want all the nodes with `CRU: 1`, `MRU: 2`, `SRU: 9`, located in `Belgium` and available for me (not rented by someone else).
> Note: some libraries allow reverse lookup of country codes by name, e.g. [i18n-iso-countries](https://www.npmjs.com/package/i18n-iso-countries); see the sketch after the snippet below.
Then, in the MachineModel, we specify the `node_id` to be the first value of our filtering:
```typescript
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
```
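If you start from an ISO country code instead of a name, a small sketch using that package could look like this (it assumes `esModuleInterop`/`resolveJsonModule` are enabled; `registerLocale` and `getName` are the package's documented API):
```typescript
import countries from "i18n-iso-countries";
import enLocale from "i18n-iso-countries/langs/en.json";

// register the English locale once, then resolve country names from ISO codes
countries.registerLocale(enLocale);
const countryName = countries.getName("BE", "en"); // "Belgium"

const serverOptions = { country: countryName, cru: 1, mru: 2, sru: 9 };
```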

View File

@ -1,232 +0,0 @@
<h1> Deploy CapRover </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Leader Node](#leader-node)
- [Code Example](#code-example)
- [Environment Variables](#environment-variables)
- [Worker Node](#worker-node)
- [Code Example](#code-example-1)
- [Environment Variables](#environment-variables-1)
- [Questions and Feedback](#questions-and-feedback)
***
## Introduction
In this section, we show how to deploy CapRover with the Javascript client.
This deployment is very similar to what we have in the section [Deploy a VM](./grid3_javascript_vm.md), but the environment variables are different.
## Leader Node
We present here a code example and the environment variables to deploy a CapRover Leader node.
For further details about the Leader node deployment, [read this documentation](https://github.com/freeflowuniverse/freeflow_caprover#a-leader-node-deploymentsetup).
### Code Example
```ts
import {
DiskModel,
FilterOptions,
MachineModel,
MachinesModel,
NetworkModel,
} from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const vmQueryOptions: FilterOptions = {
cru: 4,
mru: 4, // GB
sru: 10,
farmId: 1,
};
const CAPROVER_FLIST =
"https://hub.grid.tf/tf-official-apps/tf-caprover-latest.flist";
// create network Object
const n = new NetworkModel();
n.name = "wedtest";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "wedDisk";
disk.size = 10;
disk.mountpoint = "/var/lib/docker";
// create vm node Object
const vm = new MachineModel();
vm.name = "testvm";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
vm.disks = [disk];
vm.public_ip = true;
vm.planetary = false;
vm.cpu = 4;
vm.memory = 1024 * 4;
vm.rootfs_size = 0;
vm.flist = CAPROVER_FLIST;
vm.entrypoint = "/sbin/zinit init";
vm.env = {
PUBLIC_KEY: config.ssh_key,
SWM_NODE_MODE: "leader",
CAPROVER_ROOT_DOMAIN: "rafy.grid.tf", // update me
DEFAULT_PASSWORD: "captain42",
CAPTAIN_IMAGE_VERSION: "latest",
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "newVMS5";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "caprover leader machine/node";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
log(
`You can access Caprover via the browser using: https://captain.${vm.env.CAPROVER_ROOT_DOMAIN}`
);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
### Environment Variables
- PUBLIC_KEY: Your public SSH key, used to access the VM.
- SWM_NODE_MODE: CapRover node type, which must be `leader` as we are deploying a leader node.
- CAPROVER_ROOT_DOMAIN: The root domain that will be bound to the deployed VM.
- DEFAULT_PASSWORD: The default CapRover password you want to deploy with.
## Worker Node
We present here a code example and the environment variables to deploy a CapRover Worker node.
Note that before deploying the Worker node, you should check the following:
- Get the Leader node's public IP address.
- The Worker node should join the cluster from the UI by adding its public IP address and the private SSH key.
For further information, [read this documentation](https://github.com/freeflowuniverse/freeflow_caprover#step-4-access-the-captain-dashboard).
### Code Example
```ts
import {
DiskModel,
FilterOptions,
MachineModel,
MachinesModel,
NetworkModel,
} from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const vmQueryOptions: FilterOptions = {
cru: 4,
mru: 4, // GB
sru: 10,
farmId: 1,
};
const CAPROVER_FLIST =
"https://hub.grid.tf/tf-official-apps/tf-caprover-latest.flist";
// create network Object
const n = new NetworkModel();
n.name = "wedtest";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "wedDisk";
disk.size = 10;
disk.mountpoint = "/var/lib/docker";
// create vm node Object
const vm = new MachineModel();
vm.name = "capworker1";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
vm.disks = [disk];
vm.public_ip = true;
vm.planetary = false;
vm.cpu = 4;
vm.memory = 1024 * 4;
vm.rootfs_size = 0;
vm.flist = CAPROVER_FLIST;
vm.entrypoint = "/sbin/zinit init";
vm.env = {
// These env vars need to be changed based on the leader node.
PUBLIC_KEY: config.ssh_key,
SWM_NODE_MODE: "worker",
LEADER_PUBLIC_IP: "185.206.122.157",
CAPTAIN_IMAGE_VERSION: "latest",
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "newVMS6";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "caprover worker machine/node";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
### Environment Variables
The deployment of the Worker node is similar to the deployment of the Leader node, with the exception of the environment variables which differ slightly.
- PUBLIC_KEY: Your public SSH key, used to access the VM.
- SWM_NODE_MODE: CapRover node type, which must be `worker` as we are deploying a worker node.
- LEADER_PUBLIC_IP: The Leader node's public IP.
## Questions and Feedback
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

View File

@ -1,91 +0,0 @@
<h1> GPU Support and JavaScript </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
***
## Introduction
We present here a quick introduction to GPU support with JavaScript.
There are a couple of updates regarding finding nodes with GPUs, querying a node for GPU information, and deploying with GPU support.
This is an ongoing development and this section will be updated as new information comes in.
## Example
Here is an example script to deploy with GPU support:
```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "vmgpuNetwork";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "vmgpuDisk";
disk.size = 100;
disk.mountpoint = "/testdisk";
const vmQueryOptions: FilterOptions = {
cru: 8,
mru: 16, // GB
sru: 100,
availableFor: grid3.twinId,
hasGPU: true,
rentedBy: grid3.twinId,
};
// create vm node Object
const vm = new MachineModel();
vm.name = "vmgpu";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
vm.disks = [disk];
vm.public_ip = false;
vm.planetary = true;
vm.cpu = 8;
vm.memory = 1024 * 16;
vm.rootfs_size = 0;
vm.flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist";
vm.entrypoint = "/";
vm.env = {
SSH_KEY: config.ssh_key,
};
  vm.gpu = ["0000:0e:00.0/1002/744c"]; // the GPU card's ID; you can check the available GPUs from the dashboard
// create VMs Object
const vms = new MachinesModel();
vms.name = "vmgpu";
vms.network = n;
vms.machines = [vm];
vms.metadata = "";
vms.description = "test deploying VM with GPU via ts grid3 client";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// delete
const d = await grid3.machines.delete({ name: vms.name });
log(d);
await grid3.disconnect();
}
main();
```
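Besides `filterNodes` with `hasGPU`, you could query the Grid Proxy directly for GPU nodes. A rough sketch follows; the devnet proxy URL is real, but treat the `has_gpu` query parameter and the response shape as assumptions to verify against the Grid Proxy docs:
```ts
// Hypothetical sketch: list devnet nodes that report GPUs via the Grid Proxy.
// The `has_gpu` filter and response fields are assumptions, not confirmed API.
async function listGpuNodes(): Promise<void> {
  const res = await fetch("https://gridproxy.dev.grid.tf/nodes?has_gpu=true");
  const nodes = (await res.json()) as { nodeId: number }[];
  console.log(nodes.map(n => n.nodeId));
}
```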

View File

@ -1,124 +0,0 @@
<h1>Installation</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [External Package](#external-package)
- [Local Usage](#local-usage)
- [Getting Started](#getting-started)
- [Client Configuration](#client-configuration)
- [Generate the Documentation](#generate-the-documentation)
- [How to Run the Scripts](#how-to-run-the-scripts)
- [Reference API](#reference-api)
***
## Introduction
We present here the general steps required to install and use the ThreeFold Grid Client.
The [Grid Client](https://github.com/threefoldtech/tfgrid-sdk-ts/tree/development/packages/grid_client) is written using [TypeScript](https://www.typescriptlang.org/) to provide more convenience and type-checked code. It is used to deploy workloads like virtual machines, kubernetes clusters, quantum storage, and more.
## Prerequisites
To install the Grid Client, you will need the following on your machine:
- [Node.js](https://nodejs.org/en) ^18
- npm 8.2.0 or higher
- You may need to install libtool (**apt-get install libtool**)
> Note: [nvm](https://nvm.sh/) is the recommended way to install Node.js.
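For example, with nvm installed, you can set up the expected Node.js version as follows:
```bash
nvm install 18
nvm use 18
```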
To use the Grid Client, you will need the following on the TFGrid:
- A TFChain account
- TFT in your wallet
If it is not the case, please visit the [Get started section](../../system_administrators/getstarted/tfgrid3_getstarted.md).
## Installation
### External Package
To install the external package, simply run the following command:
```bash
yarn add @threefold/grid_client
```
> Note: For the **qa**, **test** and **main** networks, please use the @2.1.1 version.
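For example, to pin that version with yarn:
```bash
yarn add @threefold/grid_client@2.1.1
```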
### Local Usage
To use the Grid Client locally, clone the repository then install the Grid Client:
- Clone the repository
- ```bash
git clone https://github.com/threefoldtech/tfgrid-sdk-ts
```
- Install the Grid Client
- With yarn
- ```bash
yarn install
```
- With npm
- ```bash
npm install
```
> Note: In the directory **grid_client/scripts**, we provided a set of scripts to test the Grid Client.
## Getting Started
You will need to set the client configuration either by setting the json file manually (**scripts/config.json**) or by using the provided script (**scripts/client_loader.ts**).
### Client Configuration
Make sure to set the client configuration properly before using the Grid Client.
- **network**: The network environment (**dev**, **qa**, **test** or **main**).
- **mnemonic**: The 12 words mnemonics for your account.
- Learn how to create one [here](../../dashboard/wallet_connector.md).
- **storeSecret**: This is any word that will be used for encrypting/decrypting the keys on ThreeFold key-value store.
- **ssh_key**: The public SSH key set on your machine.
> Note: Only networks can't be isolated; all projects can see the same network.
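A minimal **scripts/config.json** matching these fields looks like this (values are placeholders; it mirrors the configuration shown in the Grid3 Client section):
```json
{
  "network": "dev",
  "mnemonic": "<your 12 words mnemonic>",
  "storeSecret": "secret",
  "ssh_key": "<your public SSH key>"
}
```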
## Generate the Documentation
The easiest way to test the installation is to run the following command with either yarn or npm to generate the Grid Client documentation:
* With yarn
* ```
yarn run serve-docs
```
* With npm
* ```
npm run serve-docs
```
> Note: You can also use the command **yarn run** to see all available options.
## How to Run the Scripts
You can explore the Grid Client by testing the different scripts proposed in **grid_client/scripts**.
- Update your customized deployments specs if needed
- Run using [ts-node](https://www.npmjs.com/ts-node)
- With yarn
- ```bash
yarn run ts-node --project tsconfig-node.json scripts/zdb.ts
```
- With npx
- ```bash
npx ts-node --project tsconfig-node.json scripts/zdb.ts
```
## Reference API
While this is still a work in progress, you can have a look [here](https://threefoldtech.github.io/tfgrid-sdk-ts/packages/grid_client/docs/api/index.html).

View File

@ -1,186 +0,0 @@
<h1> Deploying a Kubernetes Cluster </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
- [Detailed explanation](#detailed-explanation)
- [Building network](#building-network)
- [Building nodes](#building-nodes)
- [Building cluster](#building-cluster)
- [Deploying](#deploying)
- [Getting deployment information](#getting-deployment-information)
- [Deleting deployment](#deleting-deployment)
***
## Introduction
We show how to deploy a Kubernetes cluster on the TFGrid with the Javascript client.
## Prerequisites
- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared
## Example code
```ts
import { FilterOptions, K8SModel, KubernetesNodeModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "monNetwork";
n.ip_range = "10.238.0.0/16";
n.addAccess = true;
const masterQueryOptions: FilterOptions = {
cru: 2,
mru: 2, // GB
sru: 2,
availableFor: grid3.twinId,
farmId: 1,
};
const workerQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
// create k8s node Object
const master = new KubernetesNodeModel();
master.name = "master";
master.node_id = +(await grid3.capacity.filterNodes(masterQueryOptions))[0].nodeId;
master.cpu = 1;
master.memory = 1024;
master.rootfs_size = 0;
master.disk_size = 1;
master.public_ip = false;
master.planetary = true;
// create k8s node Object
const worker = new KubernetesNodeModel();
worker.name = "worker";
worker.node_id = +(await grid3.capacity.filterNodes(workerQueryOptions))[0].nodeId;
worker.cpu = 1;
worker.memory = 1024;
worker.rootfs_size = 0;
worker.disk_size = 1;
worker.public_ip = false;
worker.planetary = true;
// create k8s Object
const k = new K8SModel();
k.name = "testk8s";
k.secret = "secret";
k.network = n;
k.masters = [master];
k.workers = [worker];
k.metadata = "{'testk8s': true}";
k.description = "test deploying k8s via ts grid3 client";
k.ssh_key = config.ssh_key;
// deploy
const res = await grid3.k8s.deploy(k);
log(res);
// get the deployment
const l = await grid3.k8s.getObj(k.name);
log(l);
// // delete
// const d = await grid3.k8s.delete({ name: k.name });
// log(d);
await grid3.disconnect();
}
main();
```
## Detailed explanation
### Building network
```typescript
// create network Object
const n = new NetworkModel();
n.name = "monNetwork";
n.ip_range = "10.238.0.0/16";
n.addAccess = true; // as in the full example above: adds wireguard access to the network
```
### Building nodes
```typescript
// create k8s node Object
const master = new KubernetesNodeModel();
master.name = "master";
master.node_id = +(await grid3.capacity.filterNodes(masterQueryOptions))[0].nodeId;
master.cpu = 1;
master.memory = 1024;
master.rootfs_size = 0;
master.disk_size = 1;
master.public_ip = false;
master.planetary = true;
// create k8s node Object
const worker = new KubernetesNodeModel();
worker.name = "worker";
worker.node_id = +(await grid3.capacity.filterNodes(workerQueryOptions))[0].nodeId;
worker.cpu = 1;
worker.memory = 1024;
worker.rootfs_size = 0;
worker.disk_size = 1;
worker.public_ip = false;
worker.planetary = true;
```
### Building cluster
Here we specify the cluster project name, the cluster secret, the network model to be used, the master and worker nodes, and the SSH key used to access them:
```ts
// create k8s Object
const k = new K8SModel();
k.name = "testk8s";
k.secret = "secret";
k.network = n;
k.masters = [master];
k.workers = [worker];
k.metadata = "{'testk8s': true}";
k.description = "test deploying k8s via ts grid3 client";
k.ssh_key = config.ssh_key;
```
### Deploying
Use the `deploy` function to deploy the Kubernetes project:
```ts
const res = await grid3.k8s.deploy(k);
log(res);
```
### Getting deployment information
```ts
const l = await grid3.k8s.getObj(k.name);
log(l);
```
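Once the cluster is up, you can reach the master over the planetary (Yggdrasil) address found in the `getObj` output and check the cluster. The address below is a placeholder, and this assumes `kubectl` is available inside the cluster flist:
```bash
ssh root@<master_planetary_ip> kubectl get nodes
```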
### Deleting deployment
```ts
const d = await grid3.k8s.delete({ name: k.name });
log(d);
```

View File

@ -1,101 +0,0 @@
<h1>Using TFChain KVStore</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
- [setting values](#setting-values)
- [getting key](#getting-key)
- [listing keys](#listing-keys)
- [deleting key](#deleting-key)
***
## Introduction
As part of TFChain, we support a key-value store module that can be used for any value within a `2KB` range. In practice, it's used to save the user's configuration state, so it can be rebuilt on any machine, given the same mnemonics and the same secret are used.
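Since values must stay within that limit, it can help to guard writes. A minimal sketch follows; the helper is illustrative, not part of the client:
```ts
// Illustrative helper (not part of the grid client): reject values that
// would exceed the ~2 KB TFChain KVStore limit before calling db.set().
function assertFitsKvStore(value: string, limitBytes = 2048): void {
  const size = Buffer.byteLength(value, "utf8");
  if (size > limitBytes) {
    throw new Error(`value is ${size} bytes, exceeding the ${limitBytes}-byte limit`);
  }
}
```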
## Prerequisites
- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared
## Example code
```ts
import { getClient } from "./client_loader";
import { log } from "./utils";
/*
KVStore example usage:
*/
async function main() {
  // To create a grid3 client with KVStore, you need to specify the KVStore storage type in the client params:
const gridClient = await getClient();
//then every module will use the KVStore to save its configuration and restore it.
// also you can use it like this:
const db = gridClient.kvstore;
// set key
const key = "hamada";
const exampleObj = {
key1: "value1",
key2: 2,
};
// set key
await db.set({ key, value: JSON.stringify(exampleObj) });
// list all the keys
const keys = await db.list();
log(keys);
// get the key
const data = await db.get({ key });
log(JSON.parse(data));
// remove the key
await db.remove({ key });
await gridClient.disconnect();
}
main();
```
### setting values
`db.set` is used to set a key to any value serialized as a string:
```ts
await db.set({ key, value: JSON.stringify(exampleObj) });
```
### getting key
`db.get` is used to get a specific key
```ts
const data = await db.get({ key });
log(JSON.parse(data));
```
### listing keys
`db.list` is used to list all the keys.
```ts
const keys = await db.list();
log(keys);
```
### deleting key
`db.remove` is used to delete a specific key.
```ts
await db.remove({ key });
```

View File

@ -1,68 +0,0 @@
<h1> Grid3 Client</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Client Configurations](#client-configurations)
- [Creating/Initializing The Grid3 Client](#creatinginitializing-the-grid3-client)
- [What is `rmb-rs` | Reliable Message Bus --rust](#what-is-rmb-rs--reliable-message-bus---rust)
- [Grid3 Client Options](#grid3-client-options)
## Introduction
Grid3 Client is a client used for deploying workloads (VMs, ZDBs, k8s, etc.) on the TFGrid.
## Client Configurations
You have to set up your configuration file like this:
```json
{
"network": "dev",
"mnemonic": "<Your mnemonic>",
"storeSecret": "secret",
"ssh_key": ""
}
```
## Creating/Initializing The Grid3 Client
```ts
async function getClient(): Promise<GridClient> {
const gridClient = new GridClient({
network: "dev", // can be dev, qa, test, main, or custom
mnemonic: "<add your mnemonic here>",
});
await gridClient.connect();
return gridClient;
}
```
The grid client uses the `rmb-rs` tool to send requests to and from nodes.
## What is `rmb-rs` | Reliable Message Bus --rust
Reliable Message Bus is a secure communication panel that allows bots to communicate together in a chat-like way. It makes it very easy to host a service or a set of functions to be used by anyone, even if your service is running behind NAT.
Out of the box, RMB provides the following:
- Guaranteed authenticity of the messages: you are always sure that the received message is from whoever claims to be the sender.
- End-to-end encryption.
- Support for third-party hosted relays: anyone can host a relay and people can use it safely, since there is no way messages can be inspected while using e2e encryption. That's similar to Matrix home servers.
## Grid3 Client Options
- network: `dev` for devnet, `qa` for QA net, `test` for testnet, `main` for mainnet.
- mnemonic: used for signing the requests.
- storeSecret: used to encrypt data stored in the backend. It's any word that will be used for encrypting/decrypting the keys on the ThreeFold key-value store. If left empty, the Grid Client will use the mnemonic as the storeSecret.
- backendStorage: can be `auto`, which will automatically use the `filesystem backend` when running in a Node environment or the `localstorage backend` in a browser environment. You can also set it to `kvstore` to use the TFChain key-value store module.
- keypairType: defaults to `sr25519`; most likely you will never need to change it. `ed25519` is supported too.
For more details, check the [client options](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/docs/client_configuration.md).
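As a sketch, these options can be passed straight to the constructor. The enum names below follow the linked client configuration doc and should be treated as assumptions:
```ts
import { BackendStorageType, GridClient, KeypairType } from "@threefold/grid_client";

async function getConfiguredClient(): Promise<GridClient> {
  const client = new GridClient({
    network: "dev", // dev, qa, test or main
    mnemonic: "<add your mnemonic here>",
    storeSecret: "secret", // falls back to the mnemonic if omitted
    backendStorageType: BackendStorageType.auto, // or tfkvstore for the chain KV store
    keypairType: KeypairType.sr25519, // ed25519 also supported
  });
  await client.connect();
  return client;
}
```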
> Note: The choice of the node is completely up to the user at this point. They need to do the capacity planning. Check the [Node Finder](../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria.
Check the document on [capacity planning using code](../javascript/grid3_javascript_capacity_planning.md) if you want to automate it.
> Note: this feature is still experimental.

View File

@ -1,297 +0,0 @@
<h1>Deploying a VM with QSFS</h1>
<h2>Table of Contents</h2>
- [Prerequisites](#prerequisites)
- [Code Example](#code-example)
- [Detailed Explanation](#detailed-explanation)
- [Getting the Client](#getting-the-client)
- [Preparing QSFS](#preparing-qsfs)
- [Deploying a VM with QSFS](#deploying-a-vm-with-qsfs)
- [Getting the Deployment Information](#getting-the-deployment-information)
- [Deleting a Deployment](#deleting-a-deployment)
***
## Prerequisites
First, make sure that you have your [client](./grid3_javascript_loadclient.md) prepared.
## Code Example
```ts
import { FilterOptions, MachinesModel, QSFSZDBSModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const qsfs_name = "wed2710q1";
const machines_name = "wed2710t1";
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsQueryOptions: FilterOptions = {
hru: 6,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsNodes = [];
const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions);
if (allNodes.length >= 2) {
qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId);
} else {
throw Error("Couldn't find nodes for qsfs");
}
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
const qsfs: QSFSZDBSModel = {
name: qsfs_name,
count: 8,
node_ids: qsfsNodes,
password: "mypassword",
disk_size: 1,
description: "my qsfs test",
metadata: "",
};
const vms: MachinesModel = {
name: machines_name,
network: {
name: "wed2710n1",
ip_range: "10.201.0.0/16",
},
machines: [
{
name: "wed2710v1",
node_id: vmNode,
disks: [
{
name: "wed2710d1",
size: 1,
mountpoint: "/mydisk",
},
],
qsfs_disks: [
{
qsfs_zdbs_name: qsfs_name,
name: "wed2710d2",
minimal_shards: 2,
expected_shards: 4,
encryption_key: "hamada",
prefix: "hamada",
cache: 1,
mountpoint: "/myqsfsdisk",
},
],
public_ip: false,
public_ip6: false,
planetary: true,
cpu: 1,
memory: 1024,
rootfs_size: 0,
flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
entrypoint: "/sbin/zinit init",
env: {
SSH_KEY: config.ssh_key,
},
},
],
metadata: "{'testVMs': true}",
description: "test deploying VMs via ts grid3 client",
};
async function cancel(grid3) {
// delete
const d = await grid3.machines.delete({ name: machines_name });
log(d);
const r = await grid3.qsfs_zdbs.delete({ name: qsfs_name });
log(r);
}
//deploy qsfs
const res = await grid3.qsfs_zdbs.deploy(qsfs);
log(">>>>>>>>>>>>>>>QSFS backend has been created<<<<<<<<<<<<<<<");
log(res);
const vm_res = await grid3.machines.deploy(vms);
log(">>>>>>>>>>>>>>>vm has been created<<<<<<<<<<<<<<<");
log(vm_res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(">>>>>>>>>>>>>>>Deployment result<<<<<<<<<<<<<<<");
log(l);
// await cancel(grid3);
await grid3.disconnect();
}
main();
```
## Detailed Explanation
We present a detailed explanation of the example shown above.
### Getting the Client
```ts
const grid3 = await getClient();
```
### Preparing QSFS
```ts
const qsfs_name = "wed2710q1";
const machines_name = "wed2710t1";
```
We prepare here some names to use across the client for the QSFS and the machines projects.
```ts
const qsfsQueryOptions: FilterOptions = {
hru: 6,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsNodes = [];
const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions);
if (allNodes.length >= 2) {
qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId);
} else {
throw Error("Couldn't find nodes for qsfs");
}
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
const qsfs: QSFSZDBSModel = {
name: qsfs_name,
count: 8,
node_ids: qsfsNodes,
password: "mypassword",
disk_size: 1,
description: "my qsfs test",
metadata: "",
};
const res = await grid3.qsfs_zdbs.deploy(qsfs);
log(">>>>>>>>>>>>>>>QSFS backend has been created<<<<<<<<<<<<<<<");
log(res);
```
Here we deploy `8` ZDBs on the two nodes we filtered, with password `mypassword`, each of them having a disk size of `1GB`.
### Deploying a VM with QSFS
```ts
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
// deploy vms
const vms: MachinesModel = {
name: machines_name,
network: {
name: "wed2710n1",
ip_range: "10.201.0.0/16",
},
machines: [
{
name: "wed2710v1",
node_id: vmNode,
disks: [
{
name: "wed2710d1",
size: 1,
mountpoint: "/mydisk",
},
],
qsfs_disks: [
{
qsfs_zdbs_name: qsfs_name,
name: "wed2710d2",
minimal_shards: 2,
expected_shards: 4,
encryption_key: "hamada",
prefix: "hamada",
cache: 1,
mountpoint: "/myqsfsdisk",
},
],
public_ip: false,
public_ip6: false,
planetary: true,
cpu: 1,
memory: 1024,
rootfs_size: 0,
flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
entrypoint: "/sbin/zinit init",
env: {
SSH_KEY: config.ssh_key,
},
},
],
metadata: "{'testVMs': true}",
description: "test deploying VMs via ts grid3 client",
};
const vm_res = await grid3.machines.deploy(vms);
log(">>>>>>>>>>>>>>>vm has been created<<<<<<<<<<<<<<<");
log(vm_res);
```
So this deployment is almost similar to what we have in the [vm deployment section](./grid3_javascript_vm.md). We only have a new section: `qsfs_disks`.
```ts
qsfs_disks: [{
qsfs_zdbs_name: qsfs_name,
name: "wed2710d2",
minimal_shards: 2,
expected_shards: 4,
encryption_key: "hamada",
prefix: "hamada",
cache: 1,
mountpoint: "/myqsfsdisk"
}],
```
`qsfs_disks` is a list representing all of the QSFS disks used within that VM.
- `qsfs_zdbs_name`: the backend ZDBs we defined at the beginning
- `expected_shards`: how many ZDBs that QSFS should be working with
- `minimal_shards`: the minimum number of ZDBs needed to recover the data when disks are lost, e.g. due to failure; with `minimal_shards: 2` and `expected_shards: 4`, up to 2 of the 4 backends can be lost without losing data
- `mountpoint`: where it will be mounted on the VM, here `/myqsfsdisk`
### Getting the Deployment Information
```ts
const l = await grid3.machines.getObj(vms.name);
log(l);
```
### Deleting a Deployment
```ts
// delete
const d = await grid3.machines.delete({ name: machines_name });
log(d);
const r = await grid3.qsfs_zdbs.delete({ name: qsfs_name });
log(r);
```

View File

@ -1,142 +0,0 @@
<h1>Deploying ZDBs for QSFS</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
- [Detailed explanation](#detailed-explanation)
- [Getting the client](#getting-the-client)
- [Preparing the nodes](#preparing-the-nodes)
- [Preparing ZDBs](#preparing-zdbs)
- [Deploying the ZDBs](#deploying-the-zdbs)
- [Getting deployment information](#getting-deployment-information)
- [Deleting a deployment](#deleting-a-deployment)
***
## Introduction
We show how to deploy ZDBs for QSFS on the TFGrid with the Javascript client.
## Prerequisites
- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared
## Example code
````typescript
import { FilterOptions, QSFSZDBSModel } from "../src";
import { getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const qsfs_name = "zdbsQsfsDemo";
const qsfsQueryOptions: FilterOptions = {
hru: 8,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsNodes = [];
const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions);
if (allNodes.length >= 2) {
qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId);
} else {
throw Error("Couldn't find nodes for qsfs");
}
const qsfs: QSFSZDBSModel = {
name: qsfs_name,
count: 12,
node_ids: qsfsNodes,
password: "mypassword",
disk_size: 1,
description: "my zdbs test",
metadata: "",
};
const deploy_res = await grid3.qsfs_zdbs.deploy(qsfs);
log(deploy_res);
const zdbs_data = await grid3.qsfs_zdbs.get({ name: qsfs_name });
log(zdbs_data);
await grid3.disconnect();
}
main();
````
## Detailed explanation
### Getting the client
```typescript
const grid3 = await getClient();
```
### Preparing the nodes
We need to deploy the ZDBs on two different nodes, so we set up the filters here to retrieve the available nodes:
````typescript
const qsfsQueryOptions: FilterOptions = {
  hru: 8,
availableFor: grid3.twinId,
farmId: 1,
};
const qsfsNodes = [];
const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions);
if (allNodes.length >= 2) {
qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId);
} else {
throw Error("Couldn't find nodes for qsfs");
}
````
Now we have two node IDs in `qsfsNodes`.
### Preparing ZDBs
````typescript
const qsfs_name = "zdbsQsfsDemo";
````
We prepare here a name to use across the client for the QSFS ZDBs.
### Deploying the ZDBs
````typescript
const qsfs: QSFSZDBSModel = {
name: qsfs_name,
count: 12,
node_ids: qsfsNodes,
password: "mypassword",
disk_size: 1,
description: "my qsfs test",
metadata: "",
};
const deploy_res = await grid3.qsfs_zdbs.deploy(qsfs);
log(deploy_res);
````
Here we deploy `12` ZDBs on the nodes in `qsfsNodes` with password `mypassword`, each of them having a disk size of `1GB`; the client already adds 4 ZDBs for metadata.
### Getting deployment information
````typescript
const zdbs_data = await grid3.qsfs_zdbs.get({ name: qsfs_name });
log(zdbs_data);
````
### Deleting a deployment
````typescript
const delete_response = await grid3.qsfs_zdbs.delete({ name: qsfs_name });
log(delete_response);
````

View File

@ -1,24 +0,0 @@
<h1> Javascript Client </h1>
This section covers developing projects on top of the ThreeFold Grid using the JavaScript language.
JavaScript has a huge ecosystem and is a first-class citizen when it comes to blockchain technologies like Substrate, which is one of the reasons it became one of the very first supported languages on the grid.
Please make sure to check the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) before continuing.
<h2> Table of Contents </h2>
- [Installation](./grid3_javascript_installation.md)
- [Loading Client](./grid3_javascript_loadclient.md)
- [Deploy a VM](./grid3_javascript_vm.md)
- [Capacity Planning](./grid3_javascript_capacity_planning.md)
- [Deploy Multiple VMs](./grid3_javascript_vms.md)
- [Deploy CapRover](./grid3_javascript_caprover.md)
- [Gateways](./grid3_javascript_vm_gateways.md)
- [Deploy a Kubernetes Cluster](./grid3_javascript_kubernetes.md)
- [Deploy a ZDB](./grid3_javascript_zdb.md)
- [Deploy ZDBs for QSFS](./grid3_javascript_qsfs_zdbs.md)
- [QSFS](./grid3_javascript_qsfs.md)
- [Key Value Store](./grid3_javascript_kvstore.md)
- [VM with Wireguard and Gateway](./grid3_wireguard_gateway.md)
- [GPU Support](./grid3_javascript_gpu_support.md)

View File

@ -1,15 +0,0 @@
## How to run the scripts
- Set your grid3 client configuration in `scripts/client_loader.ts` or simply use the `config.json` file
- Update your customized deployment specs
- Run using [ts-node](https://www.npmjs.com/ts-node)
```bash
npx ts-node --project tsconfig-node.json scripts/zdb.ts
```
or
```bash
yarn run ts-node --project tsconfig-node.json scripts/zdb.ts
```

View File

@ -1,194 +0,0 @@
<h1> Deploying a VM </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example](#example)
- [Detailed Explanation](#detailed-explanation)
- [Building Network](#building-network)
- [Building the Disk Model](#building-the-disk-model)
- [Building the VM](#building-the-vm)
- [Building VMs Collection](#building-vms-collection)
- [Deployment](#deployment)
- [Getting Deployment Information](#getting-deployment-information)
- [Deleting a Deployment](#deleting-a-deployment)
***
## Introduction
We present information on how to deploy a VM with the Javascript client with concrete examples.
## Example
```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "dynamictest";
n.ip_range = "10.249.0.0/16";
// create disk Object
const disk = new DiskModel();
disk.name = "dynamicDisk";
disk.size = 8;
disk.mountpoint = "/testdisk";
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
country: "Belgium",
};
// create vm node Object
const vm = new MachineModel();
vm.name = "testvm";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
vm.disks = [disk];
vm.public_ip = false;
vm.planetary = true;
vm.cpu = 1;
vm.memory = 1024;
vm.rootfs_size = 0;
vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm.entrypoint = "/sbin/zinit init";
vm.env = {
SSH_KEY: config.ssh_key,
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "dynamicVMS";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "test deploying VMs via ts grid3 client";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
## Detailed Explanation
### Building Network
```ts
// create network Object
const n = new NetworkModel();
n.name = "dynamictest";
n.ip_range = "10.249.0.0/16";
```
Here we prepare the network model that is going to be used, specifying the name of our network and the IP range it will span.
### Building the Disk Model
```ts
// create disk Object
const disk = new DiskModel();
disk.name = "dynamicDisk";
disk.size = 8;
disk.mountpoint = "/testdisk";
```
Here we create the disk model, specifying its name, its size in GB, and where it will eventually be mounted.
### Building the VM
```ts
// create vm node Object
const vm = new MachineModel();
vm.name = "testvm";
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
vm.disks = [disk];
vm.public_ip = false;
vm.planetary = true;
vm.cpu = 1;
vm.memory = 1024;
vm.rootfs_size = 0;
vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm.entrypoint = "/sbin/zinit init";
vm.env = {
SSH_KEY: config.ssh_key,
};
```
Now we move to the VM model, which will be used to build our `zmachine` object.
We need to specify:
- name
- node_id: where it will get deployed
- disks: the disk model collection
- memory
- root filesystem size
- flist: the image it is going to start from. Check the [supported flists](../flist/grid3_supported_flists.md)
- entrypoint: the entrypoint command / script to execute
- env: the environment variables needed, e.g. the SSH keys used
- public ip: whether we want a public IP attached to the VM
- planetary: whether to enable the planetary network on the VM
### Building VMs Collection
```ts
// create VMs Object
const vms = new MachinesModel();
vms.name = "dynamicVMS";
vms.network = n;
vms.machines = [vm];
vms.metadata = "{'testVMs': true}";
vms.description = "test deploying VMs via ts grid3 client";
```
Here it's quite simple: we can add one or more VMs to the `machines` property to have them deployed as part of our project.
### Deployment
```ts
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
```
### Getting Deployment Information
You can do so based on the name you gave to the `vms` collection.
```ts
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
```
### Deleting a Deployment
```ts
// delete
const d = await grid3.machines.delete({ name: vms.name });
log(d);
```
In the underlying layer, we cancel the contracts that were created on the chain. As a result, all of the workloads tied to this project will get deleted.

View File

@ -1,189 +0,0 @@
<h1> Deploying a VM and exposing it over a Gateway Prefix </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example code](#example-code)
- [Detailed explanation](#detailed-explanation)
- [deploying](#deploying)
- [getting deployment object](#getting-deployment-object)
- [deletion](#deletion)
- [Deploying a VM and exposing it over a Gateway using a Full domain](#deploying-a-vm-and-exposing-it-over-a-gateway-using-a-full-domain)
- [Example code](#example-code-1)
- [Detailed explanation](#detailed-explanation-1)
- [deploying](#deploying-1)
- [get deployment object](#get-deployment-object)
- [deletion](#deletion-1)
***
## Introduction
After the [deployment of a VM](./grid3_javascript_vm.md), it's now time to expose it to the world.
## Example code
```ts
import { FilterOptions, GatewayNameModel } from "../src";
import { getClient } from "./client_loader";
import { log } from "./utils";
// read more about the gateway types in this doc: https://github.com/threefoldtech/zos/tree/main/docs/gateway
async function main() {
const grid3 = await getClient();
const gatewayQueryOptions: FilterOptions = {
gateway: true,
farmId: 1,
};
const gw = new GatewayNameModel();
gw.name = "test";
gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId;
gw.tls_passthrough = false;
  // the backends have to be in this format `http://ip:port` or `https://ip:port`, and the `ip` must be pingable from the node, so use the ygg ip or the public ip if available.
gw.backends = ["http://185.206.122.35:8000"];
// deploy
const res = await grid3.gateway.deploy_name(gw);
log(res);
// get the deployment
const l = await grid3.gateway.getObj(gw.name);
log(l);
// // delete
// const d = await grid3.gateway.delete_name({ name: gw.name });
// log(d);
grid3.disconnect();
}
main();
```
## Detailed explanation
```ts
const gw = new GatewayNameModel();
gw.name = "test";
gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId;
gw.tls_passthrough = false;
gw.backends = ["http://185.206.122.35:8000"];
```
- We created a gateway name model and gave it a `name` (that's why it's called GatewayName) of `test`, to be deployed on a gateway node, ending up with a domain like `test.gent01.devnet.grid.tf`.
- We created a proxy for the gateway to send the traffic coming to `test.gent01.devnet.grid.tf` to the backend `http://185.206.122.35:8000`. We set `tls_passthrough: false` to let the gateway terminate the TLS traffic; if you set it to `true`, your backend service needs to be able to do the TLS termination itself.
### deploying
```ts
// deploy
const res = await grid3.gateway.deploy_name(gw);
log(res);
```
This deploys the `GatewayName` on the grid.
### getting deployment object
```ts
const l = await grid3.gateway.getObj(gw.name);
log(l);
```
Getting the deployment information can be done using `getObj`.
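Once the name contract is deployed, you can verify the proxy from any machine. The domain below follows the example above; your actual domain depends on the gateway node:
```bash
curl https://test.gent01.devnet.grid.tf
```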
### deletion
```ts
const d = await grid3.gateway.delete_name({ name: gw.name });
log(d);
```
## Deploying a VM and exposing it over a Gateway using a Full domain
After the [deployment of a VM](./grid3_javascript_vm.md), it's now time to expose it to the world.
## Example code
```ts
import { FilterOptions, GatewayFQDNModel } from "../src";
import { getClient } from "./client_loader";
import { log } from "./utils";
// read more about the gateway types in this doc: https://github.com/threefoldtech/zos/tree/main/docs/gateway
async function main() {
const grid3 = await getClient();
const gatewayQueryOptions: FilterOptions = {
gateway: true,
farmId: 1,
};
const gw = new GatewayFQDNModel();
gw.name = "applyFQDN";
gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId;
gw.fqdn = "test.hamada.grid.tf";
gw.tls_passthrough = false;
  // the backends have to be in this format `http://ip:port` or `https://ip:port`, and the `ip` must be pingable from the node, so use the ygg ip or the public ip if available.
gw.backends = ["http://185.206.122.35:8000"];
// deploy
const res = await grid3.gateway.deploy_fqdn(gw);
log(res);
// get the deployment
const l = await grid3.gateway.getObj(gw.name);
log(l);
// // delete
// const d = await grid3.gateway.delete_fqdn({ name: gw.name });
// log(d);
grid3.disconnect();
}
main();
```
## Detailed explanation
```ts
const gw = new GatewayFQDNModel();
gw.name = "applyFQDN";
gw.node_id = 1;
gw.fqdn = "test.hamada.grid.tf";
gw.tls_passthrough = false;
gw.backends = ["my yggdrasil IP"];
```
- We created a `GatewayFQDNModel` and gave it the name `applyFQDN`, to be deployed on gateway node `1`, and specified the fully qualified domain name `fqdn` as a domain we own: `test.hamada.grid.tf`.
- We created a record at our name provider pointing `test.hamada.grid.tf` to the IP of gateway node `1`.
- We specified that the backend would be a Yggdrasil IP, so once this is deployed, requests to `test.hamada.grid.tf` reach the gateway server and from there the traffic goes to the backend.
### deploying
```ts
// deploy
const res = await grid3.gateway.deploy_fqdn(gw);
log(res);
```
This deploys the `GatewayFQDN` on the grid.
### get deployment object
```ts
const l = await grid3.gateway.getObj(gw.name);
log(l);
```
Getting the deployment information can be done using `getObj`.
### deletion
```ts
const d = await grid3.gateway.delete_fqdn({ name: gw.name });
log(d);
```

View File

@ -1,108 +0,0 @@
<h1> Deploying multiple VMs</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Example code](#example-code)
***
## Introduction
It is possible to deploy multiple VMs with the Javascript client.
## Example code
```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
// create network Object
const n = new NetworkModel();
n.name = "monNetwork";
n.ip_range = "10.238.0.0/16";
// create disk Object
const disk1 = new DiskModel();
disk1.name = "newDisk1";
disk1.size = 1;
disk1.mountpoint = "/newDisk1";
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 1, // GB
sru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
// create vm node Object
const vm1 = new MachineModel();
vm1.name = "testvm1";
vm1.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
vm1.disks = [disk1];
vm1.public_ip = false;
vm1.planetary = true;
vm1.cpu = 1;
vm1.memory = 1024;
vm1.rootfs_size = 0;
vm1.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm1.entrypoint = "/sbin/zinit init";
vm1.env = {
SSH_KEY: config.ssh_key,
};
// create disk Object
const disk2 = new DiskModel();
disk2.name = "newDisk2";
disk2.size = 1;
disk2.mountpoint = "/newDisk2";
// create another vm node Object
const vm2 = new MachineModel();
vm2.name = "testvm2";
vm2.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[1].nodeId;
vm2.disks = [disk2];
vm2.public_ip = false;
vm2.planetary = true;
vm2.cpu = 1;
vm2.memory = 1024;
vm2.rootfs_size = 0;
vm2.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
vm2.entrypoint = "/sbin/zinit init";
vm2.env = {
SSH_KEY: config.ssh_key,
};
// create VMs Object
const vms = new MachinesModel();
vms.name = "monVMS";
vms.network = n;
vms.machines = [vm1, vm2];
vms.metadata = "{'testVMs': true}";
vms.description = "test deploying VMs via ts grid3 client";
// deploy vms
const res = await grid3.machines.deploy(vms);
log(res);
// get the deployment
const l = await grid3.machines.getObj(vms.name);
log(l);
// // delete
// const d = await grid3.machines.delete({ name: vms.name });
// log(d);
await grid3.disconnect();
}
main();
```
It's similar to the previous section on [deploying a single VM](../javascript/grid3_javascript_vm.md); it just adds more VM objects to the `vms` collection.

View File

@ -1,143 +0,0 @@
<h1>Deploying ZDB</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
- [Detailed explanation](#detailed-explanation)
- [Getting the client](#getting-the-client)
- [Building the model](#building-the-model)
- [preparing ZDBs collection](#preparing-zdbs-collection)
- [Deployment](#deployment)
- [Getting Deployment information](#getting-deployment-information)
- [Deleting a deployment](#deleting-a-deployment)
***
## Introduction
We show how to deploy ZDB on the TFGrid with the Javascript client.
## Prerequisites
- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared
## Example code
```ts
import { FilterOptions, ZDBModel, ZdbModes, ZDBSModel } from "../src";
import { getClient } from "./client_loader";
import { log } from "./utils";
async function main() {
const grid3 = await getClient();
const zdbQueryOptions: FilterOptions = {
sru: 1,
hru: 1,
availableFor: grid3.twinId,
farmId: 1,
};
// create zdb object
const zdb = new ZDBModel();
zdb.name = "hamada";
zdb.node_id = +(await grid3.capacity.filterNodes(zdbQueryOptions))[0].nodeId;
zdb.mode = ZdbModes.user;
zdb.disk_size = 1;
zdb.publicNamespace = false;
zdb.password = "testzdb";
// create zdbs object
const zdbs = new ZDBSModel();
zdbs.name = "tttzdbs";
zdbs.zdbs = [zdb];
zdbs.metadata = '{"test": "test"}';
// deploy zdb
const res = await grid3.zdbs.deploy(zdbs);
log(res);
// get the deployment
const l = await grid3.zdbs.getObj(zdbs.name);
log(l);
// // delete
// const d = await grid3.zdbs.delete({ name: zdbs.name });
// log(d);
await grid3.disconnect();
}
main();
```
## Detailed explanation
### Getting the client
```ts
const grid3 = await getClient();
```
### Building the model
```ts
// create zdb object
const zdb = new ZDBModel();
zdb.name = "hamada";
zdb.node_id = +(await grid3.capacity.filterNodes(zdbQueryOptions))[0].nodeId;
zdb.mode = ZdbModes.user;
zdb.disk_size = 1;
zdb.publicNamespace = false;
zdb.password = "testzdb";
```
Here we define a `ZDB` model and set the relevant properties, e.g.:
- name
- node_id: where to deploy it
- mode: `user` or `seq`
- disk_size: disk size in GB
- publicNamespace: a public namespace can be read-only if a password is set
- password: the namespace password
### preparing ZDBs collection
```ts
// create zdbs object
const zdbs = new ZDBSModel();
zdbs.name = "tttzdbs";
zdbs.zdbs = [zdb];
zdbs.metadata = '{"test": "test"}';
```
You can attach multiple ZDBs to the collection and send it for deployment.
### Deployment
```ts
const res = await grid3.zdbs.deploy(zdbs);
log(res);
```
### Getting Deployment information
`getObj` gives detailed information about the workload.
```ts
// get the deployment
const l = await grid3.zdbs.getObj(zdbs.name);
log(l);
```
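Since 0-db speaks the Redis protocol, you can reach a deployed namespace with `redis-cli`, using the IP, port and namespace returned by `getObj`. A sketch with placeholder values (`SELECT namespace password` follows the 0-db command set):
```bash
redis-cli -h <zdb_ip> -p <zdb_port>
# then, inside the redis-cli prompt:
SELECT <namespace> <password>
```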
### Deleting a deployment
The `.delete` method cancels the relevant contracts related to that ZDB deployment.
```ts
// delete
const d = await grid3.zdbs.delete({ name: zdbs.name });
log(d);
```

View File

@ -1,302 +0,0 @@
<h1> Deploying a VM with Wireguard and Gateway </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Client Configurations](#client-configurations)
- [Code Example](#code-example)
- [Detailed Explanation](#detailed-explanation)
- [Get the Client](#get-the-client)
- [Get the Nodes](#get-the-nodes)
- [Deploy the VM](#deploy-the-vm)
- [Deploy the Gateway](#deploy-the-gateway)
- [Get the Deployments Information](#get-the-deployments-information)
- [Disconnect the Client](#disconnect-the-client)
- [Delete the Deployments](#delete-the-deployments)
- [Conclusion](#conclusion)
***
## Introduction
We present here the relevant information when it comes to deploying a virtual machine with Wireguard and a gateway.
## Client Configurations
To configure the client, have a look at [this section](./grid3_javascript_loadclient.md).
## Code Example
```ts
import { FilterOptions, GatewayNameModel, GridClient, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";
function createNetworkModel(gwNode: number, name: string): NetworkModel {
return {
name,
addAccess: true,
accessNodeId: gwNode,
ip_range: "10.238.0.0/16",
} as NetworkModel;
}
function createMachineModel(node: number) {
return {
name: "testvm1",
node_id: node,
public_ip: false,
planetary: true,
cpu: 1,
memory: 1024 * 2,
rootfs_size: 0,
disks: [],
flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist",
entrypoint: "/usr/bin/python3 -m http.server --bind ::",
env: {
SSH_KEY: config.ssh_key,
},
} as MachineModel;
}
function createMachinesModel(vm: MachineModel, network: NetworkModel): MachinesModel {
return {
name: "newVMs",
network,
machines: [vm],
metadata: "",
description: "test deploying VMs with wireguard via ts grid3 client",
} as MachinesModel;
}
function createGwModel(node_id: number, ip: string, networkName: string, name: string, port: number) {
return {
name,
node_id,
tls_passthrough: false,
backends: [`http://${ip}:${port}`],
network: networkName,
} as GatewayNameModel;
}
async function main() {
const grid3 = await getClient();
const gwNode = +(await grid3.capacity.filterNodes({ gateway: true }))[0].nodeId;
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 2, // GB
availableFor: grid3.twinId,
farmId: 1,
};
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
const network = createNetworkModel(gwNode, "monNetwork");
const vm = createMachineModel(vmNode);
const machines = createMachinesModel(vm, network);
log(`Deploying vm on node: ${vmNode}, with network node: ${gwNode}`);
// deploy the vm
const vmResult = await grid3.machines.deploy(machines);
log(vmResult);
const deployedVm = await grid3.machines.getObj(machines.name);
log("+++ deployed vm +++");
log(deployedVm);
// deploy the gateway
const vmPrivateIP = (deployedVm as { interfaces: { ip: string }[] }[])[0].interfaces[0].ip;
const gateway = createGwModel(gwNode, vmPrivateIP, network.name, "pyserver", 8000);
log(`deploying gateway ${network.name} on node ${gwNode}`);
const gatewayResult = await grid3.gateway.deploy_name(gateway);
log(gatewayResult);
log("+++ Deployed gateway +++");
const deployedGw = await grid3.gateway.getObj(gateway.name);
log(deployedGw);
await grid3.disconnect();
}
main();
```
## Detailed Explanation
This code deploys a name gateway with the VM's WireGuard private IP as the backend, which allows accessing a server inside the VM through the gateway using the private (WireGuard) network.
This will be done through the following steps:
### Get the Client
```ts
const grid3 = await getClient();
```
### Get the Nodes
Determine the nodes to deploy the VM, network and gateway on.
- Gateway and network access node
```ts
const gwNode = +(await grid3.capacity.filterNodes({ gateway: true }))[0].nodeId;
```
Using the `filterNodes` method, we get the first gateway node ID; we will deploy the gateway on this node and use it as our network access node.
> The gateway node must be the same as the network access node.
- VM node
We need to set the filter options first. For this example we will deploy the VM with 1 CPU and 2 GB of memory.
Now we create a `FilterOptions` object with those specs and get the first node ID of the result.
```ts
const vmQueryOptions: FilterOptions = {
cru: 1,
mru: 2, // GB
availableFor: grid3.twinId,
farmId: 1,
};
const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
```
### Deploy the VM
We need to create the network and machine models, then deploy the VM:
```ts
const network = createNetworkModel(gwNode, "monNetwork");
const vm = createMachineModel(vmNode);
const machines = createMachinesModel(vm, network);
log(`Deploying vm on node: ${vmNode}, with network node: ${gwNode}`);
// deploy the vm
const vmResult = await grid3.machines.deploy(machines);
log(vmResult);
```
- `createNetworkModel`:
we create a network, set the node ID to `gwNode` and the name to `monNetwork`, and inside the function we set `addAccess: true` to add __wireguard__ access.
- `createMachineModel` and `createMachinesModel` are similar to the previous section on [deploying a single VM](../javascript/grid3_javascript_vm.md), but here we pass the created `NetworkModel` to the machines model, and the entrypoint runs a simple python server.
### Deploy the Gateway
Now that we have our VM deployed with its network, we need to create the gateway on the same node and the same network, pointing to the VM's private IP address.
- Get the VM's private IP address:
```ts
const vmPrivateIP = (deployedVm as { interfaces: { ip: string }[] }[])[0].interfaces[0].ip;
```
- Create the Gateway name model:
```ts
const gateway = createGwModel(gwNode, vmPrivateIP, network.name, "pyserver", 8000);
```
This will create a `GatewayNameModel` with the following properties:
- `name`: the subdomain name
- `node_id`: the gateway node ID
- `tls_passthrough: false`
- ``backends: [`http://${ip}:${port}`]``: the private IP address and port number of our machine
- `network: networkName`: the name of the network we created earlier
### Get the Deployments Information
```ts
const deployedVm = await grid3.machines.getObj(machines.name);
log("+++ deployed vm +++");
log(deployedVm);
log("+++ Deployed gateway +++");
const deployedGw = await grid3.gateway.getObj(gateway.name);
log(deployedGw);
```
- `deployedVm`: an array of one object containing the details of the VM deployment.
```ts
[
{
version: 0,
contractId: 30658,
nodeId: 11,
name: 'testvm1',
created: 1686225126,
status: 'ok',
message: '',
flist: 'https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist',
publicIP: null,
planetary: '302:9e63:7d43:b742:3582:a831:cd41:3f19',
interfaces: [ { network: 'monNetwork', ip: '10.238.2.2' } ],
capacity: { cpu: 1, memory: 2048 },
mounts: [],
env: {
SSH_KEY: 'ssh'
},
entrypoint: '/usr/bin/python3 -m http.server --bind ::',
metadata: '{"type":"vm","name":"newVMs","projectName":""}',
description: 'test deploying VMs with wireguard via ts grid3 client',
rootfs_size: 0,
corex: false
}
]
```
- `deployedGw`: an array of one object containing the details of the name gateway.
```ts
[
{
version: 0,
contractId: 30659,
name: 'pyserver1',
created: 1686225139,
status: 'ok',
message: '',
type: 'gateway-name-proxy',
domain: 'pyserver1.gent02.dev.grid.tf',
tls_passthrough: false,
backends: [ 'http://10.238.2.2:8000' ],
metadata: '{"type":"gateway","name":"pyserver1","projectName":""}',
description: ''
}
]
```
Now we can access the VM using the `domain` returned in the object.
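For example, since the VM's entrypoint runs a Python HTTP server on port 8000, the gateway can be checked end to end (the domain below is the example value returned above; yours will differ):
```bash
# Fetch the page served by the VM's python http.server through the gateway.
curl https://pyserver1.gent02.dev.grid.tf
```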
### Disconnect the Client
Finally, we disconnect the client using `await grid3.disconnect();`
### Delete the Deployments
If we want to delete the deployments we can just do this:
```ts
const deletedMachines = await grid3.machines.delete({ name: machines.name});
log(deletedMachines);
const deletedGW = await grid3.gateway.delete_name({ name: gateway.name});
log(deletedGW);
```
## Conclusion
This section presented a detailed description of how to create a virtual machine with a private IP using Wireguard and use it as a backend for a name gateway.
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.

View File

@ -1,11 +0,0 @@
- [Installation](@grid3_javascript_installation)
- [Loading client](@grid3_javascript_loadclient)
- [Deploy a VM](@grid3_javascript_vm)
- [Capacity planning](@grid3_javascript_capacity_planning)
- [Deploy multiple VMs](@grid3_javascript_vms)
- [Deploy CapRover](@grid3_javascript_caprover)
- [Gateways](@grid3_javascript_vm_gateways)
- [Deploy a Kubernetes cluster](@grid3_javascript_kubernetes)
- [Deploy a ZDB](@grid3_javascript_zdb)
- [QSFS](@grid3_javascript_qsfs)
- [Key Value Store](@grid3_javascript_kvstore)

View File

@ -1,127 +0,0 @@
<h1>Commands</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Work on Docs](#work-on-docs)
- [To start the GridProxy server](#to-start-the-gridproxy-server)
- [Run tests](#run-tests)
***
## Introduction
The Makefile makes it easier to run most of the frequent commands needed to work on the project.
## Work on Docs
We use [swaggo/swag](https://github.com/swaggo/swag) to generate the swagger docs based on the annotations inside the code.
- Install the swag executable binary:
```bash
go install github.com/swaggo/swag/cmd/swag@latest
```
- Now, if you check the binary directory inside the Go directory, you will find the executable file:
```bash
ls $(go env GOPATH)/bin
```
- To run swag you can either use the full path `$(go env GOPATH)/bin/swag` or add the Go binary directory to your `$PATH`:
```bash
export PATH=$PATH:$(go env GOPATH)/bin
```
- Use swag to format code comments:
```bash
swag fmt
```
- Update the docs:
```bash
swag init
```
- To parse external types from vendor:
```bash
swag init --parseVendor
```
- For the full docs generation command:
```bash
make docs
```
## To start the GridProxy server
After preparing the postgres database, you can `go run` the main file in `cmds/proxy_server/main.go`, which is responsible for starting all the needed servers/clients.
The server options:
| Option | Description |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------- |
| -address | Server ip address (default `":443"`) |
| -ca | certificate authority used to generate certificate (default `"https://acme-staging-v02.api.letsencrypt.org/directory"`) |
| -cert-cache-dir | path to store generated certs in (default `"/tmp/certs"`) |
| -domain | domain on which the server will be served |
| -email | email address to generate certificate with |
| -log-level | log level `[debug\|info\|warn\|error\|fatal\|panic]` (default `"info"`) |
| -no-cert | start the server without certificate |
| -postgres-db | postgres database |
| -postgres-host | postgres host |
| -postgres-password | postgres password |
| -postgres-port | postgres port (default 5432) |
| -postgres-user | postgres username |
| -tfchain-url       | TFChain URL (default `"wss://tfchain.dev.grid.tf/ws"`)                                                                   |
| -relay-url         | RMB relay URL (default `"wss://relay.dev.grid.tf"`)                                                                      |
| -mnemonics | Dummy user mnemonics for relay calls |
| -v | shows the package version |
For a full server setup:
```bash
make restart
```
## Run tests
There are two types of tests in the project:
- Unit Tests
- Found in `pkg/client/*_test.go`
- Run with `go test -v ./pkg/client`
- Integration Tests
- Found in `tests/queries/`
- Run with:
```bash
go test -v \
--seed 13 \
--postgres-host <postgres-ip> \
--postgres-db tfgrid-graphql \
--postgres-password postgres \
--postgres-user postgres \
--endpoint <server-ip> \
--mnemonics <insert user mnemonics>
```
- Or, to run a specific test, append the previous command with:
```bash
-run <TestName>
```
You can find the TestName in the `tests/queries/*_test.go` files.
To run all the tests, use:
```bash
make test-all
```

View File

@ -1,55 +0,0 @@
<h1>Contributions Guide</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Project structure](#project-structure)
- [Internal](#internal)
- [Pkg](#pkg)
- [Writing tests](#writing-tests)
***
## Introduction
We propose a quick guide to learn how to contribute.
## Project structure
The main structure of the code base is as follows:
- `charts`: helm chart
- `cmds`: includes the project Golang entrypoints
- `docs`: project documentation
- `internal`: contains the explorer API logic and the cert manager implementation; this is where most of the feature work will be done
- `pkg`: contains client implementation and shared libs
- `tests`: integration tests
- `tools`: DB tools to prepare the Postgres DB for testing and development
- `rootfs`: ZOS root endpoint that will be mounted in the docker image
### Internal
- `explorer`: contains the explorer server logic:
- `db`: the db connection and operations
- `mw`: defines the generic action mount that will be used as an HTTP handler
- `certmanager`: logic to ensure certificates are available and up to date
`server.go` includes the logic for all the API operations.
### Pkg
- `client`: client implementation
- `types`: defines all the API objects
## Writing tests
Adding a new endpoint should be accompanied by a corresponding test. Ideally every change or bug fix should include a test to ensure the new behavior/fix works as intended.
Since these are integration tests, you need to first make sure that your local db is seeded with the necessary data. See the tools [doc](./db_testing.md) for more information about how to prepare your db.
Testing tools offer two clients that are the basis of most tests:
- `local`: this client connects to the local db
- `proxy client`: this client connects to the running local instance
You need to start an instance of the server before running the tests. Check [here](./commands.md) for how to start.

View File

@ -1,21 +0,0 @@
<h1>Database</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Max Open Connections](#max-open-connections)
***
## Introduction
The grid proxy has access to a postgres database containing information about the tfgrid, specifically information about grid nodes, farms, twins, and contracts.\
The database is filled/updated by this [indexer](https://github.com/threefoldtech/tfchain_graphql).
The grid proxy mainly retrieves information from the db, with a few modifications for efficient retrieval (e.g. adding indices, caching node GPUs, etc.).
## Max Open Connections
The postgres database can handle 100 concurrent open connections (the default value set by postgres). This number can be increased, depending on the infrastructure, by modifying it in the postgres.conf file where the db is deployed, or by executing the query `ALTER SYSTEM SET max_connections=size-of-connection`, but this requires a db restart to take effect.\
The explorer creates a connection pool to the postgres db, with the max open pool connections set to a specific number (currently 80).\
It's important to distinguish between the database max connections and the max open pool connections: if the pool had no constraints, it would try to open as many connections as it wanted, without any notion of the maximum number of connections the database accepts. It would then be the database's responsibility to accept or deny the connections.\
This is why the max number of open pool connections is set to 80: it's below the max connections the database can handle (100), and it leaves room for other actors outside of the explorer to open connections with the database.
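As a minimal sketch (the value and the service name are illustrative, not recommendations), raising the limit could look like this:
```bash
# Raise the server-wide connection limit (illustrative value), then restart postgres for it to take effect.
psql -h 127.0.0.1 -U postgres -c "ALTER SYSTEM SET max_connections = 200;"
sudo systemctl restart postgresql   # service name depends on your setup
```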

View File

@ -1,45 +0,0 @@
<h1>DB for testing</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Run postgresql container](#run-postgresql-container)
- [Create the DB](#create-the-db)
- [Method 1: Generate a db with relevant schema using the db helper tool:](#method-1-generate-a-db-with-relevant-schema-using-the-db-helper-tool)
- [Method 2: Fill the DB from a Production db dump file, for example if you have `dump.sql` file, you can run:](#method-2-fill-the-db-from-a-production-db-dump-file-for-example-if-you-have-dumpsql-file-you-can-run)
***
## Introduction
We show how to use a database for testing.
## Run postgresql container
```bash
docker run --rm --name postgres \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_DB=tfgrid-graphql \
-p 5432:5432 -d postgres
```
## Create the DB
You can either generate a db with the relevant schema to quickly test things locally, or load a previously taken DB dump file:
### Method 1: Generate a db with relevant schema using the db helper tool:
```bash
cd tools/db/ && go run . \
--postgres-host 127.0.0.1 \
--postgres-db tfgrid-graphql \
--postgres-password postgres \
--postgres-user postgres \
--reset
```
### Method 2: Fill the DB from a Production db dump file, for example if you have `dump.sql` file, you can run:
```bash
psql -h 127.0.0.1 -U postgres -d tfgrid-graphql < dump.sql
```

View File

@ -1,38 +0,0 @@
<h1>The Grid Explorer</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Explorer Overview](#explorer-overview)
- [Explorer Endpoints](#explorer-endpoints)
***
## Introduction
The Grid Explorer is a REST API used to index various information from TFChain.
## Explorer Overview
- Due to limitations on indexing information from the blockchain, complex inter-table queries and filters can't be applied directly on the chain.
- Here comes the TFGridDB, a shadow database that contains all the data on the chain and is updated every 2 hours.
- The explorer can then apply raw SQL queries on the database with all the needed limits and filtering.
- The technology used to extract the info from the blockchain is Subsquid; check the [repo](https://github.com/threefoldtech/tfchain_graphql).
## Explorer Endpoints
| HTTP Verb | Endpoint | Description |
| --------- | --------------------------- | ---------------------------------- |
| GET | `/contracts` | Show all contracts on the chain |
| GET | `/farms` | Show all farms on the chain |
| GET | `/gateways` | Show all gateway nodes on the grid |
| GET | `/gateways/:node_id` | Get a single gateway node details |
| GET | `/gateways/:node_id/status` | Get a single node status |
| GET | `/nodes` | Show all nodes on the grid |
| GET | `/nodes/:node_id` | Get a single node details |
| GET | `/nodes/:node_id/status` | Get a single node status |
| GET | `/stats` | Show the grid statistics |
| GET | `/twins` | Show all the twins on the chain |
| GET | `/nodes/:node_id/statistics`| Get a single node ZOS statistics |
For the available filters on each endpoint, check the `/swagger/index.html` endpoint on the running instance.
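For example, a filtered query against a live instance could look like the following (the query parameters here are assumptions; verify the exact names in the Swagger UI):
```bash
# List nodes that are up, limited to the first five results (parameters assumed from the Swagger docs).
curl "https://gridproxy.dev.grid.tf/nodes?status=up&size=5"
```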

View File

@ -1,117 +0,0 @@
<h1>Running Proxy in Production</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Production Run](#production-run)
- [To upgrade the machine](#to-upgrade-the-machine)
- [Dockerfile](#dockerfile)
- [Update helm package](#update-helm-package)
- [Install the chart using helm package](#install-the-chart-using-helm-package)
***
## Introduction
We show how to run grid proxy in production.
## Production Run
- Download the latest binary [here](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/grid-client)
- Add execution permission to the binary and move it to the bin directory:
```bash
chmod +x ./gridproxy-server
mv ./gridproxy-server /usr/local/bin/gridproxy-server
```
- Add a new systemd service
```bash
cat << EOF > /etc/systemd/system/gridproxy-server.service
[Unit]
Description=grid proxy server
After=network.target
[Service]
ExecStart=gridproxy-server --domain gridproxy.dev.grid.tf --email omar.elawady.alternative@gmail.com -ca https://acme-v02.api.letsencrypt.org/directory --postgres-host 127.0.0.1 --postgres-db db --postgres-password password --postgres-user postgres --mnemonics <insert user mnemonics>
Type=simple
Restart=always
User=root
Group=root
[Install]
WantedBy=multi-user.target
Alias=gridproxy.service
EOF
```
- Enable the service:
```bash
systemctl enable gridproxy.service
```
- Start the service:
```bash
systemctl start gridproxy.service
```
- Check the status:
```bash
systemctl status gridproxy.service
```
- The command options:
- domain: the host domain for which the SSL certificate will be generated.
- email: the email used to generate the SSL certificate.
- ca: certificate authority server URL, e.g.
- let's encrypt staging: `https://acme-staging-v02.api.letsencrypt.org/directory`
- let's encrypt production: `https://acme-v02.api.letsencrypt.org/directory`
- postgres-\*: postgres connection info.
## To upgrade the machine
- Just replace the binary with the new one and restart:
```bash
systemctl restart gridproxy-server.service
```
- If you have changes in `/etc/systemd/system/gridproxy-server.service`, you have to run this command first:
```bash
systemctl daemon-reload
```
## Dockerfile
To build and run the Dockerfile:
```bash
docker build -t threefoldtech/gridproxy .
docker run --name gridproxy -e POSTGRES_HOST="127.0.0.1" -e POSTGRES_PORT="5432" -e POSTGRES_DB="db" -e POSTGRES_USER="postgres" -e POSTGRES_PASSWORD="password" -e MNEMONICS="<insert user mnemonics>" threefoldtech/gridproxy
```
## Update helm package
- Do `helm lint charts/gridproxy`
- Regenerate the packages `helm package -u charts/gridproxy`
- Regenerate index.yaml `helm repo index --url https://threefoldtech.github.io/tfgridclient_proxy/ .`
- Push your changes
## Install the chart using helm package
- Add the repo to your helm:
```bash
helm repo add gridproxy https://threefoldtech.github.io/tfgridclient_proxy/
```
- Install the chart:
```bash
helm install gridproxy gridproxy/gridproxy
```

View File

@ -1,149 +0,0 @@
<h1> Introducing Grid Proxy </h1>
<h2> Table of Contents </h2>
- [About](#about)
- [How to Use the Project](#how-to-use-the-project)
- [Used Technologies \& Prerequisites](#used-technologies--prerequisites)
- [Start for Development](#start-for-development)
- [Setup for Production](#setup-for-production)
- [Get and Install the Binary](#get-and-install-the-binary)
- [Add as a Systemd Service](#add-as-a-systemd-service)
***
<!-- About -->
## About
The TFGrid client Proxy acts as an interface to access information about the grid. It supports features such as filtering, limitation, and pagination to query the various entities on the grid like nodes, contracts and farms. Additionally, the proxy can contact the required twin ID to retrieve stats about the relevant objects and perform ZOS calls.
The proxy is used as the backend of several threefold projects like:
- [Dashboard](../../dashboard/dashboard.md)
<!-- Usage -->
## How to Use the Project
If you don't want to bother with setting up your own instance, you can use one of the live instances. Each works against a different TFChain network.
- Dev network: <https://gridproxy.dev.grid.tf>
- Swagger: <https://gridproxy.dev.grid.tf/swagger/index.html>
- Qa network: <https://gridproxy.qa.grid.tf>
- Swagger: <https://gridproxy.qa.grid.tf/swagger/index.html>
- Test network: <https://gridproxy.test.grid.tf>
- Swagger: <https://gridproxy.test.grid.tf/swagger/index.html>
- Main network: <https://gridproxy.grid.tf>
- Swagger: <https://gridproxy.grid.tf/swagger/index.html>
Or follow the [development guide](#start-for-development) to run yours.
By default, the instance runs against devnet. To change that, you will need to configure it when running the server.
> Note: You may face some differences between each instance and the others. That is normal, because each network is in a different stage of development and works correctly with other parts of the Grid on the same network.
<!-- Prerequisites -->
## Used Technologies & Prerequisites
1. **GoLang**: The two main parts of the project are written in `Go 1.17`; alternatively, you can download the compiled binaries from the GitHub [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases)
2. **Postgresql**: Used to load the TFGrid DB
3. **Docker**: Containerizes the running services such as Postgres and Redis.
4. **Mnemonics**: Secret seeds for a dummy identity to use for the relay client.
For more about the prerequisites and how to set up and configure them, follow the [Setup guide](./setup.md)
<!-- Development -->
## Start for Development
To start the services for development or testing, first make sure you have all the [Prerequisites](#used-technologies--prerequisites).
- Clone this repo
```bash
git clone https://github.com/threefoldtech/tfgrid-sdk-go.git
cd tfgrid-sdk-go/grid-proxy
```
- The `Makefile` has all that you need to deal with Db, Explorer, Tests, and Docs.
```bash
make help # list all the available subcommands.
```
- For a quick test of the explorer server:
```bash
make all-start e=<MNEMONICS>
```
Now you can access the server at `http://localhost:8080`
- Run the tests
```bash
make test-all
```
- Generate docs.
```bash
make docs
```
To run in a development environment, see [here](./db_testing.md) how to generate a test db or load a db dump, then use:
```sh
go run cmds/proxy_server/main.go --address :8080 --log-level debug -no-cert --postgres-host 127.0.0.1 --postgres-db tfgrid-graphql --postgres-password postgres --postgres-user postgres --mnemonics <insert user mnemonics>
```
Then visit `http://localhost:8080/<endpoint>`
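For a quick smoke test (the endpoint and query parameter below are assumptions based on the public instances; see the Swagger docs for the full list):
```bash
# Fetch the first two nodes from the local instance.
curl "http://localhost:8080/nodes?size=2"
```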
For more illustrations about the commands needed to work on the project, see the section [Commands](./commands.md). For more info about the project structure and contributions guidelines check the section [Contributions](./contributions.md).
<!-- Production-->
## Setup for Production
## Get and Install the Binary
- You can either build the project:
```bash
make build
chmod +x cmd/proxy_server/server \
&& mv cmd/proxy_server/server /usr/local/bin/gridproxy-server
```
- Or download a release:
Check the [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) page and edit the next command with the chosen version.
```bash
wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v1.6.7-rc2/tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \
&& tar -xzf tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \
&& chmod +x server \
&& mv server /usr/local/bin/gridproxy-server
```
## Add as a Systemd Service
- Create the service file
```bash
cat << EOF > /etc/systemd/system/gridproxy-server.service
[Unit]
Description=grid proxy server
After=network.target
[Service]
ExecStart=gridproxy-server --domain gridproxy.dev.grid.tf --email omar.elawady.alternative@gmail.com -ca https://acme-v02.api.letsencrypt.org/directory --substrate wss://tfchain.dev.grid.tf/ws --postgres-host 127.0.0.1 --postgres-db db --postgres-password password --postgres-user postgres --mnemonics <insert user mnemonics>
Type=simple
Restart=always
User=root
Group=root
[Install]
WantedBy=multi-user.target
Alias=gridproxy.service
EOF
```

View File

@ -1,25 +0,0 @@
<h1>Grid Proxy</h1>
Welcome to the *Grid Proxy* section of the TFGrid Manual!
In this comprehensive guide, we delve into the intricacies of the ThreeFold Grid Proxy, a fundamental component that empowers the ThreeFold Grid ecosystem.
This section is designed to provide users, administrators, and developers with a detailed understanding of the TFGrid Proxy, offering step-by-step instructions for its setup, essential commands, and insights into its various functionalities.
The Grid Proxy plays a pivotal role in facilitating secure and efficient communication between nodes within the ThreeFold Grid, contributing to the decentralized and autonomous nature of the network.
Whether you are a seasoned ThreeFold enthusiast or a newcomer exploring the decentralized web, this manual aims to be your go-to resource for navigating the ThreeFold Grid Proxy landscape.
To assist you on your journey, we have organized the content into distinct chapters below, covering everything from initial setup procedures and database testing to practical commands, contributions, and insights into the ThreeFold Explorer and the Grid Proxy Database functionalities.
<h2>Table of Contents</h2>
- [Introducing Grid Proxy](./proxy.md)
- [Setup](./setup.md)
- [DB Testing](./db_testing.md)
- [Commands](./commands.md)
- [Contributions](./contributions.md)
- [Explorer](./explorer.md)
- [Database](./database.md)
- [Production](./production.md)
- [Release](./release.md)

View File

@ -1,32 +0,0 @@
<h1>Release Grid-Proxy</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Steps](#steps)
- [Debugging](#debugging)
***
## Introduction
We show the steps to release a new version of the Grid Proxy.
## Steps
To release a new version of the Grid-Proxy component, follow these steps:
1. Update the `appVersion` field in the `charts/Chart.yaml` file. This field should reflect the new version number of the release.
2. The release process includes generating and pushing a Docker image with the latest GitHub tag. This step is automated through the `gridproxy-release.yml` workflow.
3. Trigger the `gridproxy-release.yml` workflow by pushing the desired tag to the repository. This will initiate the workflow, which will generate the Docker image based on the tag and push it to the appropriate registry.
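Pushing a tag follows the usual git flow (the version number below is an example):
```bash
# Create and push an annotated tag to trigger the release workflow (example version number).
git tag -a v1.6.8 -m "release v1.6.8"
git push origin v1.6.8
```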
## Debugging
In the event that the workflow does not run automatically after pushing the tag and making the release, you can manually execute it using the GitHub Actions interface. Follow these steps:
1. Go to the [GitHub Actions page](https://github.com/threefoldtech/tfgrid-sdk-go/actions/workflows/gridproxy-release.yml) for the Grid-Proxy repository.
2. Locate the workflow named `gridproxy-release.yml`.
3. Trigger the workflow manually by selecting the "Run workflow" option.

View File

@ -1,50 +0,0 @@
<h1>Setup</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Install Golang](#install-golang)
- [Docker](#docker)
- [Postgres](#postgres)
- [Get Mnemonics](#get-mnemonics)
***
## Introduction
We show how to set up grid proxy.
## Install Golang
To install Golang, you can follow the official [guide](https://go.dev/doc/install).
## Docker
Docker is useful for running the TFGridDB in a container environment. Read this to [install Docker engine](../../system_administrators/computer_it_basics/docker_basics.md#install-docker-desktop-and-docker-engine).
Note: it will be necessary to follow step #2 in the previous article to run docker without sudo. If you want to avoid that, edit the docker commands in the `Makefile` and add sudo.
## Postgres
If you have docker installed you can run postgres on a container with:
```bash
make db-start
```
Then you can either load a dump of the database if you have one:
```bash
make db-dump p=~/dump.sql
```
Or, more easily, you can fill the database tables with randomly generated data using the script `tools/db/generate.go`. To do that, run:
```bash
make db-fill
```
## Get Mnemonics
1. Install [polkadot extension](https://github.com/polkadot-js/extension) on your browser.
2. Create a new account from the extension. It is important to save the seeds.

View File

@ -1,94 +0,0 @@
<h1> Farming Policies </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Farming Policy Fields](#farming-policy-fields)
- [Limits on linked policy](#limits-on-linked-policy)
- [Creating a Policy](#creating-a-policy)
- [Linking a policy to a Farm](#linking-a-policy-to-a-farm)
***
## Introduction
A farming policy defines how farming rewards are handed out for nodes. Every node has a farming policy attached. A farming policy is either linked to a farm, in which case new nodes are given the farming policy of the farm they are in once they register themselves. Alternatively a farming policy can be a "default". These are not attached to a farm, but instead they are used for nodes registered in farms which don't have a farming policy. Multiple defaults can exist at the same time, and the most fitting should be chosen.
## Farming Policy Fields
A farming policy has the following fields:
- id (used to link policies)
- name
- Default. This indicates if the policy can be used by any new node (if the parent farm does not have a dedicated attached policy). Essentially, a `Default` policy serves as a base which can be overridden per farm by linking a non-default policy to said farm.
- Reward TFT per CU, SU, NU and IPv4
- Minimal uptime needed in integer format (example 995)
- Policy end (After this block number the policy can not be linked to new farms any more)
- If this policy is immutable or not. Immutable policies can never be changed again
Additionally, we also use the following fields, though those are only useful for `Default` farming policies:
- Node needs to be certified
- Farm needs to be certified (with certification level, which will be changed to an enum).
In case a farming policy is not attached to a farm, new nodes will pick the most appropriate farming policy from the default ones. To decide which one to pick, they should be considered in order with most restrictive first until one matches. That means:
- First check for the policy with highest farming certification (in the current case gold) and certified nodes
- Then check for a policy with highest farming certification (in the current case gold) and non certified nodes
- Check for policy without farming certification but certified nodes
- Last check for a policy without any kind of certification
Important here is that certification of a node only happens after it comes live for the first time. As such, when a node gets certified, farming certification needs to be re-evaluated, but only if the currently attached farming policy on the node is a `Default` policy (as specifically linked policies have priority over default ones). When evaluating again, we first consider whether we are eligible for the farming policy linked to the farm, if any.
## Limits on linked policy
When a council member attaches a policy to a farm, limits can be set. These limits define how much a policy can be used for nodes, before it becomes unusable and gets removed. The limits currently are:
- Farming Policy ID: the ID of the farming policy which we want to limit to a farm.
- CU. Every time a node is added in the farm, its CU is calculated and deducted from this amount. If the amount drops below 0, the maximum amount of CU that can be attached to this policy is reached.
- SU. Every time a node is added in the farm, its SU is calculated and deducted from this amount. If the amount drops below 0, the maximum amount of SU that can be attached to this policy is reached.
- End date. After this date the policy is not effective anymore and can't be used. It is removed from the farm and a default policy is used.
- Certification. If set, only certified nodes can get this policy. Non certified nodes get a default policy.
Once a limit is reached, the farming policy is removed from the farm, so new nodes will get one of the default policies until a new policy is attached to the farm.
## Creating a Policy
A council member can create a Farming Policy (DAO) in the following way:
1: Open [PolkadotJS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) apps on the corresponding network and go to `Extrinsics`
2: Now select the account to propose from (should be an account that's a council member).
3: Select as action `dao` -> `propose`
4: Set a `threshold` (amount of farmers to vote)
5: Select as action `tfgridModule` -> `createFarmingPolicy` and fill in all the fields.
6: Create a forum post with the details of the farming policy and fill in the link of that post in the `link` field
7: Give it a good `description`.
8: Duration is optional (by default it's 7 days). A proposal cannot be closed before the duration has expired. If you wish to set a duration, it should be expressed in number of blocks from `now`. For example, 2 hours equals 1200 blocks (blocktime is 6 seconds); in this case, the duration should be filled in as `1200`.
9: If all the fields are filled in, click `Propose`; now farmers can vote. A proposal can be closed manually once there are enough votes AND the proposal has expired. To close, go to extrinsics -> `dao` -> `close` -> fill in the proposal hash and index (both can be found in chainstate).
All (su, cu, nu, ipv4) values should be expressed in units USD. Minimal uptime should be expressed as an integer that represents a percentage (example: `95`).
Policy end is optional (0 or some block number in the future). This is used for expiration.
For reference:
![image](./img/create_policy.png)
## Linking a policy to a Farm
First identify the policy ID to link to a farm. You can check for farming policies in [chainstate](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/chainstate) -> `tfgridModule` -> `farmingPoliciesMap`; start with ID 1 and increment by 1 until you find the farming policy which was created when the proposal expired and was closed.
1: Open [PolkadotJS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) apps on the corresponding network and go to `Extrinsics`
2: Now select the account to propose from (should be an account that's a council member).
3: Select as proposal `dao` -> `propose`
4: Set a `threshold` (amount of farmers to vote)
5: Select as action `tfgridModule` -> `attachPolicyToFarm` and fill in all the fields (FarmID and Limits).
6: Limits contains a `farming_policy_id` (Required) and cu, su, end, node count (which are all optional). It also contains `node_certification`, if this is set to true only certified nodes can have this policy.
7: Create a forum post with the details of why we want to link that farm to that policy and fill in the link of that post in the `link` field
8: Give it a good `description`.
9: Duration is optional (by default it's 7 days). A proposal cannot be closed before the duration has expired. If you wish to set a duration, it should be expressed in number of blocks from `now`. For example, 2 hours equals 1200 blocks (blocktime is 6 seconds); in this case, the duration should be filled in as `1200`.
10: If all the fields are filled in, click `Propose`; now farmers can vote. A proposal can be closed manually once there are enough votes AND the proposal has expired. To close, go to extrinsics -> `dao` -> `close` -> fill in the proposal hash and index (both can be found in chainstate).
For reference:
![image](./img/attach.png)

View File

@ -1,57 +0,0 @@
<h1>ThreeFold Chain</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deployed instances](#deployed-instances)
- [Create a TFChain twin](#create-a-tfchain-twin)
- [Get your twin ID](#get-your-twin-id)
***
## Introduction
ThreeFold blockchain (aka TFChain) serves as a registry for Nodes, Farms, Digital Twins and Smart Contracts.
It is the backbone of [ZOS](https://github.com/threefoldtech/zos) and other components.
## Deployed instances
- Development network (Devnet):
- Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.dev.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.dev.grid.tf#/explorer)
- Websocket url: `wss://tfchain.dev.grid.tf`
- GraphQL UI: [https://graphql.dev.grid.tf/graphql](https://graphql.dev.grid.tf/graphql)
- QA testing network (QAnet):
- Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.qa.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.qa.grid.tf#/explorer)
- Websocket url: `wss://tfchain.qa.grid.tf`
- GraphQL UI: [https://graphql.qa.grid.tf/graphql](https://graphql.qa.grid.tf/graphql)
- Test network (Testnet):
- Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.test.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.test.grid.tf#/explorer)
- Websocket url: `wss://tfchain.test.grid.tf`
- GraphQL UI: [https://graphql.test.grid.tf/graphql](https://graphql.test.grid.tf/graphql)
- Production network (Mainnet):
- Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.grid.tf#/explorer)
- Websocket url: `wss://tfchain.grid.tf`
- GraphQL UI: [https://graphql.grid.tf/graphql](https://graphql.grid.tf/graphql)
## Create a TFChain twin
A twin is a unique identifier linked to a specific account on a given TFChain network.
There are two ways to create a twin:
- With the [Dashboard](../../dashboard/wallet_connector.md)
- a twin is automatically generated while creating a TFChain account
- With the TFConnect app
- a twin is automatically generated while creating a farm (in this case the twin will be created on mainnet)
## Get your twin ID
You can retrieve the twin ID associated with your account by going to `Developer` -> `Chain state` -> `tfgridModule` -> `twinIdByAccountID()`.
![service_contract_twin_from_account](img/service_contract_twin_from_account.png)

View File

@ -1,95 +0,0 @@
<h1> ThreeFold Chain </h1>
<h2> Table of Contents </h2>
- [Introduction](#introduction)
- [Twins](#twins)
- [Farms](#farms)
- [Nodes](#nodes)
- [Node Contract](#node-contract)
- [Rent Contract](#rent-contract)
- [Name Contract](#name-contract)
- [Contract billing](#contract-billing)
- [Contract locking](#contract-locking)
- [Contract grace period](#contract-grace-period)
- [DAO](#dao)
- [Farming Policies](#farming-policies)
- [Node Connection price](#node-connection-price)
- [Node Certifiers](#node-certifiers)
***
## Introduction
ThreeFold Chain (TFChain) is the base layer for everything that interacts with the grid. Nodes, farms and users are registered on the chain. It plays the central role in achieving decentralised consensus between a user and a Node to deploy a certain workload. A contract can be created on the chain that is essentially an agreement between a node and a user.
## Twins
A twin is the central identity object that is used for every entity that lives on the grid. A twin optionally has an IPv6 planetary network address which can be used for communication between twins regardless of their location. A twin is coupled to a private/public keypair on chain. This keypair can hold TFT on TFChain.
## Farms
A farm must be created before a Node can be booted. Every farm needs to have a unique name and is linked to the twin that creates the farm. Once a farm is created, a unique ID is generated. This ID can be provided to the boot image of a Node.
## Nodes
When a node is booted for the first time, it registers itself on the chain and a unique identity is generated for this Node.
## Node Contract
A node contract is a contract between a user and a Node to deploy a certain workload. The contract is specified as following:
```
{
"contract_id": auto generated,
"node_id": unique id of the node,
"deployment_data": some additional deployment data
"deployment_hash": hash of the deployment definition signed by the user
"public_ips": number of public ips to attach to the deployment contract
}
```
We don't save the raw workload definition on the chain, only a hash of the definition. After the contract is created, the user must send the raw deployment to the node specified in the contract. They can find where to send this data by looking up the Node's twin and contacting that twin over the planetary network.
## Rent Contract
A rent contract is also a contract between a user and a Node, but instead of reserving part of the node's capacity, the full capacity is rented. Once a rent contract is created on a Node by a user, only this user can deploy node contracts on that specific node. A discount of 50% is given if the user wishes to rent the full capacity of a node by creating a rent contract. All node contracts deployed on a node where a user has a rent contract are free of use except for the public IPs, which can be added on a node contract.
## Name Contract
A name contract is a contract that specifies a unique name to be used on the grid's web gateways. Once a name contract is created, this name can be used as an entrypoint for an application on the grid.
## Contract billing
Every contract is billed every hour on the chain; the amount due is deducted from the user's wallet every 24 hours or when the user cancels the contract. The total amount accrued in those 24 hours gets sent to the following destinations:
- 10% goes to the threefold foundation
- 5% goes to staking pool wallet (to be implemented in a later phase)
- 50% goes to certified sales channel
- 35% TFT gets burned
See [pricing](../../../knowledge_base/cloud/pricing/pricing.md) for more information on how the cost for a contract is calculated.
## Contract locking
To avoid overloading the chain with transfer events, we lock the amount due for a contract every hour and, after 24 hours, unlock the amount and deduct it in one go. This lock is saved on a user's account; if the user has multiple contracts, the locked amounts are stacked.
## Contract grace period
When the owner of a contract runs out of funds on their wallet to pay for a deployment, the contract goes into a Grace Period state. The deployment, whatever that might be, will be inaccessible to the user during this period. When the wallet is funded with TFT again, the contract goes back to a normal operating state. If the grace period runs out (by default 2 weeks), the user's deployment and data will be deleted from the node.
## DAO
See [DAO](../../dashboard/tfchain/tf_dao.md) for more information on the DAO on TF Chain.
## Farming Policies
See [farming_policies](farming_policies.md) for more information on the farming policies on TF Chain.
## Node Connection price
A connection price is set for every new Node that boots on the Grid. This connection price influences the amount of TFT farmed in a period. The connection price set on a node is permanent. The DAO can propose an increase or decrease of the connection price. At the time of writing the connection price is set to $0.08. When the DAO proposes a connection price and the vote passes, new nodes will attach to the new connection price.
## Node Certifiers
Node certifiers are entities that are allowed to set a node's certification level to `Certified`. The DAO can propose to add or remove entities that can certify nodes. This is useful for allowing approved resellers of ThreeFold nodes to mark nodes as certified. A certified node farms 25% more tokens than a `Diy` node.

View File

@ -1,142 +0,0 @@
<h1>External Service Contract: How to set and execute</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Step 1: Create the contract and get its unique ID](#step-1-create-contract--get-unique-id)
- [Step 2: Fill contract](#step-2-fill-contract)
- [Step 3: Both parties approve contract](#step-3-both-parties-approve-contract)
- [Step 4: Bill for the service](#step-4-bill-for-the-service)
- [Step 5: Cancel the contract](#step-5-cancel-the-contract)
***
# Introduction
It is now possible to create a generic contract between two TFChain users (without restriction of account type) for some external service and bill for it.
The initial scenario is when two parties, a service provider and a consumer of the service, want to use TFChain to automatically handle the billing/payment process for an agreement (in TFT) they want to make for a service which is external from the grid.
This is a more direct and generic feature compared to the initial rewarding model, where a service provider (or solution provider) receives TFT from a rewards distribution process linked to a node contract and based on cloud capacity consumption, which follows specific billing rules.
The initial requirements are:
- Both service and consumer need to have their respective twins created on TFChain (if not, see [here](tfchain.md#create-a-tfchain-twin) how to do it)
- The consumer account needs to be funded (lack of funds will simply result in contract cancellation when billed)
In the following steps we detail the sequence of extrinsics that need to be called in TFChain Polkadot portal for setting up and executing such contract.
<!-- We also show how to check if everything is going the right way via the TFChain GraphQL interface. -->
Make sure to use the right [links](tfchain.md#deployed-instances) depending on the targeted network.
# Step 1: Create contract / Get unique ID
## Create service contract
The contract creation can be initiated by both service or consumer.
In the TFChain Polkadot portal, the one who initiates the contract should go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractCreate()`, using the account they intend to use in the contract, and select the corresponding service and consumer accounts before submitting the transaction.
![service_contract_create](img/service_contract_create.png)
Once executed, the service contract is `Created` between the two parties and a unique ID is generated.
## Last service contract ID
To get the last generated service contract ID go to `Developer` -> `Chain state` -> `smartContractModule` -> `serviceContractID()`.
![service_contract_id](img/service_contract_id.png)
## Parse service contract
To get the corresponding contract details, go to `Developer` -> `Chain state` -> `smartContractModule` -> `serviceContracts()` and provide the contract ID.
You should see the following details:
![service_contract_state](img/service_contract_state.png)
Check if the contract fields are correct, especially the twin ID of both service and consumer, to be sure you get the right contract ID, referenced as `serviceContractId`.
## Wrong contract ID ?
If the twin IDs on the service contract fields are wrong ([how to get my twin ID?](tfchain.md#get-your-twin-id)), it means the contract does not correspond to the last created contract.
In this case, parse the last contracts on the stack by decreasing `serviceContractId` and try to identify the right one; or the contract was simply not created, in which case you should repeat the creation process and check the error log.
# Step 2: Fill contract
Once created, the service contract must be filled with its relative `per hour` fees:
- `baseFee` is the constant "per hour" price (in TFT) for the service.
- `variableFee` is the maximum "per hour" amount (in TFT) that can be billed extra.
To provide these values (only the service can set fees), go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractSetFees()`, specifying `serviceContractId`.
![service_contract_set_fees](img/service_contract_set_fees.png)
Some metadata (the description of the service for example) must be filled in a similar way (`Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractSetMetadata()`).
In this case either the service or the consumer can set the metadata.
![service_contract_set_metadata](img/service_contract_set_metadata.png)
The agreement will be automatically considered `Ready` when both metadata and fees are set (`metadata` not empty and `baseFee` greater than zero).
Note that until this condition is reached, both extrinsics can still be called to modify the agreement.
You can check the contract status at each step of the flow by parsing it as shown [here](#parse-service-contract).
# Step 3: Both parties approve contract
Now that the agreement is ready, the contract can be submitted for approval.
To approve the agreement, go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractApprove()` specifying `serviceContractId`.
![service_contract_approve](img/service_contract_approve.png)
To reject the agreement, go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractReject()` specifying `serviceContractId`.
![service_contract_reject](img/service_contract_reject.png)
The contract needs to be explicitly `Approved` by both service and consumer to be ready for billing.
Before reaching this state, if one of the parties decides to call the rejection extrinsic, it will instantly lead to the cancellation of the contract (and its permanent removal).
# Step 4: Bill for the service
Once the contract is accepted by both parties, it can be billed.
## Send bill to consumer
Only the service can bill the consumer, by going to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractBill()` and specifying `serviceContractId` and billing information such as `variableAmount` and some `metadata`.
![service_contract_bill](img/service_contract_bill.png)
## Billing frequency
⚠️ Important: because a service should not charge the user if it doesn't work, bills must be sent at intervals of less than 1 hour.
Any larger interval will result in a bill bounded to 1 hour (in other words, extra time will not be billed).
It is the service's responsibility to bill at the right frequency!
## Amount due calculation
When the bill is received, the chain calculates the bill amount based on the agreement values as follows:
~~~
amount = baseFee * T / 3600 + variableAmount
~~~
where `T` is the elapsed time, in seconds and bounded by 3600 (see [above](#billing-frequency)), since the last effective billing operation occurred.
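For example, with a `baseFee` of 100 TFT, a bill sent after 30 minutes (`T = 1800`) and a `variableAmount` of 10 TFT:
~~~
amount = 100 * 1800 / 3600 + 10 = 60 TFT
~~~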
## Protection against draining
Note that if `variableAmount` is too high (i.e. `variableAmount > variableFee * T / 3600`), the billing extrinsic will fail.
The `variableFee` value in the contract is interpreted as being "per hour" and acts as a protection mechanism to avoid draining the consumer's funds.
Indeed, while it is technically possible for the service to send a bill every second, there would be no gain in doing so (other than uselessly overloading the chain).
So it is also the service's responsibility to set a suitable `variableAmount` according to the billing frequency!
## Billing considerations
Then, if all goes well and no error is dispatched after submitting the transaction, the consumer pays the due amount calculated from the bill (see the calculation detail [above](#amount-due-calculation)).
In practice, the amount is transferred from the consumer twin account to the service twin account.
Be aware that if the consumer is out of funds, the billing will fail AND the contract will automatically be canceled.
# Step 5: Cancel the contract
At any moment of the flow after the contract is created, it can be canceled (and definitively removed).
Only the service or the consumer can do it, by going to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractCancel()` and specifying `serviceContractId`.
![service_contract_cancel](img/service_contract_cancel.png)

View File

@ -1,81 +0,0 @@
<h1>Solution Provider</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Changes to Contract Creation](#changes-to-contract-creation)
- [Creating a Provider](#creating-a-provider)
- [Council needs to approve a provider before it can be used](#council-needs-to-approve-a-provider-before-it-can-be-used)
***
## Introduction
> Note: While the solution provider program is still active, the plan is to discontinue the program in the near future. We will update the manual as we get more information. We currently do not accept new solution providers.
A "solution" is something running on the grid, created by a community member. This can be brought forward to the council, who can vote on it to recognize it as a solution. On contract creation, a recognized solution can be referenced, in which case part of the payment goes toward the address coupled to the solution. On chain a solution looks as follows:
- Description (should be some text, limited in length. The limit should be rather low; if a longer description is desired, a link can be inserted. 160 characters should be enough).
- Up to 5 payout addresses, each with a payout percentage. This is the percentage of the payout received by the associated address. The amount is deducted from the payout to the treasury and specified as a percentage of the total contract cost. As such, the sum of these percentages can never exceed 50%. If this value is not 50%, the remainder is paid to the treasury. Example: 10% payout percentage to addr 1, 5% payout to addr 2. This means 15% goes to the 2 listed addresses combined and 35% goes to the treasury (instead of the usual 50%). The rest remains as is. If the cost would be 10 TFT, 1 TFT goes to address 1, 0.5 TFT goes to address 2, and 3.5 TFT goes to the treasury, instead of the default 5 TFT to the treasury.
- A unique code. This code is used to link a solution to the contract (numeric ID).
This means contracts need to carry an optional solution code. If the code is not specified (default), the 50% goes entirely to the treasury (as is always the case today).
A solution can be created by calling the extrinsic `smartContractModule` -> `createSolutionProvider` with parameters:
- description
- link (to website)
- list of providers
Provider:
- who (account id)
- take (the amount of take this account should get), specified as an integer of max 50, example: 25
A forum post should be created with the details of the created solution provider; the DAO can vote to approve it or not. If the solution provider gets approved, it can be referenced on contract creation.
Note that a solution can be deleted. In this case, existing contracts should fall back to the default behavior (i.e. if code not found -> default).
## Changes to Contract Creation
When creating a contract, a `solution_provider_id` can be passed. An error will be returned if an invalid or non-approved solution provider id is passed.
## Creating a Provider
Creating a provider is as easy as going to the [polkadotJS UI](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/extrinsics) (Currently only on devnet)
Select module `SmartContractModule` -> `createSolutionProvider(..)`
Fill in all the details. You can specify up to 5 target accounts which can have a take of the TFT generated from being a provider, up to a total maximum of 50%. `Take` should be specified as an integer, example: `25`.
Once this object is created, a forum post should be created here: <https://forum.threefold.io/>
![create](./img/create_provider.png)
## Council needs to approve a provider before it can be used
First propose the solution to be approved:
![propose_approve](./img/propose_approve.png)
After submission it should look like this:
![proposed_approved](./img/proposed_approve.png)
Now another member of the council needs to vote:
![vote](./img/vote_proposal.png)
After enough votes are reached, it can be closed:
![close](./img/close_proposal.png)
If the close was executed without error, the solution is approved and ready to be used.
Query the solution: `chainstate` -> `SmartContractModule` -> `solutionProviders`
![query](./img/query_provider.png)
Now the solution provider can be referenced on contract creation:
![create](./img/create_contract.png)
<h1>TFCMD</h1>
TFCMD (`tfcmd`) is a command line interface for interacting with and developing on the ThreeFold Grid.
Consult the [ThreeFoldTech TFCMD repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/grid-cli) for the latest updates. Make sure to read the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md).
<h2>Table of Contents</h2>
- [Getting Started](./tfcmd_basics.md)
- [Deploy a VM](./tfcmd_vm.md)
- [Deploy Kubernetes](./tfcmd_kubernetes.md)
- [Deploy ZDB](./tfcmd_zdbs.md)
- [Gateway FQDN](./tfcmd_gateway_fqdn.md)
- [Gateway Name](./tfcmd_gateway_name.md)
- [Contracts](./tfcmd_contracts.md)
<h1>TFCMD Getting Started</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Installation](#installation)
- [Login](#login)
- [Commands](#commands)
- [Using TFCMD](#using-tfcmd)
***
## Introduction
This section covers the basics on how to set up and use TFCMD (`tfcmd`).
TFCMD is available as binaries. Make sure to download the latest release and to stay up to date with new releases.
## Installation
An easy way to use TFCMD is to download and extract the TFCMD binaries to your path.
- Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases)
- ```
wget <binaries_url>
```
- Extract the binaries
- ```
tar -xvf <binaries_file>
```
- Move `tfcmd` to any `$PATH` directory:
```bash
mv tfcmd /usr/local/bin
```
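For example, on a Linux x86_64 machine the full sequence might look as follows. This is a sketch assuming the v0.14.4 release archive, which bundles the `tfcmd` binary; always check the [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) page for the latest version.
```bash
# Download the release archive, extract it and install tfcmd into the PATH
wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v0.14.4/tfgrid-sdk-go_Linux_x86_64.tar.gz
tar -xvf tfgrid-sdk-go_Linux_x86_64.tar.gz
mv tfcmd /usr/local/bin
tfcmd version
```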
## Login
Before interacting with the ThreeFold Grid with `tfcmd`, you should log in with your mnemonics and specify the grid network:
```console
$ tfcmd login
Please enter your mnemonics: <mnemonics>
Please enter grid network (main,test): <grid-network>
```
This validates your mnemonics and stores your mnemonics and network in your default configuration directory.
Check [UserConfigDir()](https://pkg.go.dev/os#UserConfigDir) for your default configuration directory.
## Commands
You can run the command `tfcmd help` at any time to access the help section. This will also display the available commands.
| Command | Description |
| ---------- | ---------------------------------------------------------- |
| cancel | Cancel resources on Threefold grid |
| completion | Generate the autocompletion script for the specified shell |
| deploy | Deploy resources to Threefold grid |
| get | Get a deployed resource from Threefold grid |
| help | Help about any command |
| login | Login with mnemonics to a grid network |
| version | Get latest build tag |
## Using TFCMD
Once you've logged in, you can use commands to deploy workloads on the TFGrid. Read the next sections for more information on different types of workloads available with TFCMD.
<h1>Contracts</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Get](#get)
- [Get Contracts](#get-contracts)
- [Get Contract](#get-contract)
- [Cancel](#cancel)
- [Optional Flags](#optional-flags)
***
## Introduction
We explain how to handle contracts on the TFGrid with `tfcmd`.
## Get
### Get Contracts
Get all contracts
```bash
tfcmd get contracts
```
Example:
```console
$ tfcmd get contracts
5:13PM INF starting peer session=tf-1184566 twin=81
Node contracts:
ID Node ID Type Name Project Name
50977 21 network vm1network vm1
50978 21 vm vm1 vm1
50980 14 Gateway Name gatewaytest gatewaytest
Name contracts:
ID Name
50979 gatewaytest
```
### Get Contract
Get specific contract
```bash
tfcmd get contract <contract-id>
```
Example:
```console
$ tfcmd get contract 50977
5:14PM INF starting peer session=tf-1185180 twin=81
5:14PM INF contract:
{
"contract_id": 50977,
"twin_id": 81,
"state": "Created",
"created_at": 1702480020,
"type": "node",
"details": {
"nodeId": 21,
"deployment_data": "{\"type\":\"network\",\"name\":\"vm1network\",\"projectName\":\"vm1\"}",
"deployment_hash": "21adc91ef6cdc915d5580b3f12732ac9",
"number_of_public_ips": 0
}
}
```
## Cancel
Cancel specified contracts or all contracts.
```bash
tfcmd cancel contracts <contract-id>... [Flags]
```
Example:
```console
$ tfcmd cancel contracts 50856 50857
5:17PM INF starting peer session=tf-1185964 twin=81
5:17PM INF contracts canceled successfully
```
### Optional Flags
- all: cancel all of the twin's contracts.
Example:
```console
$ tfcmd cancel contracts --all
5:17PM INF starting peer session=tf-1185964 twin=81
5:17PM INF contracts canceled successfully
```
<h1>Gateway FQDN</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)
***
## Introduction
We explain how to use gateway fully qualified domain names on the TFGrid using `tfcmd`.
## Deploy
```bash
tfcmd deploy gateway fqdn [flags]
```
### Required Flags
- name: name for the gateway deployment, also used for canceling the deployment; must be unique.
- node: node ID to deploy the gateway on.
- backends: list of backends the gateway will forward requests to.
- fqdn: FQDN pointing to the specified node.
### Optional Flags
- tls: add TLS passthrough option (default false).
Example:
```console
$ tfcmd deploy gateway fqdn -n gatewaytest --node 14 --backends http://93.184.216.34:80 --fqdn example.com
3:34PM INF deploying gateway fqdn
3:34PM INF gateway fqdn deployed
```
## Get
```bash
tfcmd get gateway fqdn <gateway>
```
`gateway` is the name used when deploying the gateway FQDN using tfcmd.
Example:
```console
$ tfcmd get gateway fqdn gatewaytest
2:05PM INF gateway fqdn:
{
"NodeID": 14,
"Backends": [
"http://93.184.216.34:80"
],
"FQDN": "awady.gridtesting.xyz",
"Name": "gatewaytest",
"TLSPassthrough": false,
"Description": "",
"NodeDeploymentID": {
"14": 19653
},
"SolutionType": "gatewaytest",
"ContractID": 19653
}
```
## Cancel
```bash
tfcmd cancel <deployment-name>
```
`deployment-name` is the name of the deployment specified while deploying using tfcmd.
Example:
```console
$ tfcmd cancel gatewaytest
3:37PM INF canceling contracts for project gatewaytest
3:37PM INF gatewaytest canceled
```
<h1>Gateway Name</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)
***
## Introduction
We explain how to use gateway names on the TFGrid using `tfcmd`.
## Deploy
```bash
tfcmd deploy gateway name [flags]
```
### Required Flags
- name: name for the gateway deployment, also used for canceling the deployment; must be unique.
- backends: list of backends the gateway will forward requests to.
### Optional Flags
- node: node ID the gateway should be deployed on.
- farm: farm ID the gateway should be deployed on; if set, an available node that fits the VM specs is chosen from the farm (default 1). Note: the node and farm flags cannot both be set.
- tls: add TLS passthrough option (default false).
Example:
```console
$ tfcmd deploy gateway name -n gatewaytest --node 14 --backends http://93.184.216.34:80
3:34PM INF deploying gateway name
3:34PM INF fqdn: gatewaytest.gent01.dev.grid.tf
```
## Get
```bash
tfcmd get gateway name <gateway>
```
`gateway` is the name used when deploying the gateway name using tfcmd.
Example:
```console
$ tfcmd get gateway name gatewaytest
1:56PM INF gateway name:
{
"NodeID": 14,
"Name": "gatewaytest",
"Backends": [
"http://93.184.216.34:80"
],
"TLSPassthrough": false,
"Description": "",
"SolutionType": "gatewaytest",
"NodeDeploymentID": {
"14": 19644
},
"FQDN": "gatewaytest.gent01.dev.grid.tf",
"NameContractID": 19643,
"ContractID": 19644
}
```
## Cancel
```bash
tfcmd cancel <deployment-name>
```
`deployment-name` is the name of the deployment specified while deploying using tfcmd.
Example:
```console
$ tfcmd cancel gatewaytest
3:37PM INF canceling contracts for project gatewaytest
3:37PM INF gatewaytest canceled
```
<h1>Kubernetes</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)
***
## Introduction
In this section, we explain how to deploy Kubernetes workloads on the TFGrid using `tfcmd`.
## Deploy
```bash
tfcmd deploy kubernetes [flags]
```
### Required Flags
- name: name for the master node deployment, also used for canceling the cluster deployment; must be unique.
- ssh: path to the public SSH key to set in the master node.
### Optional Flags
- master-node: node ID the master should be deployed on.
- master-farm: farm ID the master should be deployed on; if set, an available node that fits the master specs is chosen from the farm (default 1). Note: the master-node and master-farm flags cannot both be set.
- workers-node: node ID the workers should be deployed on.
- workers-farm: farm ID the workers should be deployed on; if set, an available node that fits the worker specs is chosen from the farm (default 1). Note: the workers-node and workers-farm flags cannot both be set.
- ipv4: assign public ipv4 for master node (default false).
- ipv6: assign public ipv6 for master node (default false).
- ygg: assign yggdrasil ip for master node (default true).
- master-cpu: number of cpu units for master node (default 1).
- master-memory: master node memory size in GB (default 1).
- master-disk: master node disk size in GB (default 2).
- workers-number: number of workers nodes (default 0).
- workers-ipv4: assign public ipv4 for each worker node (default false).
- workers-ipv6: assign public ipv6 for each worker node (default false).
- workers-ygg: assign yggdrasil ip for each worker node (default true).
- workers-cpu: number of cpu units for each worker node (default 1).
- workers-memory: memory size for each worker node in GB (default 1).
- workers-disk: disk size in GB for each worker node (default 2).
Example:
```console
$ tfcmd deploy kubernetes -n kube --ssh ~/.ssh/id_rsa.pub --master-node 14 --workers-number 2 --workers-node 14
4:21PM INF deploying network
4:22PM INF deploying cluster
4:22PM INF master yggdrasil ip: 300:e9c4:9048:57cf:504f:c86c:9014:d02d
```
## Get
```bash
tfcmd get kubernetes <kubernetes>
```
`kubernetes` is the name used when deploying the Kubernetes cluster using tfcmd.
Example:
```console
$ tfcmd get kubernetes kube
3:14PM INF k8s cluster:
{
"Master": {
"Name": "kube",
"Node": 14,
"DiskSize": 2,
"PublicIP": false,
"PublicIP6": false,
"Planetary": true,
"Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
"FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
"ComputedIP": "",
"ComputedIP6": "",
"YggIP": "300:e9c4:9048:57cf:e8a0:662b:4e66:8faa",
"IP": "10.20.2.2",
"CPU": 1,
"Memory": 1024
},
"Workers": [
{
"Name": "worker1",
"Node": 14,
"DiskSize": 2,
"PublicIP": false,
"PublicIP6": false,
"Planetary": true,
"Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
"FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
"ComputedIP": "",
"ComputedIP6": "",
"YggIP": "300:e9c4:9048:57cf:66d0:3ee4:294e:d134",
"IP": "10.20.2.2",
"CPU": 1,
"Memory": 1024
},
{
"Name": "worker0",
"Node": 14,
"DiskSize": 2,
"PublicIP": false,
"PublicIP6": false,
"Planetary": true,
"Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist",
"FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723",
"ComputedIP": "",
"ComputedIP6": "",
"YggIP": "300:e9c4:9048:57cf:1ae5:cc51:3ffc:81e",
"IP": "10.20.2.2",
"CPU": 1,
"Memory": 1024
}
],
"Token": "",
"NetworkName": "",
"SolutionType": "kube",
"SSHKey": "",
"NodesIPRange": null,
"NodeDeploymentID": {
"14": 22743
}
}
```
## Cancel
```bash
tfcmd cancel <deployment-name>
```
`deployment-name` is the name of the deployment specified while deploying using tfcmd.
Example:
```console
$ tfcmd cancel kube
3:37PM INF canceling contracts for project kube
3:37PM INF kube canceled
```
<h1>Deploy a VM</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Flags](#flags)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Examples](#examples)
- [Deploy a VM without GPU](#deploy-a-vm-without-gpu)
- [Deploy a VM with GPU](#deploy-a-vm-with-gpu)
- [Get](#get)
- [Get Example](#get-example)
- [Cancel](#cancel)
- [Cancel Example](#cancel-example)
- [Questions and Feedback](#questions-and-feedback)
***
# Introduction
In this section, we explore how to deploy a virtual machine (VM) on the ThreeFold Grid using `tfcmd`.
# Deploy
You can deploy a VM with `tfcmd` using the following template accompanied by required and optional flags:
```bash
tfcmd deploy vm [flags]
```
## Flags
When you use `tfcmd`, there are two required flags (`name` and `ssh`), while the remaining flags are optional. Optional flags can be used, for example, to deploy a VM with a GPU or to set an IPv6 address, among other things.
### Required Flags
- **name**: name for the VM deployment, also used for canceling the deployment. The name must be unique.
- **ssh**: path to public ssh key to set in the VM.
### Optional Flags
- **node**: node ID the VM should be deployed on.
- **farm**: farm ID the VM should be deployed on; if set, an available node that fits the VM specs is chosen from the farm (default `1`). Note: the node and farm flags cannot both be set.
- **cpu**: number of cpu units (default `1`).
- **disk**: size of disk in GB mounted on `/data`. If not set, no disk workload is made.
- **entrypoint**: entrypoint for the VM FList (default `/sbin/zinit init`). Note: setting this without the flist option will fail.
- **flist**: FList used in the VM (default `https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist`). Note: setting this without the entrypoint option will fail.
- **ipv4**: assign public ipv4 for the VM (default `false`).
- **ipv6**: assign public ipv6 for the VM (default `false`).
- **memory**: memory size in GB (default `1`).
- **rootfs**: root filesystem size in GB (default `2`).
- **ygg**: assign yggdrasil ip for the VM (default `true`).
- **gpus**: assign a list of GPU IDs to the VM. Note: setting this without the node option will fail.
## Examples
We present simple examples on how to deploy a virtual machine with or without a GPU using `tfcmd`.
### Deploy a VM without GPU
```console
$ tfcmd deploy vm --name examplevm --ssh ~/.ssh/id_rsa.pub --cpu 2 --memory 4 --disk 10
12:06PM INF deploying network
12:06PM INF deploying vm
12:07PM INF vm yggdrasil ip: 300:e9c4:9048:57cf:7da2:ac99:99db:8821
```
### Deploy a VM with GPU
```console
$ tfcmd deploy vm --name examplevm --ssh ~/.ssh/id_rsa.pub --cpu 2 --memory 4 --disk 10 --gpus '0000:0e:00.0/1882/543f' --gpus '0000:0e:00.0/1887/593f' --node 12
12:06PM INF deploying network
12:06PM INF deploying vm
12:07PM INF vm yggdrasil ip: 300:e9c4:9048:57cf:7da2:ac99:99db:8821
```
# Get
To get the VM, use the following template:
```bash
tfcmd get vm <vm>
```
Make sure to replace `<vm>` with the name of the VM specified using `tfcmd`.
## Get Example
In the following example, the name of the deployment to get is `examplevm`.
```console
$ tfcmd get vm examplevm
3:20PM INF vm:
{
"Name": "examplevm",
"NodeID": 15,
"SolutionType": "examplevm",
"SolutionProvider": null,
"NetworkName": "examplevmnetwork",
"Disks": [
{
"Name": "examplevmdisk",
"SizeGB": 10,
"Description": ""
}
],
"Zdbs": [],
"Vms": [
{
"Name": "examplevm",
"Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist",
"FlistChecksum": "",
"PublicIP": false,
"PublicIP6": false,
"Planetary": true,
"Corex": false,
"ComputedIP": "",
"ComputedIP6": "",
"YggIP": "301:ad3a:9c52:98d1:cd05:1595:9abb:e2f1",
"IP": "10.20.2.2",
"Description": "",
"CPU": 2,
"Memory": 4096,
"RootfsSize": 2048,
"Entrypoint": "/sbin/zinit init",
"Mounts": [
{
"DiskName": "examplevmdisk",
"MountPoint": "/data"
}
],
"Zlogs": null,
"EnvVars": {
"SSH_KEY": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcGrS1RT36rHAGLK3/4FMazGXjIYgWVnZ4bCvxxg8KosEEbs/DeUKT2T2LYV91jUq3yibTWwK0nc6O+K5kdShV4qsQlPmIbdur6x2zWHPeaGXqejbbACEJcQMCj8szSbG8aKwH8Nbi8BNytgzJ20Ysaaj2QpjObCZ4Ncp+89pFahzDEIJx2HjXe6njbp6eCduoA+IE2H9vgwbIDVMQz6y/TzjdQjgbMOJRTlP+CzfbDBb6Ux+ed8F184bMPwkFrpHs9MSfQVbqfIz8wuq/wjewcnb3wK9dmIot6CxV2f2xuOZHgNQmVGratK8TyBnOd5x4oZKLIh3qM9Bi7r81xCkXyxAZbWYu3gGdvo3h85zeCPGK8OEPdYWMmIAIiANE42xPmY9HslPz8PAYq6v0WwdkBlDWrG3DD3GX6qTt9lbSHEgpUP2UOnqGL4O1+g5Rm9x16HWefZWMjJsP6OV70PnMjo9MPnH+yrBkXISw4CGEEXryTvupfaO5sL01mn+UOyE= abdulrahman@AElawady-PC\n"
},
"NetworkName": "examplevmnetwork"
}
],
"QSFS": [],
"NodeDeploymentID": {
"15": 22748
},
"ContractID": 22748
}
```
# Cancel
To cancel your VM deployment, use the following template:
```bash
tfcmd cancel <deployment-name>
```
Make sure to replace `<deployment-name>` with the name of the deployment specified using `tfcmd`.
## Cancel Example
In the following example, the name of the deployment to cancel is `examplevm`.
```console
$ tfcmd cancel examplevm
3:37PM INF canceling contracts for project examplevm
3:37PM INF examplevm canceled
```
# Questions and Feedback
If you have any questions or feedback, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
<h1>ZDBs</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Deploy](#deploy)
- [Required Flags](#required-flags)
- [Optional Flags](#optional-flags)
- [Get](#get)
- [Cancel](#cancel)
***
## Introduction
In this section, we explore how to use ZDB-related commands with `tfcmd` to interact with the TFGrid.
## Deploy
```bash
tfcmd deploy zdb [flags]
```
### Required Flags
- project_name: project name for the ZDBs deployment, also used for canceling the deployment; must be unique.
- size: HDD size of the ZDB in GB.
### Optional Flags
- node: node ID the ZDBs should be deployed on.
- farm: farm ID the ZDBs should be deployed on; if set, an available node that fits the ZDB deployment specs is chosen from the farm (default 1). Note: the node and farm flags cannot both be set.
- count: number of ZDBs to be deployed (default 1).
- names: a list of names for the ZDBs.
- password: password for the deployed ZDBs.
- description: an optional description for your ZDBs.
- mode: the mode in which the 0-db operates (default user).
- public: whether the ZDB gets a public IPv6 (default false).
Example:
- Deploying ZDBs
```console
$ tfcmd deploy zdb --project_name examplezdb --size=10 --count=2 --password=password
12:06PM INF deploying zdbs
12:06PM INF zdb 'examplezdb0' is deployed
12:06PM INF zdb 'examplezdb1' is deployed
```
## Get
```bash
tfcmd get zdb <zdb-project-name>
```
`zdb-project-name` is the name of the deployment specified while deploying using tfcmd.
Example:
```console
$ tfcmd get zdb examplezdb
3:20PM INF zdb:
{
"Name": "examplezdb",
"NodeID": 11,
"SolutionType": "examplezdb",
"SolutionProvider": null,
"NetworkName": "",
"Disks": [],
"Zdbs": [
{
"name": "examplezdb1",
"password": "password",
"public": false,
"size": 10,
"description": "",
"mode": "user",
"ips": [
"2a10:b600:1:0:c4be:94ff:feb1:8b3f",
"302:9e63:7d43:b742:469d:3ec2:ab15:f75e"
],
"port": 9900,
"namespace": "81-36155-examplezdb1"
},
{
"name": "examplezdb0",
"password": "password",
"public": false,
"size": 10,
"description": "",
"mode": "user",
"ips": [
"2a10:b600:1:0:c4be:94ff:feb1:8b3f",
"302:9e63:7d43:b742:469d:3ec2:ab15:f75e"
],
"port": 9900,
"namespace": "81-36155-examplezdb0"
}
],
"Vms": [],
"QSFS": [],
"NodeDeploymentID": {
"11": 36155
},
"ContractID": 36155,
"IPrange": ""
}
```
## Cancel
```bash
tfcmd cancel <zdb-project-name>
```
`zdb-project-name` is the name of the deployment specified while deploying using tfcmd.
Example:
```console
$ tfcmd cancel examplezdb
3:37PM INF canceling contracts for project examplezdb
3:37PM INF examplezdb canceled
```
<h1>TFROBOT</h1>
TFROBOT (`tfrobot`) is a command line interface tool that offers simultaneous mass deployment of groups of VMs on the ThreeFold Grid, with support for multiple retries on failed deployments and customizable configurations: you can define node groups, VM groups and other settings through a YAML or a JSON file.
Consult the [ThreeFoldTech TFROBOT repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/tfrobot) for the latest updates and read the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) to get up to speed if needed.
<h2>Table of Contents</h2>
- [Installation](./tfrobot_installation.md)
- [Configuration File](./tfrobot_config.md)
- [Deployment](./tfrobot_deploy.md)
- [Commands and Flags](./tfrobot_commands_flags.md)
- [Supported Configurations](./tfrobot_configurations.md)
<h1> Commands and Flags </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Commands](#commands)
- [Subcommands](#subcommands)
- [Flags](#flags)
***
## Introduction
We present the various commands, subcommands and flags available with TFROBOT.
## Commands
You can run the command `tfrobot help` at any time to access the help section. This will also display the available commands.
| Command | Description |
| ---------- | ---------------------------------------------------------- |
| completion | Generate the autocompletion script for the specified shell |
| help | Help about any command |
| version | Get latest build tag |
Use `tfrobot [command] --help` for more information about a command.
## Subcommands
You can use subcommands to deploy and cancel workloads on the TFGrid.
- **deploy:** used to mass deploy groups of vms with specific configurations
```bash
tfrobot deploy -c path/to/your/config.yaml
```
- **cancel:** used to cancel all vms deployed using specific configurations
```bash
tfrobot cancel -c path/to/your/config.yaml
```
- **load:** used to load all vms deployed using specific configurations
```bash
tfrobot load -c path/to/your/config.yaml
```
## Flags
You can use different flags to configure your deployment.
| Flag | Usage |
| :---: | :---: |
| -c | used to specify path to configuration file |
| -o | used to specify path to output file to store the output info in |
| -d | allow debug logs to appear in the output logs |
| -h | help |
> **Note:** Make sure to use each flag only once. If a flag is repeated, all values except the last are ignored.
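For instance, the flags can be combined in a single command. The sketch below reads the configuration from `config.yaml`, stores the deployment results in `output.yaml` and enables debug logs; both file paths are placeholders.
```bash
# Deploy from config.yaml, write the resulting deployment info to output.yaml, show debug logs
tfrobot deploy -c ./config.yaml -o output.yaml -d
```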
<h1> Configuration File</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Examples](#examples)
- [YAML Example](#yaml-example)
- [JSON Example](#json-example)
- [Create a Configuration File](#create-a-configuration-file)
***
## Introduction
To use TFROBOT, the user needs to create a YAML or a JSON configuration file that contains the mass deployment information, such as the group definitions, the number of VMs to deploy, and the compute, storage and network resources needed, as well as the user's credentials, such as the SSH public key, the network (main, test, dev, qa) and the TFChain mnemonics.
## Examples
We present here an example configuration file that selects a group of 3 nodes, each with 2 free vcores, 16 GB of free RAM, 100 GB of free SSD, 50 GB of free HDD and an IPv4 address. The same deployment is shown with a YAML file and with a JSON file. Parsing is based on the file extension: TFROBOT uses the JSON format if the file has a JSON extension and the YAML format otherwise.
You can use this example for guidance; make sure to replace placeholders and adapt the groups based on your actual project details. At a minimum, `ssh_key1` should be replaced by the user's SSH public key and `example-mnemonic` by the user's mnemonics.
Note that if no IPs are specified as true (IPv4 or IPv6), an Yggdrasil IP address will automatically be assigned to the VM, as at least one IP should be set to allow an SSH connection to the VM.
### YAML Example
```yaml
node_groups:
- name: group_a
nodes_count: 3
free_cpu: 2
free_mru: 16
free_ssd: 100
free_hdd: 50
dedicated: false
public_ip4: true
public_ip6: false
certified: false
region: europe
vms:
- name: examplevm123
vms_count: 5
node_group: group_a
cpu: 1
mem: 0.25
public_ip4: true
public_ip6: false
ssd:
- size: 15
mount_point: /mnt/ssd
flist: https://hub.grid.tf/tf-official-apps/base:latest.flist
entry_point: /sbin/zinit init
root_size: 0
ssh_key: example1
env_vars:
user: user1
pwd: 1234
ssh_keys:
example1: ssh_key1
mnemonic: example-mnemonic
network: dev
max_retries: 5
```
### JSON Example
```json
{
"node_groups": [
{
"name": "group_a",
"nodes_count": 3,
"free_cpu": 2,
"free_mru": 16,
"free_ssd": 100,
"free_hdd": 50,
"dedicated": false,
"public_ip4": true,
"public_ip6": false,
"certified": false,
"region": europe,
}
],
"vms": [
{
"name": "examplevm123",
"vms_count": 5,
"node_group": "group_a",
"cpu": 1,
"mem": 0.25,
"public_ip4": true,
"public_ip6": false,
"ssd": [
{
"size": 15,
"mount_point": "/mnt/ssd"
}
],
"flist": "https://hub.grid.tf/tf-official-apps/base:latest.flist",
"entry_point": "/sbin/zinit init",
"root_size": 0,
"ssh_key": "example1",
"env_vars": {
"user": "user1",
"pwd": "1234"
}
}
],
"ssh_keys": {
"example1": "ssh_key1"
},
"mnemonic": "example-mnemonic",
"network": "dev",
"max_retries": 5
}
```
## Create a Configuration File
You can start with the example above and adjust for your specific deployment needs.
- Create directory
```
mkdir tfrobot_deployments && cd $_
```
- Create configuration file and adjust with the provided example above
```
nano config.yaml
```
Once you've set your configuration file, all that's left is to deploy on the TFGrid. Read the next section for more information on how to deploy with TFROBOT.

View File

@ -1,68 +0,0 @@
<h1> Supported Configurations </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Config File](#config-file)
- [Node Group](#node-group)
- [Vms Groups](#vms-groups)
- [Disk](#disk)
***
## Introduction
When deploying with TFROBOT, you can set different configurations allowing for personalized deployments.
## Config File
| Field | Description| Supported Values|
| :---: | :---: | :---: |
| [node_group](#node-group) | description of all resources needed for each node_group | list of structs of type node_group |
| [vms](#vms-groups) | description of resources needed for deploying the groups of vms belonging to a node_group | list of structs of type vms |
| ssh_keys | map of ssh keys with key=name and value=the actual ssh key | map of string to string |
| mnemonic | mnemonic of the user | should be valid mnemonic |
| network | valid network of ThreeFold Grid networks | main, test, qa, dev |
| max_retries | number of retries for failed node groups | positive integer |
## Node Group
| Field | Description| Supported Values|
| :---: | :---: | :---: |
| name | name of node_group | node group name should be unique |
| nodes_count | number of nodes in node group| nonzero positive integer |
| free_cpu | number of cpu of node | nonzero positive integer max = 32 |
| free_mru | free memory in the node in GB | min = 0.25, max = 256 |
| free_ssd | free ssd storage in the node in GB | positive integer value |
| free_hdd | free hdd storage in the node in GB | positive integer value |
| dedicated | are nodes dedicated | `true` or `false` |
| public_ip4 | should the nodes have free ip v4 | `true` or `false` |
| public_ip6 | should the nodes have free ip v6 | `true` or `false` |
| certified | should the nodes be certified (if false, the nodes can be certified or DIY) | `true` or `false` |
| region | the continent the nodes are located in | africa, americas, antarctic, antarctic ocean, asia, europe, oceania, polar |
## Vms Groups
| Field | Description| Supported Values|
| :---: | :---: | :---: |
| name | name of vm group | string value with no special characters |
| vms_count | number of vms in vm group| nonzero positive integer |
| node_group | name of node_group the vm belongs to | should be defined in node_groups |
| cpu | number of cpu for vm | nonzero positive integer max = 32 |
| mem | free memory in the vm in GB | min = 0.25, max = 256 |
| planetary | should the vm have yggdrasil ip | `true` or `false` |
| public_ip4 | should the vm have free ip v4 | `true` or `false` |
| public_ip6 | should the vm have free ip v6 | `true` or `false` |
| flist | should be a link to valid flist | valid flist url with `.flist` or `.fl` extension |
| entry_point | entry point of the flist | path to the entry point in the flist |
| ssh_key | key of ssh key defined in the ssh_keys map | should be valid ssh_key defined in the ssh_keys map |
| env_vars | map of env vars | map of type string to string |
| ssd | list of disks | should be of type disk|
| root_size | root size in GB | 0 for default root size, max 10TB |
## Disk
| Field | Description| Supported Values|
| :---: | :---: | :---: |
| size | disk size in GB| positive integer min = 15 |
| mount_point | disk mount point | path to mountpoint |
<h1> Deployment </h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy Workloads](#deploy-workloads)
- [Delete Workloads](#delete-workloads)
- [Logs](#logs)
- [Using TFCMD with TFROBOT](#using-tfcmd-with-tfrobot)
- [Get Contracts](#get-contracts)
***
## Introduction
We present how to deploy workloads on the ThreeFold Grid using TFROBOT.
## Prerequisites
To deploy workloads on the TFGrid with TFROBOT, you first need to [install TFROBOT](./tfrobot_installation.md) on your machine and create a [configuration file](./tfrobot_config.md).
## Deploy Workloads
Once you've installed TFROBOT and created a configuration file, you can deploy on the TFGrid with the following command. Make sure to indicate the path to your configuration file.
```bash
tfrobot deploy -c ./config.yaml
```
## Delete Workloads
To delete the contracts, you can use the following line. Make sure to indicate the path to your configuration file.
```bash
tfrobot cancel -c ./config.yaml
```
## Logs
To ensure a complete log history, append `2>&1 | tee path/to/log/file` to the command being executed.
```bash
tfrobot deploy -c ./config.yaml 2>&1 | tee path/to/log/file
```
## Using TFCMD with TFROBOT
### Get Contracts
The TFCMD tool works well with TFROBOT, as it can be used to query the TFGrid. For example, you can see the contracts created by TFROBOT by running the following TFCMD command, provided you use the same mnemonics and the same network:
```bash
tfcmd get contracts
```
For more information on TFCMD, [read the documentation](../tfcmd/tfcmd.md).
<h1>Installation</h1>
<h2>Table of Contents</h2>
- [Introduction](#introduction)
- [Installation](#installation)
***
## Introduction
This section covers the basics on how to install TFROBOT (`tfrobot`).
TFROBOT is available as binaries. Make sure to download the latest release and to stay up to date with new releases.
## Installation
To install TFROBOT, simply download and extract the TFROBOT binaries to your path.
- Create a new directory for `tfgrid-sdk-go`
```
mkdir tfgrid-sdk-go
cd tfgrid-sdk-go
```
- Download latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases)
- ```
wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v0.14.4/tfgrid-sdk-go_Linux_x86_64.tar.gz
```
- Extract the binaries
- ```
tar -xvf tfgrid-sdk-go_Linux_x86_64.tar.gz
```
- Move `tfrobot` to any `$PATH` directory:
```bash
mv tfrobot /usr/local/bin
```
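To verify the installation, you can query the build tag with the `version` command (listed in [Commands and Flags](./tfrobot_commands_flags.md)):
```bash
tfrobot version
```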