Merge branch 'development' of git.ourworld.tf:tfgrid/info_tfgrid into development

2024-10-08 10:35:08 +04:00
236 changed files with 1750 additions and 365 deletions

View File

@@ -62,7 +62,7 @@ To use a dedicated node, you will have to reserve a 3node for yourself in your a
- See [here for more info about planet positive certification](certified_farming)
- Pricing is done based on cloud units, see [cloudunits](cloudunits)
!!!include:staking_discount_levels
!!wiki.include page:staking_discount_levels
!!!tfpriceinfo
!!tfpriceinfo

View File

@@ -70,7 +70,7 @@ su = hru / 1200 + sru * 0.8 / 200
- 1 SU costs 200 / 8 = 25 for SSD
<!-- !!!include:staking_farmed_tft -->
<!-- !!wiki.include page:staking_farmed_tft -->
## Change Log

View File

@@ -6,7 +6,7 @@ The ThreeFold DAO allows autonomous operation of the TFChain and TFGrid .
Amongst others the DAO needs to arrange
!!!wiki.include:utility_token_model
!!wiki.include page:utility_token_model
As well as
@@ -21,7 +21,7 @@ As well as
- rewards for sales channels, solution providers (v3.2+)
!!!wiki.include:dao_more_info
!!wiki.include page:dao_more_info
!!wiki.def alias:tf_dao,tfdao

View File

@@ -1,4 +1,4 @@
!!!wiki.include:token_what
!!wiki.include page:token_what
!!wiki.def alias:TFT,TFToken,Threefold_Token,TFGrid_Token,threefold_token name:TFT

Binary file changed (image, 43 KiB → 48 KiB).

Binary file added (image, 68 KiB).

Binary file added (image, 142 KiB).

Binary file changed (image, 50 KiB → 67 KiB).

Binary file changed (image, 52 KiB → 68 KiB).

View File

@@ -170,7 +170,7 @@ In this section, we cover how to make a BorgBackup on the Nextcloud VM and we al
In the section **Backup and restore**, you can set a [BorgBackup](https://www.borgbackup.org/) of your Nextcloud instance.
* Add a mount point and a directory name for your backup (e.g. **/mnt/backup**) and click **Submit backup location**.
* Add a mount point and a directory name for your backup (e.g. **/mnt/data/backup**) and click **Submit backup location**.
* After the creation of the backup location, write down the **encryption password for backups** somewhere safe and offline.
* Click **Create backup** to create a BorgBackup of your Nextcloud instance.
* This will stop all containers, run the backup container and create the backup.
@@ -188,7 +188,7 @@ After the first manual backup of your Nextcloud instance is complete, you can se
To allow for another layer of redundancy, you can set a secondary VM on the grid and make a daily backup from the BorgBackup of your Nextcloud instance to the secondary VM. The following shows how to do this. It is based on the [File Transfer section](system_administrators@@file_transfer) of the manual.
For the following, we take into account that the BorgBackup is located at `/mnt/backup` on the VM running Nextcloud.
For the following, we take into account that the BorgBackup is located at `/mnt/data/backup` on the VM running Nextcloud.
You will need to deploy a full VM on the TFGrid and SSH into this secondary VM.
@@ -249,7 +249,7 @@ nano /root/rsync_nextcloud_backup.sh
```
#!/bin/bash
sudo rsync -avz --progress --delete --log-file=/root/nextcloud_backup/rsync_nextcloud_storage.log /root/nextcloud_backup/ root@<Nextcloud_VM_IP_Address>:/mnt/backup
sudo rsync -avz --progress --delete --log-file=/root/nextcloud_backup/rsync_nextcloud_storage.log root@<Nextcloud_VM_IP_Address>:/mnt/data/backup /root/nextcloud_backup/
```
* Give permission to execute the script
```

View File

@@ -71,26 +71,30 @@ We show the steps to prepare the VM to run the network instance.
If you are deploying on testnet or devnet, simply replace `mainnet` by the proper network in the following lines.
- Set the prerequisites
```
apt update && apt install -y git nano ufw
```
- Download the ThreeFold Tech `grid_deployment` repository
```
git clone https://github.com/threefoldtech/grid_deployment
cd grid_deployment/docker-compose/mainnet
```
```
git clone https://github.com/threefoldtech/grid_deployment
cd grid_deployment/docker-compose/mainnet
```
- Generate a TFChain node key with `subkey`
- Note: If you deploy the 3 network instances, you can use the same node key for all 3 networks. But it is recommended to use 3 different keys to facilitate management.
```
echo .subkey_mainnet >> .gitignore
../subkey generate-node-key > .nodekey_mainnet
cat .nodekey_mainnet
```
```
echo .nodekey_mainnet >> .gitignore
../../apps/subkey generate-node-key > .nodekey_mainnet
cat .nodekey_mainnet
```
- Create and set the environment variables file
```
cp .secrets.env-example .secrets.env
```
```
cp .secrets.env-example .secrets.env
```
- Adjust the environment file
```
nano .secrets.env
```
```
nano .secrets.env
```
- To adjust the `.secrets.env` file, take into account the following:
- **DOMAIN**="example.com"
- Write your own domain

View File

@@ -1,4 +1,4 @@
# Certified Farming
!!!include:farming_certification_benefits
!!wiki.include page:farming_certification_benefits

View File

@@ -44,7 +44,7 @@
#### Terms and Conditions need to be signed
!!!include:farming_certification_terms_conditions
!!wiki.include page:farming_certification_terms_conditions
### Bandwidth Requirement for an archive/storage use case example

View File

@@ -16,5 +16,5 @@
- Certification report given by TFTech or partners to describe farming situation (H2 2021).
- see [farming certified requirements](farming_certified_requirements)
!!!include:farming_certification_benefits
!!wiki.include page:farming_certification_benefits

View File

@@ -61,4 +61,4 @@ By participating in the expansion of the ThreeFold Grid, Farmers earn TFT on a m
Learn more about Farming Rewards [here](@farming_reward). -->
!!!alias become_a_farmer
!!alias become_a_farmer

View File

@@ -44,7 +44,7 @@ Radical Self-Expression arises from the unique Gifts of the Individual.
No one other than the individual or a collaborating group can determine its content. It is offered as a Gift to others. In this spirit, the Giver should respect the rights and liberties of the recipient.
"We are to take Individual Self-Expression as far as WE possibly can and Dream into Reality a much better World!!! Express yourselves. Start doing the things you love. Life is simple. Open your heart, mind and arms to new things and new people. WE are United in our differences..."
"We are to take Individual Self-Expression as far as WE possibly can and Dream into Reality a much better World!! Express yourselves. Start doing the things you love. Life is simple. Open your heart, mind and arms to new things and new people. WE are United in our differences..."
## Community

View File

@@ -1,4 +1,4 @@
- [**Manual Home**](@manual3_home_new)
---------
**Get Started**
!!!include:getstarted_toc
<!-- !!wiki.include page:getstarted_toc -->

View File

@@ -32,8 +32,8 @@ You can deploy infrastructure-as-code with Pulumi and Terraform/OpenTofu.
You can use our Go and Javascript/TypeScript command line interface tools to deploy workloads on the grid.
- [Go Grid Client](developers@@grid3_go_readme)
- [TFCMD](developers@@tfcmd/tfcmd)
- [TFRobot](developers@@tfrobot/tfrobot)
- [TFCMD](developers@@tfcmd)
- [TFRobot](developers@@tfrobot)
- [TypeScript Grid Client](developers@@grid3_javascript_readme)
## GPU Workloads

View File

@@ -1 +1 @@
!!!include:terraform_readme
!!wiki.include page:terraform_readme

View File

@@ -3,10 +3,10 @@
<h2> Table of Contents </h2>
- [Using Scheduler](terraform_scheduler.md)
- [Virtual Machine](./terraform_vm.html)
- [Web Gateway](./terraform_vm_gateway.html)
- [Kubernetes Cluster](./terraform_k8s.html)
- [ZDB](./terraform_zdb.html)
- [Zlogs](./terraform_zlogs.md)
- [Virtual Machine](terraform_vm.md)
- [Web Gateway](terraform_vm_gateway.md)
- [Kubernetes Cluster](terraform_k8s.md)
- [ZDB](terraform_zdb.md)
- [Zlogs](terraform_zlogs.md)
- [Quantum Safe Filesystem](terraform_qsfs.md)
- [CapRover](./terraform_caprover.html)
- [CapRover](terraform_caprover.md)

View File

@@ -0,0 +1,34 @@
# Bootstrap
```mermaid
flowchart TD
subgraph TFV1[TF Validator Stack]
TFH(TFBoot Server)
TFR(TFRegistrar)
FARMP1(FarmingPool Coordinator)
end
subgraph TFV2[TF Validator Stack]
TFH2(TFBoot Server)
TFR2(TFRegistrar)
FARMP2(FarmingPool Coordinator)
end
subgraph ZOS1[Zero OS]
Kernel --- ZINIT(ZInit)
ZOS(Bootstrap on ZOS) --- Kernel
MY(Mycelium Agent)
ZINIT --> MY
TFH ---|Internet| ZOS
TFH2 --- ZOS
ZINIT -->|Mycelium| FARMP1
ZINIT --> |Mycelium| FARMP2
ZINIT -->TFR
ZINIT --> TFR2
end
```

View File

@@ -0,0 +1,61 @@
# Bootstrap ZOS
- automated build process for arm64, amd64
- recent kernel
- the configuration of zinit is done on the Bootstrap server (which registrars to connect to, the config file)
- download the initial needed binaries and config files from the bootstrap server
- when downloading the files, check an MD5 checksum and verify the signature with the specified public key of the bootstrap server
- this is done by means of a list (text file) which describes the files that need to be downloaded and their hashes
- the list is signed with the private key of the bootstrap server
- the binaries we need are zinit, mycelium, youki, ttyd and other minimum required components

When booting the node we specify the following, which is then part of the ISO or image created:

- md5 of the bootstrap config file
- public key of the bootstrap server (to verify the list of binary files)
- url of the Bootstrap server (can be http(s), ...)
- farmerid (no longer going to tfchain; for now just a value we put in zos and can query)
- farmingpoolid (for now just a value we put in zos and can query)
### the config file

Content:

- registrars, see the zinit config
- size of the root partition (if 0 then ramdisk)
- which mycelium nodes to connect to
- debug mode, see ttyd; if debug mode is set, specify which mycelium nodes can connect as debugger
- root ttyd (for debug purposes; not enabled by default, needs a passwd and mycelium only)
- description (so the user using bootstrap can see what is going on)
- different location (by name) of the binaries, e.g. useful for debugging
- ?

The person who sets up a bootstrap server (the admin) can specify different config files; when creating the image the user can choose which config file to use (if there is only one, there is no choice).

The admin can create more than 1 dir with the needed binary files. There is a tool which creates a list of the dir contents and signs this list with the private key of the bootstrap server.
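
A minimal sketch of what that list tool could do, assuming ed25519 signing keys (the spec does not fix the key type); the directory and output file names are illustrative:

```go
package main

import (
	"crypto/ed25519"
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// buildManifest walks dir and produces one "<md5>  <name>" line per file,
// matching the "list of files + hash" described above.
func buildManifest(dir string) (string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return "", err
	}
	manifest := ""
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		f, err := os.Open(filepath.Join(dir, e.Name()))
		if err != nil {
			return "", err
		}
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			f.Close()
			return "", err
		}
		f.Close()
		manifest += fmt.Sprintf("%s  %s\n", hex.EncodeToString(h.Sum(nil)), e.Name())
	}
	return manifest, nil
}

func main() {
	manifest, err := buildManifest("./binaries") // hypothetical dir with zinit, mycelium, ...
	if err != nil {
		panic(err)
	}
	// A real bootstrap server would load its persistent private key from disk.
	_, priv, _ := ed25519.GenerateKey(nil)
	sig := ed25519.Sign(priv, []byte(manifest))
	os.WriteFile("files.list", []byte(manifest), 0o644)
	os.WriteFile("files.list.sig", []byte(hex.EncodeToString(sig)), 0o644)
	fmt.Println("manifest written and signed")
}
```

The node would then verify files.list.sig with the public key baked into the image before trusting any hash in the list.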
## requirements
- good, decent logging to screen if the user asks to see it
- no need to download an flist to get through the bootstrap (the initial files are downloaded directly from the bootstrap server)
- small image
- everyone can build their own bootstrap server
## remarks
- maybe zinit should be part of the bootstrap image, and zinit can download the required binaries like ttyd, mycelium, youki
- maybe a small 1 GB root partition should be created to hold just these initial files, so they don't have to be downloaded all the time
## alternatives
- use the flist approach; we want to keep everything as simple as possible though, and this widens the components needed. If we use flists, they need to be part of the same bootstrap server, so that we basically have it all embedded in one easy-to-use component.

View File

@@ -0,0 +1,26 @@
# Implementation Details for the components
## TFBoot Server
- existing boot server with required add-ons
## TFRegistrar
- openrpc/vlang
- first version uses postgresql replicated cluster
## FarmingPool Coordinator
- openrpc/vlang
- first version uses postgresql replicated cluster
## Zinit
- started as part of bootstrap
- connects back to TFRegistrar, to see what needs to be done if anything
## Mycelium Agent
- starts from zinit

View File

@@ -0,0 +1,30 @@
## TFBoot Server
- ability to generate an ISO, a PXE bootable image, ... to boot a node
- the chosen server will boot over the network and connect back to the TFBoot server which created the image.
## TFRegistrar
- a service that will register the node on the TFGrid 4.x network
- is new for 4.x, no longer uses TFChain
- initial functions
- identity management (users, farmers, kyc/aml, peer review, ...)
- farming pool registrar (which farming pools exist, ...)
- bootstrap init (initial bootstrap of the network, also a fallback mechanism in case of disaster)
## FarmingPool Coordinator
- node management (register, unregister, ...)
- reward distribution (to farmers who are part of pool)
- marketplace: find cloud, AI, ... slices or other services
- yield management (make sure nodes are filled up as well as possible)
- fiat to TFT/INCA conversion (optional)
- monitoring for the nodes in the farmingpool
- service level management (SLA, ...)
## Mycelium Agent
- our overlay network which gives end-to-end encryption, connectivity, ...

View File

@@ -1,2 +1,18 @@
# Specifications TFGrid 4
# Specs for TFGrid 4 Bootstrap process
Our aim is to simplify the way we bootstrap our TFGrid 4. The bootstrap can run on existing Linux as well as on ZOS 4.
## WHY
- more modular development
- ready for slices, ...
- easier to debug
- no more tfchain
- ready for serverless functions
- ready for billing, ...
- ready to run on more platforms (windows, osx, linux, zos, ...) through zinit as base workload manager
This will also allow us to roll out agents (hero) much faster and more easily.

View File

@@ -0,0 +1,45 @@
# OpenRPC
We use OpenRPC on the Registrar and other services.
The OpenRPC interface is exposed over REST (and later over other protocols).
- rpc_in
  - rpc_id (string): a unique id for the rpc call
  - method_name
  - params (json as text, encrypted with the mycelium pubkey of the rpc server)
  - pubkey (string): the pubkey of the caller mycelium agent
  - signature (rpc_id+method_name+params+return_url+return_topic signed with the Mycelium Agent private key of the sender)
  - async (bool): whether the call should be async; if async, the result is sent back as a message over mycelium to the source, or to the return_url if one is given
  - return_url (string): the url to return the result to; optional, to asynchronously return the result to the sender (source)
  - return_topic (string): for the sender to know what the return is for
- rpc_return (for async return)
  - rpc_id (string): a unique id for the rpc call; needs to correspond to the rpc_in and source caller
  - method_name
  - pubkey (string): the pubkey of the rpc server
  - signature (the result is signed with the Mycelium Agent key of the server over: rpc_id+method_name+params+result)
  - topic (string)
  - result (json as text, encrypted with the mycelium pubkey of the source caller)
- rpc_check: returns the status of the rpc call, which is done, error, running or pending
  - rpc_id
  - signature (of rpc_id, from the caller)
- rpc_kill: stops the rpc if it is running and would e.g. take too long
  - rpc_id
  - signature (of rpc_id, from the caller)
Because of the signatures, only the caller can stop a call or check its status.

If a return_url is provided, the server will process the call and, if async, send the result back as json over mycelium to the source or over http to the return_url.

rpc_return is called by an rpc server to return results to the caller.
see [openrpc_openapi_spec](openrpc_openapi_spec.md) for the OpenRPC spec on top of Rest.
## Implementation Details
- the state of the calls on the server is kept in redis
- timeouts need to be implemented on the server
- encryption & signing are used to provide security (a caller-side sketch follows below)
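
To make the envelope concrete, here is a non-normative caller-side sketch in Go, assuming ed25519 keys for the Mycelium Agent and hex-encoded signatures (neither is fixed by the spec above):

```go
package main

import (
	"crypto/ed25519"
	"encoding/hex"
	"fmt"
)

// RPCIn mirrors the rpc_in envelope described above.
type RPCIn struct {
	RPCID       string `json:"rpc_id"`
	MethodName  string `json:"method_name"`
	Params      string `json:"params"` // json as text; encrypted for the server in the real flow
	PubKey      string `json:"pubkey"`
	Signature   string `json:"signature"`
	Async       bool   `json:"async"`
	ReturnURL   string `json:"return_url"`
	ReturnTopic string `json:"return_topic"`
}

// sign covers rpc_id+method_name+params+return_url+return_topic, per the spec.
func sign(priv ed25519.PrivateKey, c *RPCIn) {
	msg := c.RPCID + c.MethodName + c.Params + c.ReturnURL + c.ReturnTopic
	c.Signature = hex.EncodeToString(ed25519.Sign(priv, []byte(msg)))
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)
	call := &RPCIn{
		RPCID:       "12345",
		MethodName:  "get_data",
		Params:      `{"key":"value"}`,
		PubKey:      hex.EncodeToString(pub),
		Async:       true,
		ReturnURL:   "http://callback.url/result",
		ReturnTopic: "my_topic",
	}
	sign(priv, call)
	fmt.Printf("%+v\n", call)
}
```

The server would verify the signature against pubkey before decrypting params; rpc_return is signed the same way over rpc_id+method_name+params+result.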

View File

@@ -0,0 +1,286 @@
## openapi spec as endpoint for OpenRPC Rest Server
Copy-paste it into https://editor.swagger.io/ to explore:
```json
{
"openapi": "3.0.0",
"info": {
"title": "Mycelium Agent RPC API",
"description": "An API for Mycelium Agent for handling asynchronous tasks, returning results, and managing RPC call statuses.",
"version": "1.0.0"
},
"paths": {
"/rpc_in": {
"post": {
"summary": "Initiates an RPC call",
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/RPCIn"
}
}
}
},
"responses": {
"200": {
"description": "RPC initiated",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"rpc_id": {
"type": "string",
"description": "Unique identifier of the initiated RPC call"
}
}
}
}
}
}
}
}
},
"/rpc_return": {
"post": {
"summary": "Handles an asynchronous return of an RPC call result",
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/RPCReturn"
}
}
}
},
"responses": {
"200": {
"description": "RPC return received",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"confirmation": {
"type": "string",
"example": "RPC return successfully received"
}
}
}
}
}
}
}
}
},
"/rpc_check": {
"get": {
"summary": "Checks the status of an RPC call",
"parameters": [
{
"name": "rpc_id",
"in": "query",
"required": true,
"schema": {
"type": "string"
},
"description": "The unique identifier of the RPC call"
},
{
"name": "signature",
"in": "query",
"required": true,
"schema": {
"type": "string"
},
"description": "Signature of the rpc_id, signed by the caller"
}
],
"responses": {
"200": {
"description": "Status of the RPC call",
"content": {
"application/json": {
"schema": {
"type": "string",
"enum": ["done", "error", "running", "pending"]
}
}
}
}
}
}
},
"/rpc_kill": {
"post": {
"summary": "Stops an RPC call that is running",
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/RPCKill"
}
}
}
},
"responses": {
"200": {
"description": "RPC call stopped",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"confirmation": {
"type": "string",
"example": "RPC call 12345 has been stopped"
}
}
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"RPCIn": {
"type": "object",
"required": [
"rpc_id",
"method_name",
"params",
"pubkey",
"signature",
"async"
],
"properties": {
"rpc_id": {
"type": "string",
"description": "A unique identifier for the RPC call."
},
"method_name": {
"type": "string",
"description": "The name of the method being called."
},
"params": {
"type": "object",
"description": "The parameters for the method call in JSON format, is encrypted."
},
"pubkey": {
"type": "string",
"description": "The public key of the Mycelium Agent making the call."
},
"signature": {
"type": "string",
"description": "A signature of rpc_id + method_name + params + return_url + return_topic, signed with the Mycelium Agent private key."
},
"async": {
"type": "boolean",
"description": "Indicates whether the call should be asynchronous."
},
"return_url": {
"type": "string",
"nullable": true,
"description": "The URL to return the result. Optional, used when async is true."
},
"return_topic": {
"type": "string",
"description": "The topic for the sender to know what the return result is related to."
}
},
"example": {
"rpc_id": "12345",
"method_name": "get_data",
"params": { "key": "value" },
"pubkey": "abc123",
"signature": "signeddata",
"async": true,
"return_url": "http://callback.url/result",
"return_topic": "my_topic"
}
},
"RPCReturn": {
"type": "object",
"required": [
"rpc_id",
"method_name",
"params",
"pubkey",
"signature",
"topic",
"result"
],
"properties": {
"rpc_id": {
"type": "string",
"description": "The unique identifier of the RPC call, corresponding to the original call."
},
"method_name": {
"type": "string",
"description": "The name of the method being returned."
},
"params": {
"type": "object",
"description": "The parameters of the original method in JSON format."
},
"pubkey": {
"type": "string",
"description": "The public key of the RPC server."
},
"signature": {
"type": "string",
"description": "Signature of rpc_id + method_name + params + result, signed with the server's private key."
},
"topic": {
"type": "string",
"description": "The topic to identify the return message."
},
"result": {
"type": "object",
"description": "The result of the RPC call in JSON format."
}
},
"example": {
"rpc_id": "12345",
"method_name": "get_data",
"params": { "key": "value" },
"pubkey": "server_pubkey",
"signature": "signed_result_data",
"topic": "my_topic",
"result": { "data": "returned_value" }
}
},
"RPCKill": {
"type": "object",
"required": [
"rpc_id",
"signature"
],
"properties": {
"rpc_id": {
"type": "string",
"description": "The unique identifier of the RPC call to stop."
},
"signature": {
"type": "string",
"description": "The signature of the rpc_id, signed by the caller."
}
},
"example": {
"rpc_id": "12345",
"signature": "signed_rpc_id"
}
}
}
}
}
```
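
A minimal client sketch against the /rpc_check endpoint above; the base URL and signature value are placeholders:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// Checks the status of an RPC call via GET /rpc_check, per the spec above.
func main() {
	base := "http://localhost:8080" // hypothetical server address
	q := url.Values{}
	q.Set("rpc_id", "12345")
	q.Set("signature", "signed_rpc_id") // rpc_id signed by the caller's key
	resp, err := http.Get(base + "/rpc_check?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("status:", string(body)) // one of: done, error, running, pending
}
```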

View File

@@ -1,4 +0,0 @@
# test
- link [link](tfgrid4:architecture.md)

View File

@@ -0,0 +1,47 @@
# Zinit 2
- zinit will register over openrpc with TFRegistrar(s)
- zinit needs support for flists
- zinit needs support for runc
- zinit can modify its zinit unit files based on properly signed instructions from the TFRegistrar(s)
## multiplatform
- can run in ZOS4
- can run on top of Ubuntu 24.04 and Arch Linux (probably more later)
## config file
zinit2 can be started with the following config file; this tells zinit2 which registrars to talk to and to take instructions from them.
```json
{
  "TFRegistrarServers": [
    {
      "name": "Registrar1",
      "url": "http://192.168.1.1:8080",
      "pub_key": "abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890"
    },
    {
      "name": "Registrar2",
      "url": "http://192.168.1.2:8081",
      "pub_key": "fedcba0987654321fedcba0987654321fedcba0987654321fedcba0987654321"
    },
    {
      "name": "Registrar3",
      "url": "http://192.168.1.3:8082",
      "pub_key": "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
    }
  ],
  "min_servers_required_for_signing": 2,
  "debug": false
}
```
The url can be an IPv6 address or a hostname, and https is also supported.
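
A sketch of how zinit2 might load and sanity-check this file; the Go structs are illustrative (the real implementation is planned in V), and the config path is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// RegistrarServer mirrors one entry of TFRegistrarServers above.
type RegistrarServer struct {
	Name   string `json:"name"`
	URL    string `json:"url"`
	PubKey string `json:"pub_key"`
}

// Zinit2Config mirrors the config file shown above.
type Zinit2Config struct {
	TFRegistrarServers           []RegistrarServer `json:"TFRegistrarServers"`
	MinServersRequiredForSigning int               `json:"min_servers_required_for_signing"`
	Debug                        bool              `json:"debug"`
}

func main() {
	raw, err := os.ReadFile("zinit2.json") // hypothetical config path
	if err != nil {
		panic(err)
	}
	var cfg Zinit2Config
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	// The signing quorum can never exceed the number of configured registrars.
	if cfg.MinServersRequiredForSigning > len(cfg.TFRegistrarServers) {
		panic("quorum cannot be met with the configured registrars")
	}
	fmt.Printf("talking to %d registrars, quorum %d\n",
		len(cfg.TFRegistrarServers), cfg.MinServersRequiredForSigning)
}
```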
## implementation
Have a driver which uses zinit; keep zinit small; do it in V.

View File

@@ -0,0 +1,15 @@
# Zinit Flist
- add the ability to download flists and use them as rootfs for the zinit process
- an md5 of the flist can be set, and zinit will check whether the flist is still valid (see the sketch below)
- we need a workdir in zinit, which then is the root of the flist
- specify whether it is read-only or whether we allow writes (with overlayfs)
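
A sketch of that MD5 validity check; the path and hash are illustrative:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// checkFlist verifies a downloaded flist against the MD5 set in the unit file.
func checkFlist(path, wantMD5 string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == wantMD5, nil
}

func main() {
	ok, err := checkFlist("/var/cache/zinit/rootfs.flist", "d41d8cd98f00b204e9800998ecf8427e")
	if err != nil {
		panic(err)
	}
	fmt.Println("flist valid:", ok)
}
```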
## implementation
Have a driver which uses zinit; keep zinit small; do it in V.
## todo first
- make a tutorial on how to mount an flist, how to use the new flist hub, and how to mount it read-only or read-write
- check how to use runc to mount an flist

View File

@@ -0,0 +1,62 @@
## registration
Zinit needs tools to get this info and report it to the registrar.
```v
struct Registration {
	pub_key           string // public key of the node mycelium
	mycelium_address  string // the ipv6 addr
	capacity          Capacity
	pub_key_signature string // signed pubkey with TPM on motherboard
}

struct Capacity {
	memory_gb f64    // Memory size in GB
	disks     []Disk // List of disks (both SSDs and HDDs)
	cpu       CPU    // CPU description
}

struct CPU {
	cpu_type    CPUType // Enum for CPU type
	description string
	cpu_vcores  int // Number of CPU virtual cores
}

struct Disk {
	size_gb   f64      // Size of the disk in GB
	disk_type DiskType // Enum for disk type (SSD or HDD)
}

// Enum for disk types
enum DiskType {
	ssd
	hdd
}

// Enum for CPU types
enum CPUType {
	intel_xeon
	amd_epyc
	intel_core9
}
```
The registration is done to all known registrars using openrpc:
- register
- json payload
- ... see the openrpc spec rest server
Failsafe:

- zinit does this every hour on each known registrar; this will be used for the watchdog
- at start, zinit keeps trying all servers every 5 sec for at least 1h; once a registrar is found, it goes to maintenance mode (once an hour), as sketched below
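
A sketch of that retry/maintenance loop; registerOnce is a stub for the openrpc `register` call described above:

```go
package main

import (
	"fmt"
	"time"
)

// registerOnce is a stub for the openrpc `register` call with the
// Registration payload described above.
func registerOnce(registrar string) error {
	fmt.Println("registering on", registrar)
	return fmt.Errorf("stub: not implemented")
}

func main() {
	registrars := []string{"Registrar1", "Registrar2", "Registrar3"}
	interval := 5 * time.Second // startup phase: retry every 5 sec, for at least 1h
	for {
		found := false
		for _, r := range registrars {
			if registerOnce(r) == nil {
				found = true
			}
		}
		if found {
			interval = time.Hour // maintenance mode: re-register hourly (watchdog)
		}
		time.Sleep(interval)
	}
}
```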
## implementation
Have a driver which uses zinit; keep zinit small; do it in V.

View File

@@ -0,0 +1,24 @@
# Zinit RunC
- json as defined in https://github.com/opencontainers/runtime-spec (a sketch follows this list)
- use https://github.com/containers/youki to execute (maybe even better: integrate it in our zinit binary)
- allow flists in mounts (so we can mount an flist in a runc container)
- allow attaching https://github.com/tsl0922/ttyd to the runc container (can be a separate process in zinit)
- set a passwd, set the listening port & host interface (e.g. mycelium)
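
A sketch of reading such a runtime-spec config and injecting an flist mount, using the upstream runtime-spec Go types; the file path and mountpoint are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	raw, err := os.ReadFile("config.json") // an OCI bundle config, e.g. exported from podman
	if err != nil {
		panic(err)
	}
	var spec specs.Spec
	if err := json.Unmarshal(raw, &spec); err != nil {
		panic(err)
	}
	// An flist mounted by zinit could be exposed to the container as a
	// read-only bind mount; the source path is hypothetical.
	spec.Mounts = append(spec.Mounts, specs.Mount{
		Destination: "/data",
		Type:        "bind",
		Source:      "/mnt/flist/rootfs",
		Options:     []string{"rbind", "ro"},
	})
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```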
## todo first
- make a tutorial on how to use a runc spec and mount it
- make an example of how to use ttyd with runc
- make an example, e.g. postgresql running in runc, starting from our podman/buildah (hercontainers); we have that installer

Try this AI prompt:
```
can we export the runc json from podman, and how can we run it manually using runc ourselves
```
## implementation
Have a driver which uses zinit; keep zinit small; do it in V.

View File

@@ -0,0 +1,40 @@
## Units Config
- zinit is using unit files (which describe a service)
- the functionality described here allows zinit to reconfigure itself so we can always get out of issues
there are 3 main reasons to have this functionality
1. debug/development (only if debug flag is set in config)
2. fallback/disaster recovery
3. bootstrap, for the first initial setup, a node will receive the required setup in zinit, e.g. connect to farmingpool...
In normal mode, zinit will contact the registrar once an hour and check if there is something to be done.
There are 2 modes:

- maintenance: check every 1h
- active: check every 5 sec

Some principles:

- each instruction given to zero-os needs to be properly signed by the required number of registrars (a verification sketch follows the instruction list below)
- each instruction is an openrpc instruction where we use the openrpc return mechanism as described in [openrpc.md](openrpc.md).
## available instructions
- method_name: mode_active
  - params:
    - enddate: int (epoch time when to switch back to maintenance mode; can never be more than 2h ahead)
  - nothing to return
- method_name: zinit_list
  - returns the list of all units (service files) and their status
- method_name: zinit_set
  - set a zinit unit file (service file) with all possibilities (see the zinit specs)
- method_name: zinit_delete
  - name of the zinit unit file
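
A sketch of the quorum verification from the principles above, assuming ed25519 registrar keys and hex-encoded signatures (both are assumptions; the spec fixes neither):

```go
package main

import (
	"crypto/ed25519"
	"encoding/hex"
	"fmt"
)

// verifyQuorum checks that at least `min` of the known registrar keys
// produced a valid signature over the instruction payload.
func verifyQuorum(payload []byte, sigs map[string]string, keys map[string]ed25519.PublicKey, min int) bool {
	valid := 0
	for name, key := range keys {
		sigHex, ok := sigs[name]
		if !ok {
			continue
		}
		sig, err := hex.DecodeString(sigHex)
		if err != nil {
			continue
		}
		if ed25519.Verify(key, payload, sig) {
			valid++
		}
	}
	return valid >= min
}

func main() {
	// Two registrar keys, both signing the same zinit_set instruction.
	pub1, priv1, _ := ed25519.GenerateKey(nil)
	pub2, priv2, _ := ed25519.GenerateKey(nil)
	payload := []byte(`{"method_name":"zinit_set","params":{"unit":"mycelium"}}`)
	sigs := map[string]string{
		"Registrar1": hex.EncodeToString(ed25519.Sign(priv1, payload)),
		"Registrar2": hex.EncodeToString(ed25519.Sign(priv2, payload)),
	}
	keys := map[string]ed25519.PublicKey{"Registrar1": pub1, "Registrar2": pub2}
	fmt.Println("quorum met:", verifyQuorum(payload, sigs, keys, 2))
}
```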
## implementation
Have a driver which uses zinit; keep zinit small; do it in V.

View File

@@ -2,4 +2,4 @@
!!!tfgridsimulation_farming.node_wiki name:'1u'
!!tfgridsimulation_farming.node_wiki name:'1u'

View File

@@ -2,4 +2,4 @@
## Regional Internets
!!!tfgridsimulation_farming.regionalinternet_wiki name:'znz'
!!tfgridsimulation_farming.regionalinternet_wiki name:'znz'