<h1>API</h1>

<h2> Table of Contents </h2>

- [Introduction](#introduction)
- [Deployments](#deployments)
  - [Deploy](#deploy)
  - [Update](#update)
  - [Get](#get)
  - [Changes](#changes)
  - [Delete](#delete)
- [Statistics](#statistics)
- [Storage](#storage)
  - [List separate pools with capacity](#list-separate-pools-with-capacity)
- [Network](#network)
  - [List Wireguard Ports](#list-wireguard-ports)
  - [Supports IPV6](#supports-ipv6)
  - [List Public Interfaces](#list-public-interfaces)
  - [List Public IPs](#list-public-ips)
  - [Get Public Config](#get-public-config)
- [Admin](#admin)
  - [List Physical Interfaces](#list-physical-interfaces)
  - [Get Public Exit NIC](#get-public-exit-nic)
  - [Set Public Exit NIC](#set-public-exit-nic)
- [System](#system)
  - [Version](#version)
  - [DMI](#dmi)
  - [Hypervisor](#hypervisor)
- [GPUs](#gpus)
  - [List Gpus](#list-gpus)

***

## Introduction

This document lists all the actions available on the node public API, which is available over [RMB](https://github.com/threefoldtech/rmb-rs).

The node is always reachable over its twin id, as defined by the node object on tfchain. Once the node twin is known, a [client](https://github.com/threefoldtech/zos/blob/main/client/node.go) can be initiated and used to talk to the node.
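
As a quick illustration, and reusing the `rmb` and `client` Go packages shown in the deployment example of the [manual](./manual.md), initiating such a client looks roughly like this (a sketch, not a full program):

```go
// Minimal sketch based on the deployment example in the ZOS manual.
// nodeTwinID is the twin id of the target node as registered on tfchain.
cl, err := rmb.Default() // a local RMB setup (yggdrasil + rmb) must be running
if err != nil {
    panic(err)
}
node := client.NewNodeClient(nodeTwinID, cl)
_ = node // `node` can now invoke the calls documented below
```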

## Deployments

### Deploy

| command | body | return |
|---|---|---|
| `zos.deployment.deploy` | [Deployment](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/deployment.go) | - |

The deployment needs to have a valid signature, and the contract must exist on chain with a contract hash that matches the deployment.

### Update

| command | body | return |
|---|---|---|
| `zos.deployment.update` | [Deployment](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/deployment.go) | - |

The update call updates (modifies) an already existing deployment with a new definition. The deployment must already exist on the node, the contract must carry the hash of the new deployment, and the versions must be valid.

> TODO: need more details on the deployment update call and how the version is handled

### Get

| command | body | return |
|---|---|---|
| `zos.deployment.get` | `{contract_id: <id>}` | [Deployment](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/deployment.go) |

### Changes

| command | body | return |
|---|---|---|
| `zos.deployment.changes` | `{contract_id: <id>}` | `[]Workloads` |

Where:

- [workload](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/workload.go)

The list contains all deployment workload changes, which means a workload can (and will) appear multiple times in this list, once for each workload state change.

A workload first appears in the `init` state; the next entry shows the state change (with its time) to the next state, which can be success or failure, and so on. This happens for each workload in the deployment.
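
For illustration, a client could walk this change log roughly as follows. This is only a sketch: it assumes the node client wraps `zos.deployment.changes` in a `DeploymentChanges` method and that each workload carries its `Result` (state, time, error) as in the workload definition linked above; check the client sources for the exact names.

```go
// Sketch only: DeploymentChanges is assumed to wrap zos.deployment.changes.
changes, err := node.DeploymentChanges(ctx, contractID)
if err != nil {
    panic(err)
}
// Each workload appears once per state transition, in order.
for _, wl := range changes {
    fmt.Printf("%s (%s) -> %s at %v %s\n",
        wl.Name, wl.Type, wl.Result.State, wl.Result.Created, wl.Result.Error)
}
```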

### Delete

> You probably never need to call this command yourself; the node deletes the deployment once the contract is cancelled on the chain.

| command | body | return |
|---|---|---|
| `zos.deployment.delete` | `{contract_id: <id>}` | - |

## Statistics

| command | body | return |
|---|---|---|
| `zos.statistics.get` | - | `{total: Capacity, used: Capacity, system: Capacity}` |

Where:

```json
Capacity {
    "cru": "uint64",
    "sru": "bytes",
    "hru": "bytes",
    "mru": "bytes",
    "ipv4u": "uint64",
}
```

> Note that the `used` capacity equals the full workload reserved capacity PLUS the system reserved capacity,
so `used = user_used + system`, while `system` is only the amount of resources reserved by `zos` itself.
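
For clarity, here is a small sketch of how a client could derive the user-only share from the returned values. The `Capacity` struct below is only an illustrative mirror of the shape above (the canonical type lives in the zos `gridtypes` package):

```go
// Illustrative mirror of the Capacity shape returned by zos.statistics.get.
type Capacity struct {
    CRU   uint64 `json:"cru"`
    SRU   uint64 `json:"sru"` // bytes
    HRU   uint64 `json:"hru"` // bytes
    MRU   uint64 `json:"mru"` // bytes
    IPV4U uint64 `json:"ipv4u"`
}

// userUsed derives the capacity reserved by user workloads alone,
// following used = user_used + system from the note above.
func userUsed(used, system Capacity) Capacity {
    return Capacity{
        CRU:   used.CRU - system.CRU,
        SRU:   used.SRU - system.SRU,
        HRU:   used.HRU - system.HRU,
        MRU:   used.MRU - system.MRU,
        IPV4U: used.IPV4U - system.IPV4U,
    }
}
```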

## Storage

### List separate pools with capacity

| command | body | return |
|---|---|---|
| `zos.storage.pools` | - | `[]Pool` |

Lists all node pools with their type, size and used space, where

```json
Pool {
    "name": "pool-id",
    "type": "(ssd|hdd)",
    "size": <size in bytes>,
    "used": <used in bytes>
}
```

## Network

### List Wireguard Ports

| command | body | return |
|---|---|---|
| `zos.network.list_wg_ports` | - | `[]uint16` |

Lists all `reserved` ports on the node that can't be used for wireguard networks. A user then needs to find a free port that is not in this list to use for their network.
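
As a sketch of that selection step (plain Go, assuming only the standard `errors` package; the scanned port range is an arbitrary example, not a zos requirement):

```go
// pickFreeWGPort returns a port that does not appear in the reserved list
// returned by zos.network.list_wg_ports.
func pickFreeWGPort(reserved []uint16) (uint16, error) {
    taken := make(map[uint16]struct{}, len(reserved))
    for _, p := range reserved {
        taken[p] = struct{}{}
    }
    for p := uint16(2000); p < 10000; p++ { // example range only
        if _, ok := taken[p]; !ok {
            return p, nil
        }
    }
    return 0, errors.New("no free wireguard port found")
}
```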

### Supports IPV6

| command | body | return |
|---|---|---|
| `zos.network.has_ipv6` | - | `bool` |

### List Public Interfaces

| command | body | return |
|---|---|---|
| `zos.network.interfaces` | - | `map[string][]IP` |

Lists the node IPs. This is public information, mainly used to show the node yggdrasil IP and the `zos` interface.

### List Public IPs

| command | body | return |
|---|---|---|
| `zos.network.list_public_ips` | - | `[]IP` |

Lists all user-deployed public IPs that are served by this node.

### Get Public Config

| command | body | return |
|---|---|---|
| `zos.network.public_config_get` | - | `PublicConfig` |

Where

```json
PublicConfig {
    "type": "string", // always vlan
    "ipv4": "CIDR",
    "ipv6": "CIDR",
    "gw4": "IP",
    "gw6": "IP",
    "domain": "string",
}
```

Returns the node public config, or an error if it is not set. If a node has a public config it means it can act as an access node to user private networks.

## Admin

The next set of commands can ONLY be called by the `farmer`.

### List Physical Interfaces

| command | body | return |
|---|---|---|
| `zos.network.admin.interfaces` | - | `map[string]Interface` |

Where

```json
Interface {
    "ips": ["ip"],
    "mac": "mac-address",
}
```

Lists ALL node physical interfaces. Those interfaces can then be used as input to `set_public_nic`.

### Get Public Exit NIC

| command | body | return |
|---|---|---|
| `zos.network.admin.get_public_nic` | - | `ExitDevice` |

Where

```json
ExitInterface {
    "is_single": "bool",
    "is_dual": "bool",
    "dual_interface": "name",
}
```

Returns the interface used by public traffic (for user workloads).

### Set Public Exit NIC

| command | body | return |
|---|---|---|
| `zos.network.admin.set_public_nic` | `name` | - |

`name` must be one of the (free) names returned by `zos.network.admin.interfaces`.
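
Put together, a farmer tool could drive the two admin calls roughly like this. This is only a sketch: `callAdmin` is a hypothetical helper standing in for however the farmer issues RMB commands to the node twin (for example through the zos node client); it is not a real API.

```go
// callAdmin is a hypothetical placeholder for issuing an RMB command to the
// node twin and decoding the JSON reply; replace it with your RMB client.
var callAdmin func(cmd string, payload interface{}, result interface{}) error

// 1. list the physical interfaces (zos.network.admin.interfaces)
var ifaces map[string]struct {
    IPs []string `json:"ips"`
    Mac string   `json:"mac"`
}
if err := callAdmin("zos.network.admin.interfaces", nil, &ifaces); err != nil {
    panic(err)
}

// 2. pick one of the returned free interface names, e.g. "eth1",
//    and make it the public exit NIC (zos.network.admin.set_public_nic)
if err := callAdmin("zos.network.admin.set_public_nic", "eth1", nil); err != nil {
    panic(err)
}
```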

## System

### Version

| command | body | return |
|---|---|---|
| `zos.system.version` | - | `{zos: string, zinit: string}` |

### DMI

| command | body | return |
|---|---|---|
| `zos.system.dmi` | - | [DMI](https://github.com/threefoldtech/zos/blob/main/pkg/capacity/dmi/dmi.go) |

### Hypervisor

| command | body | return |
|---|---|---|
| `zos.system.hypervisor` | - | `string` |

## GPUs

### List Gpus

| command | body | return |
|---|---|---|
| `zos.gpu.list` | - | `[]GPU` |

Where

```json
GPU {
    "id": "string",
    "vendor": "string",
    "device": "string",
    "contract": "uint64",
}
```

Lists all available node GPUs, if any exist.

# `gateway-fqdn-proxy` type

This creates a proxy with the given FQDN to the given backends. In this case the user must configure their DNS server (e.g. name.com) to point to the correct node public IP.

Full fqdn-proxy workload data is defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/gw_fqdn.go)

# `gateway-name-proxy` type

This creates a proxy with the given name to the given backends. The `name` of the proxy must be owned by a name contract on the grid. The idea is that a user can reserve a name (e.g. `example`). Later they can deploy a gateway workload with the name `example` on any gateway node that points to the specified backends. The name is then prefixed to the gateway domain. For example, if the gateway domain is `gent0.freefarm.com` then your full FQDN is going to be `example.gent0.freefarm.com`.

Full name-proxy workload data is defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/gw_name.go)

# `ip` type

The IP workload type reserves an IP from the available contract IPs list. This means that on contract creation the user must specify the number of public IPs they need to use. The contract then allocates this number of IPs from the farm and keeps them on the contract.

When the user then adds IP workloads to the deployment associated with this contract, each IP workload picks and links to one IP from the contract.

In its minimal form, the `IP` workload does not require any data. But in reality it has 2 flags to pick which kind of public IP you want:

- `ipv4` (`bool`): pick one from the contract public IPv4 addresses
- `ipv6` (`bool`): pick an IPv6 over SLAAC. IPv6 addresses are not reserved with a contract; they are basically free if the farm infrastructure allows IPv6 over SLAAC.

Full `IP` workload definition can be found [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/ipv4.go)

<h1> ZOS Manual</h1>

<h2> Table of Contents </h2>

- [Introduction](#introduction)
- [Farm? Network? What are these?](#farm-network-what-are-these)
- [Creating a farm](#creating-a-farm)
- [Interaction](#interaction)
- [Deployment](#deployment)
  - [Workload](#workload)
  - [Types](#types)
  - [API](#api)
- [Raid Controller Configuration](#raid-controller-configuration)

***

## Introduction

This document explains the usage of `ZOS`. `ZOS`, usually pronounced "zero OS", got its name from the idea of zero configuration: after the initial `minimal` configuration, which only includes which `farm` to join and which `network` (`development`, `testing`, or `production`) to use, the owner of the node does not have to do anything more, and the node works fully autonomously.

The farmer themselves cannot control the node, or access it by any means. The only way you can interact with a node is via its public API.

## Farm? Network? What are these?

`zos` is built to allow people to run `workloads` around the world. This is simply enabled by allowing 3rd-party data centers to run `ZOS` on their hardware. A user can then find any nearby `farm` (which is what we call a cluster of nodes that belong to the same `farmer`) and choose to deploy capacity on that node/farm. A `farm` can consist of one or more nodes.

So what is a `network`? To allow developers to build `zos` itself, and to make it available during the early stages of development for testers and other enthusiastic people to try out, we created 3 `networks`:

- `development`: This is used mainly by developers to test their work. It is still available for users to deploy their capacity on (for really cheap prices), but at the same time there is no guarantee that it's stable or that data loss or corruption won't happen. Also, the entire network can be reset with no heads-up.
- `testing`: Once new features are developed and well tested on the `development` network they are released to the `testing` environment. This is also available for users at a slightly higher price than the `development` network, but it's much more stable. In theory this network is stable and there should be no resets; issues on this network are usually not fatal, but partial data loss can still occur.
- `production`: As the name indicates, this is the most stable network (also full price). Once new features are fully tested on the `testing` network they are released on `production`.

## Creating a farm

While this is outside the scope of this document, here is a [link](https://library.threefold.me/info/manual/#/manual__create_farm).

## Interaction

`ZOS` provides a simple `API` that can be used to:

- Query node runtime information
  - Network information
    - Free `wireguard` ports
  - Get public configuration
  - System version
  - Other (check the client for details)
- Deployment management (more on that later)
  - Create
  - Update
  - Delete

Note that the `zos` API is available over the `rmb` protocol. `rmb`, which stands for `reliable message bus`, is a simple messaging protocol that enables peer-to-peer communication over the `yggdrasil` network. Please check [`rmb`](https://github.com/threefoldtech/rmb) for more information.

Simply put, `RMB` allows 2 entities to communicate securely knowing only their `id`; an id is linked to a public key on the blockchain, hence messages are verifiable via a signature.

To be able to contact the node directly you need to run

- `yggdrasil`
- `rmb` (correctly configured)

Once you have those running you can contact the node over `rmb`. For a reference implementation (function names and parameters) please refer to the [RMB documentation](../../rmb/rmb_toc.md).

Here is a rough example of how low-level creation of a deployment is done.

```go
cl, err := rmb.Default()
if err != nil {
    panic(err)
}
```

Then create an instance of the node client:

```go
node := client.NewNodeClient(NodeTwinID, cl)
```

Define your deployment object:

```go
dl := gridtypes.Deployment{
    Version: Version,
    TwinID:  Twin, //LocalTwin,
    // this contract id must match the one on substrate
    Workloads: []gridtypes.Workload{
        network(),  // network workload definition
        zmount(),   // zmount workload definition
        publicip(), // public ip definition
        zmachine(), // zmachine definition
    },
    SignatureRequirement: gridtypes.SignatureRequirement{
        WeightRequired: 1,
        Requests: []gridtypes.SignatureRequest{
            {
                TwinID: Twin,
                Weight: 1,
            },
        },
    },
}
```

Compute the hash:

```go
hash, err := dl.ChallengeHash()
if err != nil {
    panic("failed to create hash")
}
fmt.Printf("Hash: %x\n", hash)
```

Create the contract on `substrate` and get the `contract id`, then you can link the deployment to the contract and send it to the node.

```go
dl.ContractID = 11 // from substrate
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
err = node.DeploymentDeploy(ctx, dl)
if err != nil {
    panic(err)
}
```

Once the node receives the deployment, it fetches the contract (using the contract id) from the chain, recomputes the deployment hash and compares it with the one set on the contract. If they match, the node proceeds to process the deployment.

## Deployment

A deployment is a set of workloads that are contextually related. Workloads in the same deployment can reference other workloads in the same deployment, but can't be referenced from another deployment, except for the network workload, which can be referenced from a different deployment as long as it belongs to the same user.

Workloads have unique IDs (per deployment) that are set by the user, so the user can create multiple workloads and then reference them by their given IDs (`names`).

For example, a deployment can define

- A private network with id `net`
- A disk with id `data`
- A public IP with id `ip`
- A container that uses:
  - The disk, mounted e.g. as `mount: {data: /mount/path}`.
  - The public IP, assigned to it by referencing the IP with id `ip`.
  - etc.

### Workload

Each workload has a type which is associated with some data. So the minimal definition of a workload contains:

- `name`: a unique name per deployment (id)
- `type`: the workload type
- `data`: workload data appropriate for the selected type

```go
// Workload struct
type Workload struct {
    // Version is version of reservation object. On deployment creation, version must be 0
    // then only workloads that need to be updated must match the version of the deployment object.
    // if a deployment update message is sent to a node it does the following:
    // - validate deployment version
    // - check workloads list, if a version is not matching the new deployment version, the workload is untouched
    // - if a workload version is same as deployment, the workload is "updated"
    // - if a workload is removed, the workload is deleted.
    Version uint32 `json:"version"`
    // Name is unique workload name per deployment (required)
    Name Name `json:"name"`
    // Type of the reservation (container, zdb, vm, etc...)
    Type WorkloadType `json:"type"`
    // Data is the reservation type arguments.
    Data json.RawMessage `json:"data"`
    // Metadata is user specific meta attached to deployment, can be used to link this
    // deployment to other external systems for automation
    Metadata string `json:"metadata"`
    // Description is a human readable description of the workload
    Description string `json:"description"`
    // Result of reservation, set by the node
    Result Result `json:"result"`
}
```
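
For illustration, here is a minimal sketch of what a single workload entry could look like in Go. It assumes the `gridtypes` and `gridtypes/zos` packages from the zos repository; the helper and constant names used below (`MustMarshal`, `ZMountType`, `Gigabyte`) are assumptions to be checked against the linked sources.

```go
// A minimal sketch: a 10 GiB zmount (disk) workload entry named "data".
disk := gridtypes.Workload{
    Version:     0,
    Name:        gridtypes.Name("data"),
    Type:        zos.ZMountType, // assumed workload type constant
    Description: "root disk for a zmachine",
    Data: gridtypes.MustMarshal(zos.ZMount{
        Size: 10 * gridtypes.Gigabyte, // assumed unit helper
    }),
}
// `disk` would then be appended to the deployment's Workloads list shown earlier.
```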

### Types

- Virtual machine related
  - [`network`](./workload_types.md#network-type)
  - [`ip`](./workload_types.md#ip-type)
  - [`zmount`](./workload_types.md#zmount-type)
  - [`zmachine`](./workload_types.md#zmachine-type)
  - [`zlogs`](./workload_types.md#zlogs-type)
- Storage related
  - [`zdb`](./workload_types.md#zdb-type)
  - [`qsfs`](./workload_types.md#qsfs-type)
- Gateway related
  - [`gateway-name-proxy`](./workload_types.md#gateway-name-proxy-type)
  - [`gateway-fqdn-proxy`](./workload_types.md#gateway-fqdn-proxy-type)

### API

The node is always connected to the RMB network with the node `twin`. This means the node is always reachable over RMB with the node `twin-id` as an address.

The [node client](https://github.com/threefoldtech/zos/blob/main/client/node.go) should have a complete list of all available functions. Documentation of the API can be found [here](./api.md).

## Raid Controller Configuration

The goal of 0-OS is to expose raw capacity, so it is best to give it the most direct access to the disks possible. In the case of RAID controllers, it is best to set them up in [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#JBOD) mode if available.

# `network` type

A private network can span multiple nodes at the same time, which means workloads (`VMs`) that live on different nodes but are part of the same virtual network can still reach each other over this `private` network.

If one (or more) nodes are `public access nodes`, you can also add your personal laptop to the network and be able to reach your `VMs` over the `wireguard` network.

In its simplest form a network workload consists of:

- the network range
- the sub-range available on this node
- a private key
- a list of peers
  - each peer has a public key
  - and a sub-range

Full network definition can be found [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/network.go)

# `qsfs` type

`qsfs`, short for `quantum safe file system`, is a FUSE filesystem which aims to support unlimited local storage, with remote backends used for offload and backup, and which cannot be broken even by a quantum computer. Please read about it [here](https://github.com/threefoldtech/quantum-storage)

To create a `qsfs` workload you need to provide the workload data as defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/qsfsd/qsfs.go)

<h1> Workload Types </h1>

<h2> Table of Contents </h2>

- [Introduction](#introduction)
- [Virtual Machine](#virtual-machine)
  - [`network` type](#network-type)
  - [`ip` type](#ip-type)
  - [`zmount` type](#zmount-type)
  - [`zmachine` type](#zmachine-type)
    - [Building your `flist`](#building-your-flist)
  - [`zlogs` type](#zlogs-type)
- [Storage](#storage)
  - [`zdb` type](#zdb-type)
  - [`qsfs` type](#qsfs-type)
- [Gateway](#gateway)
  - [`gateway-name-proxy` type](#gateway-name-proxy-type)
  - [`gateway-fqdn-proxy` type](#gateway-fqdn-proxy-type)

## Introduction

Each workload has a type which is associated with some data. We present here the different workload types associated with Zero-OS.

## Virtual Machine

### `network` type

A private network can span multiple nodes at the same time, which means workloads (`VMs`) that live on different nodes but are part of the same virtual network can still reach each other over this `private` network.

If one (or more) nodes are `public access nodes`, you can also add your personal laptop to the network and be able to reach your `VMs` over the `wireguard` network.

In its simplest form a network workload consists of:

- the network range
- the sub-range available on this node
- a private key
- a list of peers
  - each peer has a public key
  - and a sub-range

Full network definition can be found [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/network.go)

### `ip` type

The IP workload type reserves an IP from the available contract IPs list. This means that on contract creation the user must specify the number of public IPs they need to use. The contract then allocates this number of IPs from the farm and keeps them on the contract.

When the user then adds IP workloads to the deployment associated with this contract, each IP workload picks and links to one IP from the contract.

In its minimal form, the `IP` workload does not require any data. But in reality it has 2 flags to pick which kind of public IP you want:

- `ipv4` (`bool`): pick one from the contract public IPv4 addresses
- `ipv6` (`bool`): pick an IPv6 over SLAAC. IPv6 addresses are not reserved with a contract; they are basically free if the farm infrastructure allows IPv6 over SLAAC.

Full `IP` workload definition can be found [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/ipv4.go)

### `zmount` type

A `zmount` is a local disk that can be attached directly to a container or a virtual machine. `zmount` only requires `size` as input, as defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zmount.go). This workload type is only utilized via the `zmachine` workload.

### `zmachine` type

`zmachine` is a unified container/virtual machine type. This can be used to start a virtual machine on a `zos` node given the following:

- An `flist`: this is what provides the base `vm` image or container image.
  - The `flist` content is what determines the `zmachine` mode. An `flist` built from a docker image, or one that only contains files and executable binaries, will run in container mode. `ZOS` will inject its own `kernel+initramfs` to run the workload and kick-start the defined `flist` `entrypoint`.
- A private network to join (with an assigned IP)
- Optional public `ipv4` or `ipv6`
- Optional disks. At least one disk is required when running `zmachine` in `vm` mode; it is used to hold the `vm` root image.

For more details on all parameters needed to run a `zmachine` please refer to the [`zmachine` data](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zmachine.go)

#### Building your `flist`

Please refer to [this document](./manual.md) on how to build a compatible `zmachine` `flist`.

### `zlogs` type

Zlogs is a utility workload that allows you to stream `zmachine` logs to a remote location.

The `zlogs` workload needs to know which `zmachine` to stream logs from, and the `target` location to stream the logs to. `zlogs` internally uses [`tailstream`](https://github.com/threefoldtech/tailstream), so it supports any streaming url that is supported by this utility.

A `zlogs` workload runs inside the same private network as the `zmachine` instance, which means `zlogs` can stream logs to other `zmachines` that are running inside the same private network (possibly on different nodes).

For example, you can run [`logagg`](https://github.com/threefoldtech/logagg), which is a web-socket server that can work with the `tailstream` web-socket protocol.

Check the `zlogs` configuration [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zlogs.go)

## Storage

### `zdb` type

`zdb` is a storage primitive that gives you a persisted key-value store over the RESP protocol. Please check the [`zdb` docs](https://github.com/threefoldtech/0-db)

Please check [here](https://github.com/threefoldtech/zos/blob/main/pkg/zdb/zdb.go) for the workload data.

### `qsfs` type

`qsfs`, short for `quantum safe file system`, is a FUSE filesystem which aims to support unlimited local storage, with remote backends used for offload and backup, and which cannot be broken even by a quantum computer. Please read about it [here](https://github.com/threefoldtech/quantum-storage)

To create a `qsfs` workload you need to provide the workload data as defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/qsfsd/qsfs.go)

## Gateway

### `gateway-name-proxy` type

This creates a proxy with the given name to the given backends. The `name` of the proxy must be owned by a name contract on the grid. The idea is that a user can reserve a name (e.g. `example`). Later they can deploy a gateway workload with the name `example` on any gateway node that points to the specified backends. The name is then prefixed to the gateway domain. For example, if the gateway domain is `gent0.freefarm.com` then your full FQDN is going to be `example.gent0.freefarm.com`.
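
To make that naming rule concrete, here is a trivial sketch (the gateway domain below is just the example value from the text; in practice it depends on the gateway node's configuration):

```go
// The reserved name is prefixed to the gateway's domain.
name := "example"                     // owned by a name contract
gatewayDomain := "gent0.freefarm.com" // example gateway domain
fqdn := name + "." + gatewayDomain
fmt.Println(fqdn) // prints: example.gent0.freefarm.com
```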

Full name-proxy workload data is defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/gw_name.go)

### `gateway-fqdn-proxy` type

This creates a proxy with the given FQDN to the given backends. In this case the user must configure their DNS server (e.g. name.com) to point to the correct node public IP.

Full fqdn-proxy workload data is defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/gw_fqdn.go)

# `zdb` type

`zdb` is a storage primitive that gives you a persisted key-value store over the RESP protocol. Please check the [`zdb` docs](https://github.com/threefoldtech/0-db)

Please check [here](https://github.com/threefoldtech/zos/blob/main/pkg/zdb/zdb.go) for the workload data.

# `zlogs` type

Zlogs is a utility workload that allows you to stream `zmachine` logs to a remote location.

The `zlogs` workload needs to know which `zmachine` to stream logs from, and the `target` location to stream the logs to. `zlogs` internally uses [`tailstream`](https://github.com/threefoldtech/tailstream), so it supports any streaming url that is supported by this utility.

A `zlogs` workload runs inside the same private network as the `zmachine` instance, which means `zlogs` can stream logs to other `zmachines` that are running inside the same private network (possibly on different nodes).

For example, you can run [`logagg`](https://github.com/threefoldtech/logagg), which is a web-socket server that can work with the `tailstream` web-socket protocol.

Check the `zlogs` configuration [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zlogs.go)

# Cloud console

- `cloud-console` is a tool to view machine logs and interact with the machine you have deployed
- It always runs on the machine's private network gateway IP, with a port number equal to `20000 + last octet` of the machine's private IP
- For example, if the machine IP is `10.20.2.2/24` this means
  - `cloud-console` is running on `10.20.2.1:20002`
- For `cloud-console` to run, the cloud-hypervisor must be started with the option `--serial pty` instead of `tty`; this allows another process (`cloud-console` in our case) to interact with the vm
- To be able to connect to the web console you should first start wireguard to connect to the private network

```
wg-quick up wireguard.conf
```

- Then open the network router IP plus the console port, `10.20.2.1:20002`, in your browser
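
As a small illustration of the port rule above, here is a sketch that computes the console address from the machine's private IP. It assumes a /24 private subnet (as in the example) and only the standard `net` and `fmt` packages:

```go
// consoleAddr computes the cloud-console address for a machine private IP:
// the gateway IP of the /24, with port 20000 + the machine's last octet.
func consoleAddr(machineIP string) (string, error) {
    ip := net.ParseIP(machineIP).To4()
    if ip == nil {
        return "", fmt.Errorf("invalid IPv4 address: %s", machineIP)
    }
    gw := net.IPv4(ip[0], ip[1], ip[2], 1)
    return fmt.Sprintf("%s:%d", gw, 20000+int(ip[3])), nil
}

// consoleAddr("10.20.2.2") returns "10.20.2.1:20002"
```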

# `zmachine` type

`zmachine` is a unified container/virtual machine type. This can be used to start a virtual machine on a `zos` node given the following:

- An `flist`: this is what provides the base `vm` image or container image.
  - The `flist` content is what determines the `zmachine` mode. An `flist` built from a docker image, or one that only contains files and executable binaries, will run in container mode. `ZOS` will inject its own `kernel+initramfs` to run the workload and kick-start the defined `flist` `entrypoint`.
- A private network to join (with an assigned IP)
- Optional public `ipv4` or `ipv6`
- Optional disks. At least one disk is required when running `zmachine` in `vm` mode; it is used to hold the `vm` root image.

For more details on all parameters needed to run a `zmachine` please refer to the [`zmachine` data](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zmachine.go)

## Building your `flist`

Please refer to [this document](../manual.md) on how to build a compatible `zmachine` `flist`.

# Zmachine

A `Zmachine` is an instance of virtual compute capacity. There are 2 kinds of Zmachines.
One is a `VM`, standard in cloud environments. Next to this it can also be a `container`.
On the Zos level, both of these are implemented as virtual machines. Depending on
the context, it will be considered to be either a VM or a container. In either
scenario, the `Zmachine` is started from an `Flist`.

> Note, both VM and Container on ZOS are actually served as Virtual Machines. The
only difference is that if you are running in VM mode, you only need to provide a **raw**
disk image (image.raw) in your flist.

## Container

A container is meant to host a `microservice`. The `microservice` architecture generally
dictates that each service should be run in its own container (therefore providing
a level of isolation), and communicate with other containers it depends on over the
network.

Similar to docker, in Zos a container is actually also run in a virtualized environment.
Like with containers, some setup is done on behalf of the user. After this setup is done,
the user's `entrypoint` is started.

It should be noted that a container has no control over the kernel
used to run it; if this is required, a `VM` should be used instead. Furthermore,
a container should ideally only have 1 process running. A container can be a single
binary, or a complete filesystem. In general, the first should be preferred, and
if you need the latter, it might be an indication that you actually want a `VM`.

For containers, the network setup will be created for you. Your init process can
assume that it will be fully set up (according to the config you provided) by the
time it is started. Mountpoints will also be set up for you. The environment variables
passed will be available inside the container.

## VM

In container mode, zos provides a minimal kernel that is used to run a lightweight VM
and then run your app from your flist. If you need control over the kernel you can actually
still provide it inside the flist as follows:

- /boot/vmlinuz
- /boot/initrd.img [optional]

**NOTE**: the vmlinuz MUST be an EFI kernel (not compressed) if you are building your own kernel, or you can use the [extract-vmlinux](https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux) script to extract the EFI kernel. To test whether your kernel is a valid ELF kernel, run the command
`readelf -n <path/to/vmlinuz>`

Any of those files can be a symlink to another file in the flist.

If ZOS finds the `/boot/vmlinuz` file, it will use it, together with the `initrd.img` if that also exists. Otherwise zos will use the built-in minimal kernel and run in `container` mode.

### Building an ubuntu VM flist

This is a guide to help you build a working VM flist.

This guide is for ubuntu `jammy`.

Prepare the rootfs directory:

```bash
mkdir ubuntu:jammy
```

Bootstrap ubuntu:

```bash
sudo debootstrap jammy ubuntu:jammy http://archive.ubuntu.com/ubuntu
```

This will create and download the basic rootfs for ubuntu jammy in the directory `ubuntu:jammy`.
After it's done we can chroot into this directory to continue installing the necessary packages and configure
a few things.

> I am using a script called `arch-chroot`, which is available by default on arch but can also be installed on ubuntu, to continue
the following steps

```bash
sudo arch-chroot ubuntu:jammy
```

> This script (similar to the `chroot` command) switches root to the given directory, but also takes care of mounting /dev, /sys, etc. for you
and cleans them up on exit.

Next, remove the `/etc/resolv.conf` link and re-create the file with a valid nameserver to be able to continue

```bash
# make sure to set the path correctly
export PATH=/usr/local/sbin/:/usr/local/bin/:/usr/sbin/:/usr/bin/:/sbin:/bin

rm /etc/resolv.conf
echo 'nameserver 1.1.1.1' > /etc/resolv.conf
```

Install cloud-init

```bash
apt-get update
apt-get install cloud-init openssh-server curl
# to make sure we have a clean setup
cloud-init clean
```

It is also really important that we install a kernel

```bash
apt-get install linux-modules-extra-5.15.0-25-generic
```

> I chose this package because it will also install extra modules for us and a generic kernel

Next, make sure that virtiofs is part of the initramfs image

```bash
echo 'fs-virtiofs' >> /etc/initramfs-tools/modules
update-initramfs -c -k all
```

Clean up the cache

```bash
apt-get clean
```

The last thing we do before we actually upload the flist
is to make sure the kernel is in the correct format.

This step does not require that we stay in the chroot, so hit `Ctrl+D` or type `exit`.

You should be out of the arch-chroot now.

```bash
curl -O https://raw.githubusercontent.com/torvalds/linux/master/scripts/extract-vmlinux
chmod +x extract-vmlinux

sudo ./extract-vmlinux ubuntu:jammy/boot/vmlinuz | sudo tee ubuntu:jammy/boot/vmlinuz-5.15.0-25-generic.elf > /dev/null
# then replace the original kernel
sudo mv ubuntu:jammy/boot/vmlinuz-5.15.0-25-generic.elf ubuntu:jammy/boot/vmlinuz-5.15.0-25-generic
```

To verify you can do this:

```bash
ls -l ubuntu:jammy/boot
```

and it should show something like

```bash
total 101476
-rw-r--r-- 1 root root   260489 Mar 30  2022 config-5.15.0-25-generic
drwxr-xr-x 1 root root       54 Jun 28 15:35 grub
lrwxrwxrwx 1 root root       28 Jun 28 15:35 initrd.img -> initrd.img-5.15.0-25-generic
-rw-r--r-- 1 root root 41392462 Jun 28 15:39 initrd.img-5.15.0-25-generic
lrwxrwxrwx 1 root root       28 Jun 28 15:35 initrd.img.old -> initrd.img-5.15.0-25-generic
-rw------- 1 root root  6246119 Mar 30  2022 System.map-5.15.0-25-generic
lrwxrwxrwx 1 root root       25 Jun 28 15:35 vmlinuz -> vmlinuz-5.15.0-25-generic
-rw-r--r-- 1 root root 55988436 Jun 28 15:50 vmlinuz-5.15.0-25-generic
lrwxrwxrwx 1 root root       25 Jun 28 15:35 vmlinuz.old -> vmlinuz-5.15.0-25-generic
```

Now package the tar for upload

```bash
sudo rm -rf ubuntu:jammy/dev/*
sudo tar -czf ubuntu-jammy.tar.gz -C ubuntu:jammy .
```

Upload it to the hub, and use it to create a Zmachine.

## VM Image [deprecated]

In VM image mode, you run your own operating system (for now only linux is supported).
The image provided must be

- EFI bootable
- Cloud-init enabled.

You can find later in this document how to create your own bootable image.

A VM reservation must also have at least 1 volume, as the boot image
will be copied to this volume. The size of the root disk will be the size of this
volume.

The image used to boot the VM must have cloud-init enabled on boot. Cloud-init
receives its config over the NoCloud source. This takes care of setting up networking, the hostname
and root authorized_keys.

> This method of building a full VM from a raw image is not recommended and will get phased out in
the future. It's better to use the container method to run containerized apps. Another option
is to run your own kernel from an flist (explained below).

### Expected Flist structure

A `Zmachine` will be considered a `VM` if it contains an `/image.raw` file.

`/image.raw` is used as the "boot disk". This `/image.raw` is copied to the first attached
volume of the `VM`. Cloud-init will take care of resizing the filesystem on the image
to take the full disk size allocated in the deployment.

Note that if the `image.raw` size is larger than the allocated disk, the workload for the VM
will fail.

### Expected Flist structure

Any Flist will boot as a container, **UNLESS** it has an `/image.raw` file. There is
no need to specify a kernel yourself (it will be provided).

### Known issues

- We need to do proper performance testing for `virtio-fs`. There seems to be some
suboptimal performance right now.
- It's not currently possible to get container logs.
- TODO: more testing

## Creating VM image

This is a simple tutorial on how to create your own VM image.

> Note: Please consider checking the official VM images repo on the hub before building your own
image. This can save you a lot of time (and network traffic): <https://hub.grid.tf/tf-official-vms>

### Use one of ubuntu cloud-images

If the ubuntu images in the official repo are not enough, you can simply upload one of the official images as follows:

- Visit <https://cloud-images.ubuntu.com/>
- Select the version you want (let's assume bionic)
- Go to bionic, then click on current
- Download the amd64.img file, like this one <https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img>
- This is a `Qcow2` image, which is not supported by zos, so we need to convert it to a raw disk image using the following command

```bash
qemu-img convert -p -f qcow2 -O raw bionic-server-cloudimg-amd64.img image.raw
```

- Now we have the raw image (image.raw); time to compress it and upload it to the hub

```bash
tar -czf ubuntu-18.04-lts.tar.gz image.raw
```

- Now visit the hub <https://hub.grid.tf/>, log in or create your own account, then click on the "upload my file" button
- Select the newly created tar.gz file
- Now you should be able to use this flist to create Zmachine workloads

### Create an image from scratch

This is an advanced scenario. You will require some prior knowledge of how to create local VMs, how to prepare the installation medium,
and how to install your OS of choice.

Before we continue, you need to have a hypervisor that you can use locally. Libvirt/Qemu are good choices. We therefore skip what you need to do to install and configure your system correctly, and how to create the VM.

#### VM Requirements

Create a VM with enough CPU and memory to handle the installation process; note that this does not relate to what your choices for CPU and memory are going to be for the actual VM running on the grid.

We are going to install an arch linux image, so we will have to create a VM with:

- A disk of about 2GB (note this is also not related to the final VM running on the grid; on deployment the OS image will eventually expand to use the entire allocated disk attached to the VM). The smaller the disk, the better; this can be different for each OS.
- The arch installation iso (or any other installation medium) attached.

#### Boot the VM (locally)

Boot the VM to start the installation. The boot must support EFI booting because ZOS only supports images with an esp partition, so make sure that both your hypervisor and the boot/installation medium support this.

For example, in Libvirt Manager make sure you are using the right firmware (UEFI).

#### Installation

We are going to follow the installation manual for Arch linux but with slight tweaks:

- Make sure the VM is booted with UEFI: run the `efivar -l` command and see if you get any output. Otherwise the machine is probably booted in BIOS mode.
- With `parted` create 2 partitions:
  - an esp (boot) partition of 100M
  - a root partition that spans the remainder of the disk

```bash
DISK=/dev/vda
# First, create a gpt partition table
parted $DISK mklabel gpt
# Secondly, create the esp partition of 100M
parted $DISK mkpart primary 1 100M
# Mark first part as esp
parted $DISK set 1 esp on
# Use the remaining part as root that takes the remaining
# space on disk
parted $DISK mkpart primary 100M 100%

# To verify everything is correct do
parted $DISK print

# this should show 2 partitions: the first one is slightly less than 100M and has flags (boot, esp), the second one takes the remaining space
```

We need to format the partitions as follows:

```bash
# this one has to be vfat with FAT size 32, as follows
mkfs.vfat -F 32 /dev/vda1
# This one can be anything based on your preference as long as it's supported by your OS kernel. We are going with ext4 in this tutorial
mkfs.ext4 -L cloud-root /dev/vda2
```

Note the label assigned to the /dev/vda2 (root) partition. This can be anything, but it's needed to configure boot later when installing the boot loader. Alternatively, you can use the partition UUID.

Next, we need to mount the disks

```bash
mount /dev/vda2 /mnt
mkdir /mnt/boot
mount /dev/vda1 /mnt/boot
```

After the disks are mounted as above, we need to start the installation

```bash
pacstrap /mnt base linux linux-firmware vim openssh cloud-init cloud-guest-utils
```

This will install basic arch linux but will also include cloud-init, cloud-guest-utils, openssh, and vim for convenience.

Following the installation guide, generate the fstab file

```
genfstab -U /mnt >> /mnt/etc/fstab
```

And arch-chroot into /mnt (`arch-chroot /mnt`) to continue the setup. Please follow all steps in the installation guide to set the timezone and locales as needed.

- You don't have to set the hostname; this will be set up later on zos, via cloud-init, when the VM is deployed.
- Let's drop the root password altogether, since login to the VM over ssh will require key authentication only. You can do this by running

```bash
passwd -d root
```

We make sure the required services are enabled

```bash
systemctl enable sshd
systemctl enable systemd-networkd
systemctl enable systemd-resolved
systemctl enable cloud-init
systemctl enable cloud-final

# make sure we are using resolved
rm /etc/resolv.conf
ln -s /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
```

Finally, install the boot loader as follows
> Only grub2 has been tested and known to work.

```bash
pacman -S grub
```

Then we need to install grub

```
grub-install --target=x86_64-efi --efi-directory=/boot --removable
```

Change the default values as follows

```
vim /etc/default/grub
```

And make sure to change `GRUB_CMDLINE_LINUX_DEFAULT` as follows

```
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 console=tty console=ttyS0"
```

> Note: we removed `quiet` and added the console flags.

Also set `GRUB_TIMEOUT` to 0 for a faster boot

```
GRUB_TIMEOUT=0
```

Then finally generate the config

```
grub-mkconfig -o /boot/grub/grub.cfg
```

The last thing we need to do is clean up

- the pacman cache by running `rm -rf /var/cache/pacman/pkg`
- the cloud-init state by running `cloud-init clean`

Press `Ctrl+D` to exit the chroot, then power off by running the `poweroff` command.

> NOTE: if you boot the machine again, you always need to run `cloud-init clean` as long as it's not yet deployed on ZOS; this is to make sure the image has a clean state.

#### Converting the disk

Based on your hypervisor of choice, you might need to convert the disk to a `raw` image the same way we did with the ubuntu image.

```bash
# this is an optional step in case you used a qcow2 disk for the installation. If the disk is already `raw` you can skip this
qemu-img convert -p -f qcow2 -O raw /path/to/vm/disk.img image.raw
```

Compress and tar the image.raw as before, and upload it to the hub.

```
tar -czf arch-linux.tar.gz image.raw
```

# `zmount` type

A `zmount` is a local disk that can be attached directly to a container or a virtual machine. `zmount` only requires `size` as input, as defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zmount.go). This workload type is only utilized via the `zmachine` workload.