finalized info_fgrid sync
This commit is contained in:
parent
a75e59d42c
commit
0c33119601
@ -17,9 +17,9 @@
|
||||
- [Add a Domain to a VM](dashboard/solutions/add_domain.md)
|
||||
- [Orchestrators](dashboard/deploy/orchestrators.md)
|
||||
- [Kubernetes](dashboard/solutions/k8s.md)
|
||||
- [Caprover](dashboard/solutions/caprover.md)
|
||||
- [Caprover Admin](dashboard/solutions/caprover_admin.md)
|
||||
- [Caprover Worker](dashboard/solutions/caprover_worker.md)
|
||||
- [CapRover](dashboard/solutions/caprover.md)
|
||||
- [CapRover Admin](dashboard/solutions/caprover_admin.md)
|
||||
- [CapRover Worker](dashboard/solutions/caprover_worker.md)
|
||||
- [Applications](dashboard/deploy/applications.md)
|
||||
- [Algorand](dashboard/solutions/algorand.md)
|
||||
- [CasperLabs](dashboard/solutions/casper.md)
|
||||
@ -66,6 +66,7 @@
|
||||
- [Installation](developers/javascript/grid3_javascript_installation.md)
|
||||
- [Loading Client](developers/javascript/grid3_javascript_loadclient.md)
|
||||
- [Deploy a VM](developers/javascript/grid3_javascript_vm.md)
|
||||
- [Deploy a VM with Mycelium Network](developers/javascript/grid3_javascript_vm_with_mycelium_network.md)
|
||||
- [Capacity Planning](developers/javascript/grid3_javascript_capacity_planning.md)
|
||||
- [Deploy Multiple VMs](developers/javascript/grid3_javascript_vms.md)
|
||||
- [Deploy CapRover](developers/javascript/grid3_javascript_caprover.md)
|
||||
@ -197,23 +198,38 @@
|
||||
- [Getting Started](system_administrators/getstarted/tfgrid3_getstarted.md)
|
||||
- [SSH Remote Connection](system_administrators/getstarted/ssh_guide/ssh_guide.md)
|
||||
- [SSH with OpenSSH](system_administrators/getstarted/ssh_guide/ssh_openssh.md)
|
||||
- [SSH with PuTTY](system_administrators/getstarted/ssh_guide/ssh_putty.md)
|
||||
- [SSH with WSL](system_administrators/getstarted/ssh_guide/ssh_wsl.md)
|
||||
- [WireGuard Access](system_administrators/getstarted/ssh_guide/ssh_wireguard.md)
|
||||
- [Remote Desktop and GUI](system_administrators/getstarted/remote-desktop_gui/remote-desktop_gui.md)
|
||||
- [Cockpit: a Web-based Interface for Servers](system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md)
|
||||
- [XRDP: an Open-Source Remote Desktop Protocol](system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md)
|
||||
- [Apache Guacamole: a Clientless Remote Desktop Gateway](system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md)
|
||||
- [Planetary Network](system_administrators/getstarted/planetarynetwork.md)
|
||||
- [Advanced Methods](system_administrators/getstarted/ssh_guide/advanced_methods/advanced_methods.md)
|
||||
- [SSH with PuTTY](system_administrators/getstarted/ssh_guide/ssh_putty.md)
|
||||
- [SSH with WSL](system_administrators/getstarted/ssh_guide/ssh_wsl.md)
|
||||
- [WireGuard](system_administrators/getstarted/ssh_guide/advanced_methods/ssh_wireguard.md)
|
||||
- [Planetary Network](system_administrators/getstarted/ssh_guide/advanced_methods/planetarynetwork.md)
|
||||
- [TFGrid Deployments](system_administrators/getstarted/tfgrid_deployments.md)
|
||||
- [TFGrid Services](system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md)
|
||||
- [Mycelium](system_administrators/mycelium/mycelium_toc.md)
|
||||
- [Overview](system_administrators/mycelium/overview.md)
|
||||
- [Installation](system_administrators/mycelium/installation.md)
|
||||
- [Additional Information](system_administrators/mycelium/information.md)
|
||||
- [Message](system_administrators/mycelium/message.md)
|
||||
- [Packet](system_administrators/mycelium/packet.md)
|
||||
- [Data Packet](system_administrators/mycelium/data_packet.md)
|
||||
- [API YAML](system_administrators/mycelium/api_yaml.md)
|
||||
- [Pulumi](system_administrators/pulumi/pulumi_readme.md)
|
||||
- [Introduction to Pulumi](system_administrators/pulumi/pulumi_intro.md)
|
||||
- [Installing Pulumi](system_administrators/pulumi/pulumi_install.md)
|
||||
- [Deployment Examples](system_administrators/pulumi/pulumi_examples.md)
|
||||
- [Deployment Details](system_administrators/pulumi/pulumi_deployment_details.md)
|
||||
- [Complete Guides](system_administrators/pulumi/pulumi_complete_guides/pulumi_complete_guides_toc.md)
|
||||
- [Pulumi and YAML](system_administrators/pulumi/pulumi_complete_guides/pulumi_yaml.md)
|
||||
- [Pulumi and Python](system_administrators/pulumi/pulumi_complete_guides/pulumi_python.md)
|
||||
- [Pulumi and Go](system_administrators/pulumi/pulumi_complete_guides/pulumi_go.md)
|
||||
- [GPU](system_administrators/gpu/gpu_toc.md)
|
||||
- [GPU Support](system_administrators/gpu/gpu.md)
|
||||
- [Terraform](system_administrators/terraform/terraform_toc.md)
|
||||
- [Overview](system_administrators/terraform/terraform_readme.md)
|
||||
- [Introduction to Terraform](system_administrators/terraform/terraform_readme.md)
|
||||
- [Installing Terraform](system_administrators/terraform/terraform_install.md)
|
||||
- [Terraform Basics](system_administrators/terraform/terraform_basics.md)
|
||||
- [Full VM Deployment](system_administrators/terraform/terraform_full_vm.md)
|
||||
- [GPU Support](system_administrators/terraform/terraform_gpu_support.md)
|
||||
- [Terrafprm Basics](system_administrators/terraform/terraform_basics.md)
|
||||
- [Resources](system_administrators/terraform/resources/terraform_resources_readme.md)
|
||||
- [Using Scheduler](system_administrators/terraform/resources/terraform_scheduler.md)
|
||||
- [Virtual Machine](system_administrators/terraform/resources/terraform_vm.md)
|
||||
@ -227,6 +243,7 @@
|
||||
- [CapRover](system_administrators/terraform/resources/terraform_caprover.md)
|
||||
- [Advanced](system_administrators/terraform/advanced/terraform_advanced_readme.md)
|
||||
- [Terraform Provider](system_administrators/terraform/advanced/terraform_provider.md)
|
||||
- [GPU Support](system_administrators/terraform/terraform_gpu_support.md)
|
||||
- [Terraform Provisioners](system_administrators/terraform/advanced/terraform_provisioners.md)
|
||||
- [Mounts](system_administrators/terraform/advanced/terraform_mounts.md)
|
||||
- [Capacity Planning](system_administrators/terraform/advanced/terraform_capacity_planning.md)
|
||||
@ -240,19 +257,6 @@
|
||||
- [Nextcloud Single Deployment](system_administrators/terraform/advanced/terraform_nextcloud_single.md)
|
||||
- [Nextcloud Redundant Deployment](system_administrators/terraform/advanced/terraform_nextcloud_redundant.md)
|
||||
- [Nextcloud 2-Node VPN Deployment](system_administrators/terraform/advanced/terraform_nextcloud_vpn.md)
|
||||
- [Pulumi](system_administrators/pulumi/pulumi_readme.md)
|
||||
- [Introduction to Pulumi](system_administrators/pulumi/pulumi_intro.md)
|
||||
- [Installing Pulumi](system_administrators/pulumi/pulumi_install.md)
|
||||
- [Deployment Examples](system_administrators/pulumi/pulumi_examples.md)
|
||||
- [Deployment Details](system_administrators/pulumi/pulumi_deployment_details.md)
|
||||
- [Mycelium](system_administrators/mycelium/mycelium_toc.md)
|
||||
- [Overview](system_administrators/mycelium/overview.md)
|
||||
- [Installation](system_administrators/mycelium/installation.md)
|
||||
- [Additional Information](system_administrators/mycelium/information.md)
|
||||
- [Message](system_administrators/mycelium/message.md)
|
||||
- [Packet](system_administrators/mycelium/packet.md)
|
||||
- [Data Packet](system_administrators/mycelium/data_packet.md)
|
||||
- [API YAML](system_administrators/mycelium/api_yaml.md)
|
||||
- [Computer and IT Basics](system_administrators/computer_it_basics/computer_it_basics.md)
|
||||
- [CLI and Scripts Basics](system_administrators/computer_it_basics/cli_scripts_basics.md)
|
||||
- [Docker Basics](system_administrators/computer_it_basics/docker_basics.md)
|
||||
@ -287,6 +291,10 @@
|
||||
- [HTTPS with Caddy](system_administrators/advanced/https_caddy.md)
|
||||
- [Node Status Bot](system_administrators/advanced/node_status_bot.md)
|
||||
- [Minetest](system_administrators/advanced/minetest.md)
|
||||
- [Remote Desktop and GUI](system_administrators/getstarted/remote-desktop_gui/remote-desktop_gui.md)
|
||||
- [Cockpit: a Web-based Interface for Servers](system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md)
|
||||
- [XRDP: an Open-Source Remote Desktop Protocol](system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md)
|
||||
- [Apache Guacamole: a Clientless Remote Desktop Gateway](system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md)
|
||||
- [ThreeFold Token](threefold_token/threefold_token.md)
|
||||
- [TFT Bridges](threefold_token/tft_bridges/tft_bridges.md)
|
||||
- [TFChain-Stellar Bridge](threefold_token/tft_bridges/tfchain_stellar_bridge.md)
|
||||
|
@ -8,9 +8,5 @@ To deploy on the ThreeFold Grid, refer to the [System Administrators](system_adm
|
||||
|
||||
- [Cloud Units](cloudunits.md)
|
||||
- [Pricing](pricing_toc.md)
|
||||
- [Pricing Overview](pricing.md)
|
||||
- [Staking Discounts](staking_discount_levels.md)
|
||||
- [Cloud Pricing Compare](cloud_pricing_compare.md)
|
||||
- [Grid Billing](grid_billing.md)
|
||||
- [Resource Units](resource_units_calc_cloudunits.md)
|
||||
- [Resource Units Advanced](resourceunits_advanced.md)
|
@ -23,7 +23,7 @@ Resource units are used to measure and convert capacity on the hardware level in
|
||||
| ------------ | ------------------------------------ | ---- |
|
||||
| Core Unit | 1 Logical Core (Hyperthreaded Core) | CRU |
|
||||
| Mem Unit | 1 GB mem | MRU |
|
||||
| HD Unit | 1 GB | HRU |
|
||||
| HDD Unit | 1 GB | HRU |
|
||||
| SSD Unit | 1 GB | SRU |
|
||||
| Network Unit | 1 GB of bandwidth transmitted in/out | NRU |
|
||||
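For example, a node with 8 logical (hyperthreaded) cores, 32 GB of memory, a 1 TB SSD and 2 TB of HDD storage would register roughly as CRU = 8, MRU = 32, SRU = 1000 and HRU = 2000 (an illustration based on the table above).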
|
||||
|
@ -15,10 +15,10 @@ The backend for the weblets is introduced with the [Javascript Client](developer
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [Wallet Connector](wallet_connector.md)
|
||||
- [TFGrid](tfgrid.md)
|
||||
- [Deploy](deploy.md)
|
||||
- [Farms](farms.md)
|
||||
- [TFChain](tfchain.md)
|
||||
- [TFGrid](tfgrid/tfgrid.md)
|
||||
- [Deploy](deploy/deploy.md)
|
||||
- [Farms](farms/farms.md)
|
||||
- [TFChain](tfchain/tfchain.md)
|
||||
|
||||
## Advantages
|
||||
|
||||
@ -42,11 +42,12 @@ You can access the ThreeFold Dashboard on different TF Chain networks.
|
||||
- Deploys one thing at a time.
|
||||
- Might take some time to deploy a solution like Peertube, so you should wait a little bit until it's fully running.
|
||||
|
||||
## Dashboard Backups
|
||||
## List of Mainnet Backend Stacks
|
||||
|
||||
If the main Dashboard URLs are not working for any reason, the following URLs can be used. Those Dashboard URLs are fully independent of the main Dashboard URLs shown above.
|
||||
We provide independent mainnet backend stacks. Here is the current list:
|
||||
|
||||
- [https://dashboard.02.dev.grid.tf](https://dashboard.02.dev.grid.tf) for Dev net
|
||||
- [https://dashboard.02.qa.grid.tf](https://dashboard.02.qa.grid.tf) for QA net
|
||||
- [https://dashboard.02.test.grid.tf](https://dashboard.02.test.grid.tf) for Test net
|
||||
- [https://dashboard.02.grid.tf](https://dashboard.02.grid.tf) for Main net
|
||||
- [https://dashboard.grid.tf](https://dashboard.grid.tf)
|
||||
- [https://dashboard.be.grid.tf](https://dashboard.be.grid.tf)
|
||||
- [https://dashboard.fin.grid.tf](https://dashboard.fin.grid.tf)
|
||||
- [https://dashboard.sg.grid.tf](https://dashboard.sg.grid.tf)
|
||||
- [https://dashboard.us.grid.tf](https://dashboard.us.grid.tf)
|
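If the main Dashboard URLs are unreachable, a quick way to pick a working backup stack is to probe one of the URLs above from the command line. A simple check, assuming `curl` is installed:

```
# A 2xx or 3xx HTTP status code means the stack is reachable
curl -s -o /dev/null -w "%{http_code}\n" https://dashboard.02.grid.tf
```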
@ -2,4 +2,6 @@
|
||||
|
||||
Find or Publish your Flist from [Zero-OS Hub](https://hub.grid.tf/)
|
||||
|
||||
![](../img/0_hub.png)
|
||||
![](../img/0_hub.png)
|
||||
|
||||
Learn more about the Zero-OS Hub [here](developers@@zos_hub).
|
@ -45,11 +45,14 @@ If your deployment has some minimum requirements, you can easily filter relevant
|
||||
|
||||
## Node Details
|
||||
|
||||
You can see all of the node details when you click on its row.
|
||||
You can see all of the node details when you click on its row:
|
||||
|
||||
![](../img/dashboard_node_finder_node_view.png)
|
||||
|
||||
Note that the network speed test displayed in the Node Finder is updated every 6 hours.
|
||||
> Note: The network speed test displayed in the Node Finder is updated every 6 hours.
|
||||
To access the public Grafana page displaying additional information, click on `Check Node Health`:
|
||||
|
||||
![](../img/node_finder_grafana.png)
|
||||
|
||||
## Gateway Nodes
|
||||
|
||||
|
@ -2,24 +2,24 @@
|
||||
|
||||
This comprehensive guide aims to provide users with detailed instructions and insights into efficiently managing their _Farms_. Farms encompass servers and storage devices contributing computational and storage capabilities to the grid, empowering users to oversee, maintain, and optimize their resources effectively.
|
||||
|
||||
- [Getting started](#getting-started)
|
||||
- [Create a new Farm](#create-a-new-farm)
|
||||
- [Getting Started](#getting-started)
|
||||
- [Create a New Farm](#create-a-new-farm)
|
||||
- [Manage Your Farms](#manage-your-farms)
|
||||
- [Add a public IP to your Farm](#add-a-public-ip-to-your-farm)
|
||||
- [Add a Stellar address for payout](#add-a-stellar-address-for-payout)
|
||||
- [Generate your node bootstrap image](#generate-your-node-bootstrap-image)
|
||||
- [Additional information](#additional-information)
|
||||
- [Add a Public IP to Your Farm](#add-a-public-ip-to-your-farm)
|
||||
- [Add a Stellar Address for Payout](#add-a-stellar-address-for-payout)
|
||||
- [Generate Your Node Bootstrap Image](#generate-your-node-bootstrap-image)
|
||||
- [Additional Information](#additional-information)
|
||||
- [Manage Your Nodes](#manage-your-nodes)
|
||||
- [Node information](#node-information)
|
||||
- [Node Information](#node-information)
|
||||
- [Extra Fees](#extra-fees)
|
||||
- [Public Configuration](#public-configuration)
|
||||
- [The Difference Between IPs Assigned to Nodes Versus a Farm](#the-difference-between-ips-assigned-to-nodes-versus-a-farm)
|
||||
|
||||
## Getting started
|
||||
## Getting Started
|
||||
|
||||
After logging in to the TF Dashboard, click on **Dashboard** in the sidebar, then on _Your Farms_.
|
||||
|
||||
## Create a new Farm
|
||||
## Create a New Farm
|
||||
|
||||
If you want to start farming, you need a farmID: the ID of the farm that owns the hardware node(s) you connect to the TFGrid.
|
||||
|
||||
@ -47,7 +47,7 @@ You can browse your Farms in _Farms_ table; Farms table contains all your own fa
|
||||
|
||||
![](../img/dashboard_farms_farms_table.png)
|
||||
|
||||
### Add a public IP to your Farm
|
||||
### Add a Public IP to Your Farm
|
||||
|
||||
If you have public IPv4 addresses available for use on the TFGrid, you can add them to your farm.
|
||||
Click `ADD IP`, specify the addresses and the gateway, then click `CREATE`.
|
||||
@ -69,7 +69,7 @@ Deleting IPv4 addresses is also possible here. The `Deployed Contract ID` gives
|
||||
|
||||
![ ](../img/dashboard_farms_ip_details.png)
|
||||
|
||||
### Add a Stellar address for payout
|
||||
### Add a Stellar Address for Payout
|
||||
|
||||
In a first phase, farming of tokens still results in payout on the Stellar network. So to get the farming reward, a Stellar address needs to be provided.
|
||||
|
||||
@ -79,7 +79,7 @@ In a first phase, farming of tokens still results in payout on the Stellar netwo
|
||||
|
||||
You can read about different ways to store TFT [here](threefold_token@@storing_tft). Make sure to use a Stellar wallet for your farming rewards.
|
||||
|
||||
### Generate your node bootstrap image
|
||||
### Generate Your Node Bootstrap Image
|
||||
|
||||
Once you know your farmID, you can set up your node on TFGrid3. Click on `Bootstrap Node Image`.
|
||||
|
||||
@ -87,7 +87,7 @@ Once you know your farmID, you can set up your node on TFGrid3. Click on `Bootst
|
||||
|
||||
Read more about the Zero-OS bootstrap image [here](farmers@@2_bootstrap_image).
|
||||
|
||||
### Additional information
|
||||
### Additional Information
|
||||
|
||||
After booting a node, the info will become available in `Your Nodes` table, including the status info along with the minting and fixup receipts.
|
||||
|
||||
@ -103,7 +103,7 @@ You can also download a single node's receipts using the `Download Receipts` but
|
||||
|
||||
As with the _Farms_ table, the _Nodes_ table contains all your own nodes and is your entry point to manage them, as shown in the following sections.
|
||||
|
||||
### Node information
|
||||
### Node Information
|
||||
|
||||
Expand your node information by clicking on the expand button in the target node row.
|
||||
|
||||
|
BIN
collections/dashboard/img/node_finder_grafana.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 162 KiB |
@ -5,7 +5,7 @@
|
||||
- [Introduction](#introduction)
|
||||
- [Requirements](#requirements)
|
||||
- [Configs Tab](#configs-tab)
|
||||
- [Admin and Workers Tabs](#admin-and-workers-tabs)
|
||||
- [Leader and Workers Tabs](#leader-and-workers-tabs)
|
||||
- [The Domain Name](#the-domain-name)
|
||||
- [Domain Name Example](#domain-name-example)
|
||||
- [How to Know the IP Address](#how-to-know-the-ip-address)
|
||||
@ -31,10 +31,10 @@ Caprover is a very cool management app for containers based on Docker Swarm.
|
||||
|
||||
It has the following benefits:
|
||||
|
||||
- easy to deploy apps (in seconds)
|
||||
- easy to create new apps
|
||||
- super good monitoring
|
||||
- can be extended over the TFGrid
|
||||
- Easy to deploy apps (in seconds)
|
||||
- Easy to create new apps
|
||||
- Super good monitoring
|
||||
- Can be extended over the TFGrid
|
||||
|
||||
## Requirements
|
||||
|
||||
@ -46,23 +46,51 @@ It has following benefits :
|
||||
|
||||
![ ](./img/solutions_caprover.png)
|
||||
|
||||
- Enter domain for you Caprover instance, Be very careful about the domain name: it needs to be a wildcard domain name you can configure in your chosen domain name system.
|
||||
- Enter the domain for your CapRover instance.
|
||||
- Be very careful about the domain name: it needs to be a wildcard domain name you can configure in your chosen domain name system.
|
||||
- Enter a password for your CapRover instance.
|
||||
|
||||
If you have more than one SSH key set, you can click on `Manage SSH keys` to select which one to use for this deployment.
|
||||
|
||||
## Admin and Workers Tabs
|
||||
## Leader and Workers Tabs
|
||||
|
||||
Each deployment will have one leader and there can be many workers. By default, CapRover is deployed on nodes with IPv4.
|
||||
|
||||
![ ](./img/solutions_caprover_leader.png)
|
||||
|
||||
![ ](./img/solutions_caprover_workers.png)
|
||||
|
||||
Use the Leader and Workers tabs to add nodes to your deployment.
|
||||
|
||||
- Enter a name for the deployment or keep the default name
|
||||
- Select a capacity package:
|
||||
- **Small**: {cpu: 1, memory: 2, diskSize: 25 }
|
||||
- **Medium**: {cpu: 2, memory: 4, diskSize: 50 }
|
||||
- **Large**: {cpu: 4, memory: 16, diskSize: 100 }
|
||||
- Or choose a **Custom** plan
|
||||
- Choose the network
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the node
|
||||
- Automated
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
- `Country`
|
||||
- `Farm Name`
|
||||
- Click on `Load Nodes`
|
||||
- Click on the node you want to deploy on
|
||||
- Manual selection
|
||||
- Select a specific node ID
|
||||
- Click `Deploy`
|
||||
|
||||
Note: Worker nodes only accept SSH keys of RSA format.
|
||||
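If your current key uses another type (for example ed25519), you can generate a dedicated RSA key for the worker nodes. A minimal sketch, assuming OpenSSH is installed locally (the file name is only an example):

```
# Generate a 4096-bit RSA key pair in a separate file
ssh-keygen -t rsa -b 4096 -f ~/.ssh/caprover_rsa

# Print the public key so it can be pasted into the deployment form
cat ~/.ssh/caprover_rsa.pub
```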
|
||||
Deployment will take a couple of minutes.
|
||||
|
||||
## The Domain Name
|
||||
|
||||
As per the [CapRover documentation](https://caprover.com/docs/get-started.html), you need to point a wildcard DNS entry to the VM IP address of your CapRover instance. You have to do this after having deployed the CapRover instance, otherwise you won't have access to the VM IP address.
|
||||
As per the [CapRover documentation](https://caprover.com/docs/get-started.html), you need to point a wildcard DNS entry to the VM IP address of your CapRover Leader instance. You have to do this after having deployed the CapRover instance, otherwise you won't have access to the VM IP address.
|
||||
|
||||
Let’s say your domain is **example.com** and your subdomain is **subdomain**. You can set **\*.subdomain.example.com** as an A record in your DNS settings to point to the VM IP address of the server hosting the CapRover instance, where **\*** acts as the wildcard. To do this, go to the DNS settings of your domain name registrar, and set a wild card A record entry.
|
||||
|
||||
@ -97,54 +125,14 @@ Go back to your CapRover weblet and go to the deployment list. Click on `Show De
|
||||
![ ](./img/solution_caprover_list.png)
|
||||
|
||||
- The public IPv4 address is visible in here
|
||||
|
||||
![](./img/solutions_caprover_ipaddress.png)
|
||||
|
||||
- Now you can configure the domain name (see above, don't forget to point the wildcard domain to the public IP address)
|
||||
|
||||
Click on details if you want to see more details
|
||||
Go to the `JSON` tab to see the JSON output:
|
||||
|
||||
```json
|
||||
|
||||
{
|
||||
"version": 0,
|
||||
"name": "caprover_leader_cr_156e44f0",
|
||||
"created": 1637843368,
|
||||
"status": "ok",
|
||||
"message": "",
|
||||
"flist": "https://hub.grid.tf/samehabouelsaad.3bot/tf-caprover-main-a4f186da8d.flist",
|
||||
"publicIP": {
|
||||
"ip": "185.206.122.136/24",
|
||||
"gateway": "185.206.122.1"
|
||||
},
|
||||
"planetary": false,
|
||||
"yggIP": "",
|
||||
"interfaces": [
|
||||
{
|
||||
"network": "caprover_network_cr_156e44f0",
|
||||
"ip": "10.200.4.2"
|
||||
}
|
||||
],
|
||||
"capacity": {
|
||||
"cpu": 4,
|
||||
"memory": 8192
|
||||
},
|
||||
"mounts": [
|
||||
{
|
||||
"name": "data0",
|
||||
"mountPoint": "/var/lib/docker",
|
||||
"size": 107374182400,
|
||||
"state": "ok",
|
||||
"message": ""
|
||||
}
|
||||
],
|
||||
"env": {
|
||||
"SWM_NODE_MODE": "leader",
|
||||
"CAPROVER_ROOT_DOMAIN": "apps.openly.life",
|
||||
"PUBLIC_KEY": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/9RNGKRjHvViunSOXhBF7EumrWvmqAAVJSrfGdLaVasgaYK6tkTRDzpZNplh3Tk1aowneXnZffygzIIZ82FWQYBo04IBWwFDOsCawjVbuAfcd9ZslYEYB3QnxV6ogQ4rvXnJ7IHgm3E3SZvt2l45WIyFn6ZKuFifK1aXhZkxHIPf31q68R2idJ764EsfqXfaf3q8H3u4G0NjfWmdPm9nwf/RJDZO+KYFLQ9wXeqRn6u/mRx+u7UD+Uo0xgjRQk1m8V+KuLAmqAosFdlAq0pBO8lEBpSebYdvRWxpM0QSdNrYQcMLVRX7IehizyTt+5sYYbp6f11WWcxLx0QDsUZ/J"
|
||||
},
|
||||
"entrypoint": "/sbin/zinit init",
|
||||
"metadata": "",
|
||||
"description": "caprover leader machine/node"
|
||||
}
|
||||
```
|
||||
![](./img/solutions_caprover_json.png)
|
||||
|
||||
## How to Access the Admin Interface
|
||||
|
||||
|
@ -30,8 +30,8 @@ __Process__ :
|
||||
- Or choose a **Custom** plan
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
@ -46,8 +46,8 @@ If you have more than one SSH keys set, you can click on `Manage SSH keys` to se
|
||||
|
||||
After that is done, you can see a list of all of your deployed instances.
|
||||
|
||||
![ ](./img/casper4.png)
|
||||
![](./img/casper4.png)
|
||||
|
||||
Click on ***Visit*** to go to the homepage of your CasperLabs instance! The node takes a long time for the RPC service to be ready, so be patient!
|
||||
|
||||
![ ](./img/casper5.png)
|
||||
![](./img/casper5.png)
|
@ -27,8 +27,10 @@
|
||||
- **Medium**: {cpu: 2, memory: 4, diskSize: 50 }
|
||||
- **Large**: {cpu: 4, memory: 16, diskSize: 100 }
|
||||
- Or choose a **Custom** plan
|
||||
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -40,7 +40,7 @@ Deploy a new full virtual machine on the Threefold Grid
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Public IPv6` flag gives the virtual machine a Public IPv6
|
||||
- `Planetary Network` to connect the Virtual Machine to Planetary network
|
||||
- `Myceluim` to enable mycelium on the virtual machine
|
||||
- `Mycelium` to enable Mycelium on the virtual machine
|
||||
- `Wireguard Access` to add a wireguard access to the Virtual Machine
|
||||
- `GPU` flag to add GPU to the Virtual machine
|
||||
- To deploy a Full VM with GPU, you first need to [rent a dedicated node](node_finder.md#dedicated-nodes)
|
||||
|
@ -37,8 +37,8 @@ __Process__ :
|
||||
- Or choose a **Custom** plan
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -39,7 +39,7 @@ __Process__ :
|
||||
- `Public IPv6` flag gives the virtual machine a Public IPv6
|
||||
- `Planetary Network` to connect the Virtual Machine to Planetary network
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -41,7 +41,7 @@ On the TF grid, Kubernetes clusters can be deployed out of the box. We have impl
|
||||
|
||||
## Kubeconfig
|
||||
Once the cluster is ready, you can SSH into the cluster using `ssh root@IP`
|
||||
> IP can be the public IP or the planetary network IP
|
||||
> IP can be the public IP, Mycelium or the Planetary Network IP
|
||||
|
||||
Once connected via SSH, you can execute commands on the cluster like `kubectl get nodes`. The kubeconfig can be found at `/root/.kube/config`.
|
||||
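If you prefer to manage the cluster from your local machine instead of working over SSH, you can copy the kubeconfig and point `kubectl` at it. This is a minimal sketch, assuming `kubectl` is installed locally and `<IP>` is replaced with an address of the cluster leader:

```
# Copy the kubeconfig from the cluster leader to your local machine
# (wrap IPv6 addresses such as Mycelium IPs in square brackets for scp)
scp root@<IP>:/root/.kube/config ./kubeconfig

# Point kubectl at it for a single command...
kubectl --kubeconfig ./kubeconfig get nodes

# ...or export it for the current shell session
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes
```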
|
||||
|
@ -29,7 +29,10 @@
|
||||
- **Medium**: {cpu: 2, memory: 4, diskSize: 50 }
|
||||
- **Large**: {cpu: 4, memory: 16, diskSize: 100 }
|
||||
- Or choose a **Custom** plan
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -68,7 +68,7 @@ If you're not sure and just want the easiest, most affordable option, skip the p
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -29,7 +29,7 @@ This is a simple instance of upstream [Node Pilot](https://nodepilot.tech).
|
||||
- 256 MB of memory
|
||||
- 15 GB of storage
|
||||
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
|
||||
- Choose the location of the node
|
||||
@ -40,7 +40,7 @@ This is a simple instance of upstream [Node Pilot](https://nodepilot.tech).
|
||||
|
||||
> Or you can select a specific node with manual selection.
|
||||
|
||||
- When using the [flist](https://hub.grid.tf/tf-official-vms/node-pilot-zdbfs.flist) you get a node pilot instance ready out-of-box. You need to get a public ipv4 to get it to works.
|
||||
- When using the [flist](https://hub.grid.tf/tf-official-vms/node-pilot-zdbfs.flist) you get a Node Pilot instance ready out of the box. You need a public IPv4 to get it to work.
|
||||
|
||||
After that is done, you can see a list of all of your deployed instances.
|
||||
|
||||
|
@ -30,13 +30,10 @@
|
||||
- **Medium**: { cpu: 2, memory: 4, diskSize: 100 }
|
||||
- **Large**: { cpu: 4, memory: 16, diskSize: 250 }
|
||||
- Or choose a **Custom** plan
|
||||
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Public IPv6` flag gives the virtual machine a Public IPv6
|
||||
- `Planetary Network` to connect the Virtual Machine to Planetary network
|
||||
- `Wiregaurd Access` to add a wiregaurd acces to the Virtual Machine
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the network
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
- `Country`
|
||||
|
@ -33,8 +33,9 @@
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Planetary Network` to connect the Virtual Machine to Planetary network
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -34,7 +34,9 @@ Static Website is an application where a user provides a GitHub repository URL f
|
||||
- **Medium**: {cpu: 2, memory: 4, diskSize: 100 }
|
||||
- **Large**: {cpu: 4, memory: 16, diskSize: 250 }
|
||||
- Or choose a **Custom** plan
|
||||
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
|
@ -32,8 +32,9 @@
|
||||
- **Medium**: {cpu: 2, memory: 4, diskSize: 100 }
|
||||
- **Large**: {cpu: 4, memory: 16, diskSize: 250 }
|
||||
- Or choose a **Custom** plan
|
||||
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- Choose the network
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -31,8 +31,10 @@
|
||||
- **Medium**: {cpu: 4, memory: 8, diskSize: 150 }
|
||||
- **Large**: {cpu: 4, memory: 16, diskSize: 250 }
|
||||
- Or choose a **Custom** plan
|
||||
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -41,10 +41,10 @@
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Public IPv6` flag gives the virtual machine a Public IPv6
|
||||
- `Planetary Network` to connect the Virtual Machine to Planetary network
|
||||
- `Mycelium` to enable Mycelium on the virtual machine
|
||||
- `Wireguard Access` to add a wireguard access to the Virtual Machine
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- `Planetary Network` flag gives the virtual machine a Yggdrasil address
|
||||
- `Mycelium` flag gives the virtual machine a Mycelium address
|
||||
- `Wireguard Access` to add a WireGuard access to the Virtual Machine
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -31,8 +31,12 @@
|
||||
- **Medium**: { cpu: 2, memory: 4 , diskSize: 50 }
|
||||
- **Large**: { cpu: 4, memory: 16 , diskSize: 100 }
|
||||
- Or choose a **Custom** plan
|
||||
|
||||
- `Dedicated` flag to retrieve only dedeicated nodes
|
||||
- Choose the network
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Planetary Network` to connect the Virtual Machine to Planetary network
|
||||
- `Mycelium` to enable Mycelium on the virtual machine
|
||||
- `Wireguard Access` to add a WireGuard access to the Virtual Machine
|
||||
- `Dedicated` flag to retrieve only dedicated nodes
|
||||
- `Certified` flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
- `Region`
|
||||
|
@ -88,8 +88,8 @@ In this section, we cover the steps to deploy a WordPress instance on the Playgr
|
||||
- Or choose a **Custom** plan
|
||||
|
||||
- Choose the network
|
||||
- **Public IPv4** flag gives the virtual machine a Public IPv4
|
||||
|
||||
- `Public IPv4` flag gives the virtual machine a Public IPv4
|
||||
- `Mycelium` to enable Mycelium on the virtual machine
|
||||
- **Dedicated** flag to retrieve only dedicated nodes
|
||||
- **Certified** flag to retrieve only certified nodes
|
||||
- Choose the location of the node
|
||||
|
@ -11,6 +11,7 @@ Please make sure to check the [basics](system_administrators@@tfgrid3_getstarted
|
||||
- [Installation](grid3_javascript_installation.md)
|
||||
- [Loading Client](grid3_javascript_loadclient.md)
|
||||
- [Deploy a VM](grid3_javascript_vm.md)
|
||||
- [Deploy a VM with Mycelium Network](grid3_javascript_vm_with_mycelium_network.md)
|
||||
- [Capacity Planning](grid3_javascript_capacity_planning.md)
|
||||
- [Deploy Multiple VMs](grid3_javascript_vms.md)
|
||||
- [Deploy CapRover](grid3_javascript_caprover.md)
|
||||
|
@ -0,0 +1,202 @@
|
||||
<h1> Deploying a VM with Mycelium Network</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Example](#example)
|
||||
- [Detailed Explanation](#detailed-explanation)
|
||||
- [What is the Mycelium Network](#what-is-the-mycelium-network)
|
||||
- [How to Deploy a Machine with the Mycelium Network](#how-to-deploy-a-machine-with-the-mycelium-network)
|
||||
- [Summary](#summary)
|
||||
- [Mycelium Flag Behavior](#mycelium-flag-behavior)
|
||||
- [Mycelium Machine Seed](#mycelium-machine-seed)
|
||||
- [Mycelium Network Seed](#mycelium-network-seed)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
We present how to deploy a VM with the `Mycelium network` using the Javascript client, with concrete examples.
|
||||
|
||||
Consult the [official Mycelium repo](https://github.com/threefoldtech/mycelium) to learn more.
|
||||
|
||||
## Example
|
||||
|
||||
Here is a simple example on how to use Mycelium with the Javascript client:
|
||||
|
||||
```ts
|
||||
import { generateRandomHexSeed, GridClient, MachinesDeleteModel, MachinesModel } from "../src";
|
||||
import { config, getClient } from "./client_loader";
|
||||
import { log } from "./utils";
|
||||
|
||||
async function deploy(client: GridClient, vms: MachinesModel) {
|
||||
const res = await client.machines.deploy(vms);
|
||||
log("================= Deploying VM =================");
|
||||
log(res);
|
||||
log("================= Deploying VM =================");
|
||||
}
|
||||
|
||||
async function getDeployment(client: GridClient, name: string) {
|
||||
const res = await client.machines.getObj(name);
|
||||
log("================= Getting deployment information =================");
|
||||
log(res);
|
||||
log("================= Getting deployment information =================");
|
||||
}
|
||||
|
||||
async function cancel(client: GridClient, options: MachinesDeleteModel) {
|
||||
const res = await client.machines.delete(options);
|
||||
log("================= Canceling the deployment =================");
|
||||
log(res);
|
||||
log("================= Canceling the deployment =================");
|
||||
}
|
||||
|
||||
async function main() {
|
||||
const name = "newMY";
|
||||
const grid3 = await getClient(`vm/${name}`);
|
||||
|
||||
const vms: MachinesModel = {
|
||||
name,
|
||||
network: {
|
||||
name: "hellotest",
|
||||
ip_range: "10.249.0.0/16",
|
||||
myceliumSeeds: [
|
||||
{
|
||||
nodeId: 168,
|
||||
/**
|
||||
* ### Mycelium Network Seed:
|
||||
* - The `seed` is an optional field used to provide a specific seed for the Mycelium network.
|
||||
* - If not provided, the `GridClient` will generate a seed automatically when the `mycelium` flag is enabled.
|
||||
* - **Use Case:** If you need the new machine to have the same IP address as a previously deleted machine, you can reuse the old seed by setting the `myceliumSeed` field.
|
||||
*/
|
||||
seed: generateRandomHexSeed(32),
|
||||
},
|
||||
],
|
||||
},
|
||||
machines: [
|
||||
{
|
||||
name: "testvmMY",
|
||||
node_id: 168,
|
||||
disks: [
|
||||
{
|
||||
name: "wedDisk",
|
||||
size: 8,
|
||||
mountpoint: "/testdisk",
|
||||
},
|
||||
],
|
||||
public_ip: false,
|
||||
public_ip6: false,
|
||||
planetary: true,
|
||||
/**
|
||||
* ### Mycelium Flag Behavior:
|
||||
* - When the `mycelium` flag is enabled, there’s no need to manually provide the `myceliumSeed` flag.
|
||||
* - The `GridClient` will automatically generate the necessary seed for you.
|
||||
* - **However**, if you have **an existing seed** from a previously deleted machine and wish to deploy a new machine that retains the same IP address,
|
||||
* - **you can simply pass in the old seed during deployment instead of calling the `generateRandomHexSeed()` function**.
|
||||
*/
|
||||
mycelium: true,
|
||||
/**
|
||||
* ### Mycelium Seed:
|
||||
* - The `myceliumSeed` is an optional field used to provide a specific seed for the Mycelium network.
|
||||
* - If not provided, the `GridClient` will generate a seed automatically when the `mycelium` flag is enabled.
|
||||
* - **Use Case:** If you need the new machine to have the same IP address as a previously deleted machine, you can reuse the old seed by setting the `myceliumSeed` field.
|
||||
*/
|
||||
myceliumSeed: generateRandomHexSeed(3), // (HexSeed of length 6)
|
||||
cpu: 1,
|
||||
memory: 1024 * 2,
|
||||
rootfs_size: 0,
|
||||
flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist",
|
||||
entrypoint: "/sbin/zinit init",
|
||||
env: {
|
||||
SSH_KEY: config.ssh_key,
|
||||
},
|
||||
},
|
||||
],
|
||||
metadata: "",
|
||||
description: "test deploying single VM with mycelium via ts grid3 client",
|
||||
};
|
||||
|
||||
//Deploy VMs
|
||||
await deploy(grid3, vms);
|
||||
|
||||
//Get the deployment
|
||||
await getDeployment(grid3, name);
|
||||
|
||||
//Uncomment the line below to cancel the deployment
|
||||
// await cancel(grid3, { name });
|
||||
|
||||
await grid3.disconnect();
|
||||
}
|
||||
|
||||
main();
|
||||
|
||||
```
|
||||
|
||||
## Detailed Explanation
|
||||
|
||||
### What is the Mycelium Network
|
||||
|
||||
Mycelium is an IPv6 overlay network written in Rust. Each node that joins the overlay network will receive an overlay network IP in the 400::/7 range.
|
||||
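As a rough connectivity check, once Mycelium is also running on your local machine, you should be able to ping the overlay address of a deployed machine directly (the address below is a placeholder):

```
# Mycelium addresses are IPv6 overlay addresses in the 400::/7 range
ping -6 <mycelium_ip_of_the_vm>
```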
|
||||
### How to Deploy a Machine with the Mycelium Network
|
||||
|
||||
You just need to enable `mycelium`: set it to true as we did in the example above.
|
||||
|
||||
```ts
|
||||
const machines = [
|
||||
{
|
||||
// Other attrs
|
||||
mycelium: true,
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
### Mycelium Flag Behavior
|
||||
|
||||
```ts
|
||||
const machines = [
|
||||
{
|
||||
// Other attrs
|
||||
mycelium: true,
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
- When the `mycelium` flag is enabled, there’s no need to manually provide the `myceliumSeed` flag.
|
||||
- The `GridClient` will automatically generate the necessary seed for you.
|
||||
- **However**, if you have **an existing seed** from a previously deleted machine and wish to deploy a new machine that retains the same IP address,
|
||||
- **you can simply pass in the old seed during deployment instead of calling the `generateRandomHexSeed()` function**.
|
||||
|
||||
### Mycelium Machine Seed
|
||||
|
||||
```ts
|
||||
const machines = [
|
||||
{
|
||||
// Other attrs
|
||||
myceliumSeed: generateRandomHexSeed(3),
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
- The `myceliumSeed` is an optional field used to provide a specific seed for the Mycelium network.
|
||||
- If not provided, the `GridClient` will generate a seed automatically when the `mycelium` flag is enabled.
|
||||
- **Use Case:** If you need the new machine to have the same IP address as a previously deleted machine, you can reuse the old seed by setting the `myceliumSeed` field.
|
||||
|
||||
### Mycelium Network Seed
|
||||
|
||||
```ts
|
||||
const network = {
|
||||
// Other attrs
|
||||
myceliumSeeds: [
|
||||
{
|
||||
nodeId: 1,
|
||||
seed: generateRandomHexSeed(32),
|
||||
}
|
||||
],
|
||||
}
|
||||
```
|
||||
|
||||
- The `seed` is an optional field used to provide a specific seed for the Mycelium network.
|
||||
- If not provided, the `GridClient` will generate a seed automatically when the `mycelium` flag is enabled.
|
||||
- **Use Case:** If you need the new machine to have the same IP address as a previously deleted machine, you can reuse the old seed by setting the `myceliumSeed` field.
|
@ -5,6 +5,7 @@
|
||||
- [Introduction](#introduction)
|
||||
- [1. Booting the 3Node with Zero-OS](#1-booting-the-3node-with-zero-os)
|
||||
- [2. Check the 3Node Status Online](#2-check-the-3node-status-online)
|
||||
- [Check Node Health](#check-node-health)
|
||||
- [3. Receive the Farming Rewards](#3-receive-the-farming-rewards)
|
||||
- [Advanced Booting Methods (Optional)](#advanced-booting-methods-optional)
|
||||
- [PXE Booting with OPNsense](#pxe-booting-with-opnsense)
|
||||
@ -42,6 +43,9 @@ You can use the ThreeFold [Node Finder](node_finder.md) to verify that your 3Nod
|
||||
* [ThreeFold Dev Net Dashboard](https://dashboard.dev.grid.tf/)
|
||||
* [ThreeFold QA Net Dashboard](https://dashboard.qa.grid.tf/)
|
||||
|
||||
### Check Node Health
|
||||
|
||||
It is also possible to check the node health via the Node Finder. Read [this section](dashboard@@node_finder) for more information.
|
||||
|
||||
## 3. Receive the Farming Rewards
|
||||
|
||||
|
@ -0,0 +1,8 @@
|
||||
# Advanced Methods
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [SSH with PuTTY](ssh_putty.md)
|
||||
- [SSH with WSL](ssh_wsl.md)
|
||||
- [WireGuard](ssh_wireguard.md)
|
||||
- [Planetary Network](planetarynetwork.md)
|
@ -0,0 +1,224 @@
|
||||
|
||||
<h1> Planetary Network </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Install](#install)
|
||||
- [Run](#run)
|
||||
- [Linux](#linux)
|
||||
- [MacOS](#macos)
|
||||
- [Test Connectivity](#test-connectivity)
|
||||
- [Firewalls](#firewalls)
|
||||
- [Linux](#linux-1)
|
||||
- [MacOS](#macos-1)
|
||||
- [Get Yggdrasil IP](#get-yggdrasil-ip)
|
||||
- [Add Peers](#add-peers)
|
||||
- [Peers](#peers)
|
||||
- [Central Europe](#central-europe)
|
||||
- [Ghent](#ghent)
|
||||
- [Austria](#austria)
|
||||
- [Planetary Network Clients](#planetary-network-clients)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
To get started, you first need to launch the Planetary Network by running [Yggdrasil](https://yggdrasil-network.github.io) from the command line.
|
||||
|
||||
Yggdrasil is an implementation of a fully end-to-end encrypted IPv6 network. It is lightweight, self-arranging, supported on multiple platforms, and allows pretty much any IPv6-capable application to communicate securely with other nodes on the network. Yggdrasil does not require you to have IPv6 Internet connectivity - it also works over IPv4.
|
||||
|
||||
## Install
|
||||
|
||||
Yggdrasil is necessary for communication between your local machine and the nodes on the Grid that you deploy to. Binaries and packages are available for all major operating systems, or it can be built from source. Find installation instructions on the [Yggdrasil installation page](https://yggdrasil-network.github.io/installation.html).
|
||||
|
||||
After installation, you'll need to add at least one publicly available peer to your Yggdrasil configuration file. By default on Unix based systems, you'll find the file at `/etc/yggdrasil.conf`. To find peers, check [this site](https://publicpeers.neilalexander.dev/), which compiles and displays the peer information available on GitHub.
|
||||
|
||||
Add peers to your configuration file like so:
|
||||
|
||||
```
|
||||
Peers: ["PEER_URL:PORT", "PEER_URL:PORT", ...]
|
||||
```
|
||||
|
||||
Please consult the [Yggdrasil installation page](https://yggdrasil-network.github.io/installation.html) for more information and clients.
|
||||
|
||||
## Run
|
||||
|
||||
### Linux
|
||||
|
||||
On Linux with `systemd`, Yggdrasil can be started and enabled as a service, or run manually from the command line:
|
||||
|
||||
```
|
||||
sudo yggdrasil -useconffile /etc/yggdrasil.conf
|
||||
```
|
||||
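Alternatively, if your distribution's Yggdrasil package ships a systemd unit (the unit name `yggdrasil` is assumed here), you can enable it as a service so it starts on boot:

```
# Enable and start Yggdrasil as a service
sudo systemctl enable --now yggdrasil

# Check that the service is running
sudo systemctl status yggdrasil
```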
|
||||
Get your IPv6 address with the following command:
|
||||
|
||||
```
|
||||
yggdrasilctl getSelf
|
||||
```
|
||||
|
||||
### MacOS
|
||||
|
||||
The MacOS package will automatically install and start the `launchd` service. After adding peers to your config file, restart Yggdrasil by stopping the service (it will be restarted automatically):
|
||||
|
||||
```
|
||||
sudo launchctl stop yggdrasil
|
||||
```
|
||||
|
||||
Get your IPv6 address with the following command:
|
||||
|
||||
```
|
||||
sudo yggdrasilctl getSelf
|
||||
```
|
||||
|
||||
## Test Connectivity
|
||||
|
||||
To ensure that you have successfully connected to the Yggdrasil network, try loading the site in your browser:
|
||||
|
||||
```
|
||||
http://[319:3cf0:dd1d:47b9:20c:29ff:fe2c:39be]/
|
||||
```
|
||||
|
||||
## Firewalls
|
||||
|
||||
Creating deployments on the Grid also requires that nodes can reach your machine as well. This means that a local firewall preventing inbound connections will cause deployments to fail.
|
||||
|
||||
### Linux
|
||||
|
||||
On systems using `iptables`, check:
|
||||
```
|
||||
sudo ip6tables -S INPUT
|
||||
```
|
||||
|
||||
If the first line is `-P INPUT DROP`, then all inbound connections over IPv6 will be blocked. To open inbound connections, run:
|
||||
|
||||
```
|
||||
sudo ip6tables -P INPUT ACCEPT
|
||||
```
|
||||
|
||||
To make this persist after a reboot, run:
|
||||
|
||||
```
|
||||
sudo ip6tables-save
|
||||
```
|
||||
|
||||
If you'd rather close the firewall again after you're done, use:
|
||||
|
||||
```
|
||||
sudo ip6tables -P INPUT DROP
|
||||
```
|
||||
|
||||
### MacOS
|
||||
|
||||
The MacOS system firewall is disabled by default. You can check your firewall settings according to instructions here.
|
||||
|
||||
## Get Yggdrasil IP
|
||||
|
||||
Once Yggdrasil is installed, you can find your Yggdrasil IP address using this command on both Linux and Mac:
|
||||
|
||||
```
|
||||
yggdrasil -useconffile /etc/yggdrasil.conf -address
|
||||
```
|
||||
|
||||
You'll need this address when registering your twin on TFChain later.
|
||||
|
||||
|
||||
## Add Peers
|
||||
|
||||
|
||||
- Add the needed [peers](https://publicpeers.neilalexander.dev/) to the generated config file, under `Peers`.
|
||||
|
||||
**example**:
|
||||
```
|
||||
Peers:
|
||||
[
|
||||
tls://54.37.137.221:11129
|
||||
]
|
||||
```
|
||||
- Restart Yggdrasil with:
|
||||
|
||||
systemctl restart yggdrasil
|
||||
|
||||
## Peers
|
||||
|
||||
### Central Europe
|
||||
|
||||
#### Ghent
|
||||
|
||||
- tcp://gent01.grid.tf:9943
|
||||
- tcp://gent02.grid.tf:9943
|
||||
- tcp://gent03.grid.tf:9943
|
||||
- tcp://gent04.grid.tf:9943
|
||||
- tcp://gent01.test.grid.tf:9943
|
||||
- tcp://gent02.test.grid.tf:9943
|
||||
- tcp://gent01.dev.grid.tf:9943
|
||||
- tcp://gent02.dev.grid.tf:9943
|
||||
|
||||
### Austria
|
||||
|
||||
- tcp://gw291.vienna1.greenedgecloud.com:9943
|
||||
- tcp://gw293.vienna1.greenedgecloud.com:9943
|
||||
- tcp://gw294.vienna1.greenedgecloud.com:9943
|
||||
- tcp://gw297.vienna1.greenedgecloud.com:9943
|
||||
- tcp://gw298.vienna1.greenedgecloud.com:9943
|
||||
- tcp://gw299.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw300.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw304.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw306.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw307.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw309.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw313.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw324.salzburg1.greenedgecloud.com:9943
|
||||
- tcp://gw326.salzburg1.greenedgecloud.com:9943
|
||||
- tcp://gw327.salzburg1.greenedgecloud.com:9943
|
||||
- tcp://gw328.salzburg1.greenedgecloud.com:9943
|
||||
- tcp://gw330.salzburg1.greenedgecloud.com:9943
|
||||
- tcp://gw331.salzburg1.greenedgecloud.com:9943
|
||||
- tcp://gw333.salzburg1.greenedgecloud.com:9943
|
||||
- tcp://gw422.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw423.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw424.vienna2.greenedgecloud.com:9943
|
||||
- tcp://gw425.vienna2.greenedgecloud.com:9943
|
||||
|
||||
## Planetary Network Clients
|
||||
|
||||
```
|
||||
Peers:
|
||||
[
|
||||
# Threefold Lochrist
|
||||
tcp://gent01.grid.tf:9943
|
||||
tcp://gent02.grid.tf:9943
|
||||
tcp://gent03.grid.tf:9943
|
||||
tcp://gent04.grid.tf:9943
|
||||
tcp://gent01.test.grid.tf:9943
|
||||
tcp://gent02.test.grid.tf:9943
|
||||
tcp://gent01.dev.grid.tf:9943
|
||||
tcp://gent02.dev.grid.tf:9943
|
||||
# GreenEdge
|
||||
tcp://gw291.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw293.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw294.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw297.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw298.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw299.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw300.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw304.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw306.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw307.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw309.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw313.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw324.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw326.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw327.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw328.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw330.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw331.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw333.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw422.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw423.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw424.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw425.vienna2.greenedgecloud.com:9943
|
||||
]
|
||||
```
|
||||
|
@ -1,4 +1,4 @@
|
||||
<h1> WireGuard Access </h1>
|
||||
<h1> WireGuard </h1>
|
||||
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
@ -11,7 +11,6 @@
|
||||
- [Windows](#windows)
|
||||
- [Test the WireGuard Connection](#test-the-wireguard-connection)
|
||||
- [SSH into the Deployment with Wireguard](#ssh-into-the-deployment-with-wireguard)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
|
||||
***
|
||||
|
||||
@ -19,15 +18,14 @@
|
||||
|
||||
In this Threefold Guide, we show how to set up [WireGuard](https://www.wireguard.com/) to access a 3Node deployment with an SSH connection.
|
||||
|
||||
Note that WireGuard provides the connection to the 3Node deployment. It is up to you to decide which SSH client you want to use. This means that the steps to SSH into a 3Node deployment will be similar to the steps proposed in the guides for [Open-SSH](ssh_openssh.md), [PuTTy](ssh_putty.md) and [WSL](ssh_wsl.md). Please refer to [this documentation](ssh_guide.md) if you have any questions concerning SSH clients. The main difference will be that we connect to the 3Node deployment using a WireGuard connection instead of an IPv4 or a Planetary Network connection.
|
||||
|
||||
Note that WireGuard provides the connection to the 3Node deployment. It is up to you to decide which SSH client you want to use. This means that the steps to SSH into a 3Node deployment will be similar to the steps proposed in the guides for [Open-SSH](ssh_openssh.md), [PuTTy](ssh_putty.md) and [WSL](ssh_wsl.md). The main difference will be that we connect to the 3Node deployment using a WireGuard connection instead of an IPv4 or a [Mycelium](mycelium_toc.md) connection.
|
||||
|
||||
|
||||
# Prerequisites
|
||||
|
||||
Make sure to [read the introduction](tfgrid3_getstarted.md#get-started---your-first-deployment) before going further.
|
||||
|
||||
* SSH client of your choice
|
||||
* SSH client of your choice:
|
||||
* [Open-SSH](ssh_openssh.md)
|
||||
* [PuTTy](ssh_putty.md)
|
||||
* [WSL](ssh_wsl.md)
|
||||
@ -36,7 +34,7 @@ Make sure to [read the introduction](tfgrid3_getstarted.md#get-started---your-fi
|
||||
|
||||
# Deploy a Weblet with WireGuard Access
|
||||
|
||||
For this guide on WireGuard access, we deploy a [Full VM](dashboard@fullvm). Note that the whole process is similar with other types of ThreeFold weblets on the Dashboard.
|
||||
For this guide on WireGuard access, we deploy a [Full VM](dashboard@@fullVm). Note that the whole process is similar with other types of ThreeFold weblets on the Dashboard.
|
||||
|
||||
* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine
|
||||
* Choose the parameters you want
|
||||
@ -69,19 +67,19 @@ To set the WireGuard connection on Linux or MAC, create a WireGuard configuratio
|
||||
|
||||
* Copy the content **WireGuard Config** from the Dashboard **Details** window
|
||||
* Paste the content to a file with the extension `.conf` (e.g. **wg.conf**) in the directory `/etc/wireguard`
|
||||
```
|
||||
* ```
|
||||
sudo nano /etc/wireguard/wg.conf
|
||||
```
|
||||
* Start WireGuard with the command **wg-quick** and, as a parameter, pass the configuration file without the extension (e.g. *wg.conf -> wg*)
|
||||
```
|
||||
* ```
|
||||
wg-quick up wg
|
||||
```
|
||||
* Note that you can also specify a config file by path, stored in any location
|
||||
```
|
||||
* ```
|
||||
wg-quick up /etc/wireguard/wg.conf
|
||||
```
|
||||
* If you want to stop the WireGuard service, you can write the following in the terminal
|
||||
```
|
||||
* ```
|
||||
wg-quick down wg
|
||||
```
|
||||
|
||||
@ -105,7 +103,7 @@ To set the WireGuard connection on Windows, add and activate a tunnel with the W
|
||||
As a test, you can [ping](cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VM to make sure the WireGuard connection is properly established. Make sure to replace `VM_WireGuard_IP` with the proper WireGuard IP address:
|
||||
|
||||
* Ping the deployment
|
||||
```
|
||||
* ```
|
||||
ping VM_WireGuard_IP
|
||||
```
|
||||
|
||||
@ -116,14 +114,8 @@ As a test, you can [ping](cli_scripts_basics.md#test-the-network-connectivity-of
|
||||
To SSH into the deployment with Wireguard, use the **WireGuard IP** shown in the Dashboard **Details** window.
|
||||
|
||||
* SSH into the deployment
|
||||
```
|
||||
* ```
|
||||
ssh root@VM_WireGuard_IP
|
||||
```
|
||||
|
||||
You now have access to the deployment over a WireGuard SSH connection.
|
||||
|
||||
|
||||
|
||||
# Questions and Feedback
|
||||
|
||||
If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/) or by reaching out to the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
|
@ -5,6 +5,4 @@ SSH is a secure protocol used as the primary means of connecting to Linux server
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [SSH with OpenSSH](ssh_openssh.md)
|
||||
- [SSH with PuTTY](ssh_putty.md)
|
||||
- [SSH with WSL](ssh_wsl.md)
|
||||
- [WireGuard Access](ssh_wireguard.md)
|
||||
- [Advanced Methods](advanced_methods.md)
|
@ -3,195 +3,104 @@
|
||||
<h2> Table of Contents </h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Main Steps and Prerequisites](#main-steps-and-prerequisites)
|
||||
- [Step-by-Step Process with OpenSSH](#step-by-step-process-with-openssh)
|
||||
- [Linux](#linux)
|
||||
- [SSH into a 3Node with IPv4 on Linux](#ssh-into-a-3node-with-ipv4-on-linux)
|
||||
- [SSH into a 3Node with the Planetary Network on Linux](#ssh-into-a-3node-with-the-planetary-network-on-linux)
|
||||
- [MAC](#mac)
|
||||
- [SSH into a 3Node with IPv4 on MAC](#ssh-into-a-3node-with-ipv4-on-mac)
|
||||
- [SSH into a 3Node with the Planetary Network on MAC](#ssh-into-a-3node-with-the-planetary-network-on-mac)
|
||||
- [Windows](#windows)
|
||||
- [SSH into a 3Node with IPv4 on Windows](#ssh-into-a-3node-with-ipv4-on-windows)
|
||||
- [SSH into a 3Node with the Planetary Network on Windows](#ssh-into-a-3node-with-the-planetary-network-on-windows)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
- [Overview](#overview)
|
||||
- [Linux](#linux)
|
||||
- [MacOS](#macos)
|
||||
- [Windows](#windows)
|
||||
|
||||
***
|
||||
|
||||
# Introduction
|
||||
## Introduction
|
||||
|
||||
In this Threefold Guide, we show how easy it is to deploy a full virtual machine (VM) and SSH into a 3Node with [OpenSSH](https://www.openssh.com/) on Linux, MAC and Windows with both an IPv4 and a Planetary Network connection. To connect to the 3Node with WireGuard, read [this documentation](ssh_wireguard.md).
|
||||
In this Threefold Guide, we show how easy it is to deploy a full virtual machine (VM) and SSH into a 3Node with [OpenSSH](https://www.openssh.com/) on Linux, MacOS and Windows with either an IPv4 or a Mycelium connection.
|
||||
|
||||
To deploy different workloads, the SSH connection process should be very similar.
|
||||
|
||||
If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/).
|
||||
## Overview
|
||||
|
||||
|
||||
# Main Steps and Prerequisites
|
||||
|
||||
Make sure to [read the introduction](../tfgrid3_getstarted.md#get-started---your-first-deployment) before going further.
|
||||
Make sure to [read the introduction](tfgrid3_getstarted.md#get-started---your-first-deployment) before going further.
|
||||
|
||||
The main steps for the whole process are the following:
|
||||
|
||||
* Create an SSH Key pair
|
||||
* Deploy a 3Node
|
||||
* Choose IPv4 or the Planetary Network
|
||||
* Create an SSH key pair
|
||||
* Deploy a VM on a 3Node
|
||||
* SSH into the 3Node
|
||||
* For the Planetary Network, download the Planetary Network Connector
|
||||
|
||||
|
||||
|
||||
# Step-by-Step Process with OpenSSH
|
||||
|
||||
## Linux
|
||||
|
||||
### SSH into a 3Node with IPv4 on Linux
|
||||
Here are the steps to SSH into a 3Node with either IPv4 or Mycelium on Linux.
|
||||
|
||||
Here are the steps to SSH into a 3Node with IPv4 on Linux.
|
||||
If you are using Mycelium, make sure to [read this section](mycelium_toc.md).
|
||||
|
||||
* To create the SSH key pair, write in the terminal
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
* To see the public key, write in the terminal
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
* To deploy a full VM
|
||||
* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine
|
||||
* Choose the parameters you want
|
||||
* Minimum CPU: 1 vCore
|
||||
* Minimum Memory: 512 Mb
|
||||
* Minimum Disk Size: 15 Gb
|
||||
* Select IPv4 in `Network`
|
||||
* Select `IPv4` or `Mycelium` in `Network`
|
||||
* In `Node Selection`, click on `Load Nodes`
|
||||
* Click `Deploy`
|
||||
* To SSH into the VM once the 3Node is deployed
|
||||
* Copy the IPv4 address
|
||||
* Copy the IP address
|
||||
* Open the terminal, write the following with the deployment address and write **yes** to confirm
|
||||
```
|
||||
ssh root@IPv4_address
|
||||
```
|
||||
```
|
||||
ssh root@IP_address
|
||||
```
|
||||
|
||||
You now have an SSH connection on Linux with IPv4.
|
||||
You now have an SSH connection on Linux.
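
If you prefer a dedicated key for your grid deployments instead of the default `id_rsa`, the same flow works with a named key. This is only an optional variation: the filename and comment below are arbitrary, and a Mycelium IPv6 address can be used in place of the IPv4 address in the same way.

```
# Generate a dedicated ed25519 key pair and print the public key to paste into the Dashboard
ssh-keygen -t ed25519 -f ~/.ssh/tfgrid_vm -C "tfgrid-vm"
cat ~/.ssh/tfgrid_vm.pub

# Once the VM is deployed, connect with that key
ssh -i ~/.ssh/tfgrid_vm root@IP_address
```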
|
||||
|
||||
## MacOS
|
||||
|
||||
Here are the steps to SSH into a 3Node with either IPv4 or Mycelium on MacOS.
|
||||
|
||||
### SSH into a 3Node with the Planetary Network on Linux
|
||||
|
||||
Here are the steps to SSH into a 3Node with the Planetary Network on Linux.
|
||||
|
||||
* Set a [Planetary Network connection](planetarynetwork.md)
|
||||
* To create the SSH key pair, write in the terminal
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
* To see the public key, write in the terminal
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
* To deploy a full VM
|
||||
* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine
|
||||
* Choose the parameters you want
|
||||
* Minimum CPU: 1 vCore
|
||||
* Minimum Memory: 512 Mb
|
||||
* Minimum Disk Size: 15 Gb
|
||||
* Select Planetary Network in `Network`
|
||||
* In `Node Selection`, click on `Load Nodes`
|
||||
* Click `Deploy`
|
||||
* To SSH into the VM once the 3Node is deployed
|
||||
* Copy the Planetary Network address
|
||||
* Open the terminal, write the following with the deployment address and write **yes** to confirm
|
||||
```
|
||||
ssh root@planetary_network_address
|
||||
```
|
||||
|
||||
You now have an SSH connection on Linux with the Planetary Network.
|
||||
|
||||
|
||||
|
||||
## MAC
|
||||
|
||||
### SSH into a 3Node with IPv4 on MAC
|
||||
|
||||
Here are the steps to SSH into a 3Node with IPv4 on MAC.
|
||||
If you are using Mycelium, make sure to [read this section](mycelium_toc.md).
|
||||
|
||||
* To create the SSH key pair, in the terminal write
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
* To see the public key, write in the terminal
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
* To deploy a full VM
|
||||
* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine
|
||||
* Choose the parameters you want
|
||||
* Minimum CPU: 1 vCore
|
||||
* Minimum Memory: 512 Mb
|
||||
* Minimum Disk Size: 15 Gb
|
||||
* Select IPv4 in `Network`
|
||||
* Select `IPv4` or `Mycelium` in `Network`
|
||||
* In `Node Selection`, click on `Load Nodes`
|
||||
* Click `Deploy`
|
||||
* To SSH into the VM once the 3Node is deployed
|
||||
* Copy the IPv4 address
|
||||
* Copy the IP address
|
||||
* Open the terminal, write the following with the deployment address and write **yes** to confirm
|
||||
```
|
||||
ssh root@IPv4_address
|
||||
```
|
||||
|
||||
You now have an SSH connection on MAC with IPv4.
|
||||
|
||||
|
||||
|
||||
### SSH into a 3Node with the Planetary Network on MAC
|
||||
|
||||
Here are the steps to SSH into a 3Node with the Planetary Network on MAC.
|
||||
|
||||
* Set a [Planetary Network connection](planetarynetwork.md)
|
||||
* To create the SSH key pair, write in the terminal
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
* To see the public key, write in the terminal
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
* To deploy a full VM
|
||||
* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine
|
||||
* Choose the parameters you want
|
||||
* Minimum CPU: 1 vCore
|
||||
* Minimum Memory: 512 Mb
|
||||
* Minimum Disk Size: 15 Gb
|
||||
* Select Planetary Network in `Network`
|
||||
* In `Node Selection`, click on `Load Nodes`
|
||||
* Click `Deploy`
|
||||
* To SSH into the VM once the 3Node is deployed
|
||||
* Copy the Planetary Network address
|
||||
* Open the terminal, write the following with the deployment address and write **yes** to confirm
|
||||
```
|
||||
ssh root@planetary_network_address
|
||||
```
|
||||
|
||||
You now have an SSH connection on MAC with the Planetary Network.
|
||||
|
||||
```
|
||||
ssh root@IP_address
|
||||
```
|
||||
|
||||
You now have an SSH connection on MacOS.
|
||||
|
||||
## Windows
|
||||
|
||||
### SSH into a 3Node with IPv4 on Windows
|
||||
Here are the steps to SSH into a 3Node with either IPv4 or Mycelium on Windows.
|
||||
|
||||
If you are using Mycelium, make sure to [read this section](../../mycelium/mycelium_toc.md).
|
||||
|
||||
* To download OpenSSH client and OpenSSH server
|
||||
* Open the `Settings` and select `Apps`
|
||||
@ -203,79 +112,30 @@ You now have an SSH connection on MAC with the Planetary Network.
|
||||
* Search OpenSSH
|
||||
* Install OpenSSH Client and OpenSSH Server
|
||||
* To create the SSH key pair, open `PowerShell` and write
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
* To see the public key, write in `PowerShell`
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
* To deploy a full VM
|
||||
* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine
|
||||
* Choose the parameters you want
|
||||
* Minimum CPU: 1 vCore
|
||||
* Minimum Memory: 512 Mb
|
||||
* Minimum Disk Size: 15 Gb
|
||||
* Select IPv4 in `Network`
|
||||
* Select `IPv4` or `Mycelium` in `Network`
|
||||
* In `Node Selection`, click on `Load Nodes`
|
||||
* Click `Deploy`
|
||||
* To SSH into the VM once the 3Node is deployed
|
||||
* Copy the IPv4 address
|
||||
* Copy the IP address
|
||||
* Open `PowerShell`, write the following with the deployment address and write **yes** to confirm
|
||||
```
|
||||
ssh root@IPv4_address
|
||||
```
|
||||
```
|
||||
ssh root@IP_address
|
||||
```
|
||||
|
||||
You now have an SSH connection on Window with IPv4.
|
||||
|
||||
|
||||
|
||||
### SSH into a 3Node with the Planetary Network on Windows
|
||||
|
||||
* Set a [Planetary Network connection](planetarynetwork.md)
|
||||
* To download OpenSSH client and OpenSSH server
|
||||
* Open the `Settings` and select `Apps`
|
||||
* Click `Apps & Features`
|
||||
* Click `Optional Features`
|
||||
* Verifiy if OpenSSH Client and OpenSSH Server are there
|
||||
* If not
|
||||
* Click `Add a feature`
|
||||
* Search OpenSSH
|
||||
* Install OpenSSH Client and OpenSSH Server
|
||||
* To create the SSH key pair, open `PowerShell` and write
|
||||
```
|
||||
ssh-keygen
|
||||
```
|
||||
* Save in default location
|
||||
* Write a password (optional)
|
||||
* To see the public key, write in `PowerShell`
|
||||
```
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
```
|
||||
* Select and copy the public key when needed
|
||||
* To deploy a full VM
|
||||
* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine
|
||||
* Choose the parameters you want
|
||||
* Minimum CPU: 1 vCore
|
||||
* Minimum Memory: 512 Mb
|
||||
* Minimum Disk Size: 15 Gb
|
||||
* Select Planetary Network address in `Network`
|
||||
* In `Node Selection`, click on `Load Nodes`
|
||||
* Click `Deploy`
|
||||
* To SSH into the VM once the 3Node is deployed
|
||||
* Copy the Planetary Network address
|
||||
* Open `PowerShell`, write the following with the deployment address and write **yes** to confirm
|
||||
```
|
||||
ssh root@planetary_network_address
|
||||
```
|
||||
|
||||
You now have an SSH connection on Window with the Planetary Network.
|
||||
|
||||
|
||||
|
||||
# Questions and Feedback
|
||||
|
||||
If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/).
|
||||
You now have an SSH connection on Windows.
|
@ -5,7 +5,6 @@
|
||||
- [Introduction](#introduction)
|
||||
- [Main Steps and Prerequisites](#main-steps-and-prerequisites)
|
||||
- [SSH with PuTTY on Windows](#ssh-with-putty-on-windows)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
|
||||
***
|
||||
|
||||
@ -13,23 +12,22 @@
|
||||
|
||||
In this Threefold Guide, we show how easy it is to deploy a full virtual machine (VM) and SSH into a 3Node on Windows with [PuTTY](https://www.putty.org/).
|
||||
|
||||
To deploy different workloads, the SSH connection process should be very similar.
|
||||
To deploy different workloads, the SSH connection process should be very similar.
|
||||
|
||||
If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/).
|
||||
Make sure to read the [Mycelium section](mycelium_toc.md) if you use Mycelium for the network connection.
|
||||
|
||||
|
||||
|
||||
## Main Steps and Prerequisites
|
||||
|
||||
Make sure to [read the introduction](../tfgrid3_getstarted.md#get-started---your-first-deployment) before going further.
|
||||
Make sure to [read the introduction](tfgrid3_getstarted.md#get-started---your-first-deployment) before going further.
|
||||
|
||||
The main steps for the whole process are the following:
|
||||
|
||||
* Create an SSH Key pair
|
||||
* Deploy a 3Node
|
||||
* Choose IPv4 or the Planetary Network
|
||||
* Choose IPv4 or Mycelium
|
||||
* SSH into the 3Node
|
||||
* For the Planetary Network, set a [Planetary Network connection](planetarynetwork.md)
|
||||
|
||||
|
||||
## SSH with PuTTY on Windows
|
||||
@ -54,15 +52,15 @@ Here are the main steps to SSH into a full VM using PuTTY on a Windows machine.
|
||||
* Minimum CPU: 1 vCore
|
||||
* Minimum Memory: 512 Mb
|
||||
* Minimum Disk Size: 15 Gb
|
||||
* Select IPv4 in `Network`
|
||||
* Select IPv4 or Mycelium in `Network`
|
||||
* In `Node Selection`, click on `Load Nodes`
|
||||
* Click `Deploy`
|
||||
* To SSH into the VM once the 3Node is deployed
|
||||
* Take note of the IPv4 address
|
||||
* Take note of the IP address
|
||||
* Connect to the full VM with PuTTY
|
||||
* Open PuTTY
|
||||
* Go to the section `Session`
|
||||
* Add the VM IPv4 address under `Host Name (or IP address)`
|
||||
* Add the VM address under `Host Name (or IP address)`
|
||||
* Make sure `Connection type` is set to `SSH`
|
||||
* Go to the section `Connection` -> `SSH` -> `Auth` -> `Credentials`
|
||||
* Under `Private key file for authentication`, click on `Browse...`
|
||||
@ -71,10 +69,4 @@ Here are the main steps to SSH into a full VM using PuTTY on a Windows machine.
|
||||
* In the PuTTY terminal window, enter `root` as the login parameter
|
||||
* Enter the passphrase for the private key if you set one
|
||||
|
||||
You now have an SSH connection on Windows using PuTTY.
|
||||
|
||||
|
||||
|
||||
## Questions and Feedback
|
||||
|
||||
If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/).
|
||||
You now have an SSH connection on Windows using PuTTY.
|
@ -6,7 +6,6 @@
|
||||
- [SSH Key Generation](#ssh-key-generation)
|
||||
- [Connect to Remote Host with SSH](#connect-to-remote-host-with-ssh)
|
||||
- [Enable Port 22 in Windows Firewall](#enable-port-22-in-windows-firewall)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
|
||||
***
|
||||
|
||||
@ -14,11 +13,11 @@
|
||||
|
||||
In this Threefold Guide, we show how easy it is to SSH into a 3node on Windows with [Windows Subsystem for Linux (WSL)](https://ubuntu.com/wsl).
|
||||
|
||||
If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/).
|
||||
Make sure to read the [Mycelium section](mycelium_toc.md) if you use Mycelium for the network connection.
|
||||
|
||||
## SSH Key Generation
|
||||
|
||||
Make sure SSH is installed by entering following command at the command prompt:
|
||||
Make sure SSH is installed by entering the following command at the command prompt:
|
||||
|
||||
```sh
|
||||
sudo apt install openssh-client
|
||||
@ -83,7 +82,3 @@ This is not recommend especially for portable device (Laptop, Tablets) that conn
|
||||
- under `Name`
|
||||
- Name: `SSH Server`
|
||||
- Description: `SSH Server`
|
||||
|
||||
## Questions and Feedback
|
||||
|
||||
If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/).
|
@ -1,32 +1,26 @@
|
||||
# TFGrid Manual - Get Started
|
||||
# Getting Started
|
||||
|
||||
## Get Started - Your First Deployment
|
||||
## Your First Deployment
|
||||
|
||||
It's easy to get started on the TFGrid and deploy applications.
|
||||
It's easy to get started on the TFGrid.
|
||||
|
||||
For your first deployment on the grid, we will show you how to deploy a full virtual machine running on a ThreeFold node.
|
||||
|
||||
- [Create a TFChain Account](dashboard@@wallet_connector)
|
||||
- [Get TFT](threefold_token@@buy_sell_tft)
|
||||
- [Bridge TFT to TFChain](threefold_token@@tft_bridges)
|
||||
- [Deploy an Application](dashboard@@deploy)
|
||||
- [SSH Remote Connection](ssh_guide.md)
|
||||
- [SSH with OpenSSH](ssh_openssh.md)
|
||||
- [SSH with PuTTY](ssh_putty.md)
|
||||
- [SSH with WSL](ssh_wsl.md)
|
||||
- [SSH and WireGuard](ssh_wireguard.md)
|
||||
- [Deploy and Connect to a VM](system_administrators@@ssh_openssh)
|
||||
|
||||
## Grid Platforms
|
||||
Once you're acquainted with the basics, you can explore all types of ThreeFold deployments.
|
||||
|
||||
- [TF Dashboard](dashboard/dashboard@@)
|
||||
- [TF Flist Hub](developers@@zos_hub)
|
||||
## TFGrid Deployments
|
||||
|
||||
## TFGrid Services and Resources
|
||||
From virtual machines to Kubernetes, to one-click apps to infrastructure-as-code workloads, the TFGrid provides syadmins control and flexibility.
|
||||
|
||||
- [TFGrid Services](tf_grid_services_readme.md)
|
||||
- [ThreeFold Deployments](tfgrid_deployments.md)
|
||||
|
||||
## Advanced Deployment Techniques
|
||||
## TFGrid Services
|
||||
|
||||
- [Advanced Topics](advanced.md)
|
||||
Consult the list of TFGrid services to gain an overview of the ThreeFold ecosystem.
|
||||
|
||||
***
|
||||
|
||||
If you have any question, feel free to ask for help on the [Threefold Forum](https://forum.threefold.io/c/threefold-grid-utilization/support/).
|
||||
- [TFGrid Services](tf_grid_services_readme.md)
|
@ -0,0 +1,43 @@
|
||||
<h1> TFGrid Deployments </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Dashboard UI](#dashboard-ui)
|
||||
- [Infrastructure-as-Code](#infrastructure-as-code)
|
||||
- [Command Line Interfaces](#command-line-interfaces)
|
||||
- [GPU Workloads](#gpu-workloads)
|
||||
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
There are many ways to interact with the ThreeFold Grid to deploy workloads. We present the main ones.
|
||||
|
||||
## Dashboard UI
|
||||
|
||||
We provide an easy-to-use user interface via the ThreeFold Dashboard and its many apps.
|
||||
|
||||
- [ThreeFold Dashboard](dashboard@@dashboard)
|
||||
|
||||
## Infrastructure-as-Code
|
||||
|
||||
You can deploy infrastructure-as-code with Pulumi and Terraform/OpenTofu.
|
||||
|
||||
- [Pulumi](pulumi_readme.md)
|
||||
- [Terraform](terraform_toc.md)
|
||||
|
||||
## Command Line Interfaces
|
||||
|
||||
You can use our Go and JavaScript/TypeScript command line interface tools to deploy workloads on the grid.
|
||||
|
||||
- [Go Grid Client](developers@@grid3_go_readme)
|
||||
- [TFCMD](developers@@tfcmd/tfcmd)
|
||||
- [TFRobot](developers@@tfrobot/tfrobot)
|
||||
- [TypeScript Grid Client](developers@@grid3_javascript_readme)
|
||||
|
||||
## GPU Workloads
|
||||
|
||||
There are many ways to deploy GPU workloads on the ThreeFold Grid. Check the GPU section for all the details.
|
||||
|
||||
- [GPU Support](gpu_toc.md)
|
@ -7,13 +7,13 @@
|
||||
- [QAnet](#qanet)
|
||||
- [Testnet](#testnet)
|
||||
- [Mainnet](#mainnet)
|
||||
- [Supported Planetary Network Nodes](#supported-planetary-network-nodes)
|
||||
- [General](#general)
|
||||
|
||||
***
|
||||
|
||||
## Introduction
|
||||
|
||||
On this article we have aggregated a list of all of the services running on Threefold Grid 3 infrastructure for your convenience
|
||||
Here is a list of all of the services running on Threefold Grid 3 infrastructure.
|
||||
|
||||
> Note: the usage of `dev` indicates a devnet service.
|
||||
> and usage of `test` indicates a testnet service.
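
For instance, applying this naming pattern to the mainnet Grid Dashboard listed below would give `dashboard.test.grid.tf` for testnet. This is simply an illustration of the pattern, assuming the testnet counterpart follows it:

```
# Quick reachability check of the assumed testnet Dashboard URL
curl -I https://dashboard.test.grid.tf
```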
|
||||
@ -53,43 +53,7 @@ On this article we have aggregated a list of all of the services running on Thre
|
||||
- [TFGrid Proxy](https://gridproxy.grid.tf)
|
||||
- [Grid Dashboard](https://dashboard.grid.tf)
|
||||
|
||||
### Supported Planetary Network Nodes
|
||||
## General
|
||||
|
||||
```
|
||||
Peers:
|
||||
[
|
||||
# Threefold Lochrist
|
||||
tcp://gent01.grid.tf:9943
|
||||
tcp://gent02.grid.tf:9943
|
||||
tcp://gent03.grid.tf:9943
|
||||
tcp://gent04.grid.tf:9943
|
||||
tcp://gent01.test.grid.tf:9943
|
||||
tcp://gent02.test.grid.tf:9943
|
||||
tcp://gent01.dev.grid.tf:9943
|
||||
tcp://gent02.dev.grid.tf:9943
|
||||
# GreenEdge
|
||||
tcp://gw291.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw293.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw294.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw297.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw298.vienna1.greenedgecloud.com:9943
|
||||
tcp://gw299.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw300.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw304.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw306.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw307.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw309.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw313.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw324.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw326.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw327.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw328.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw330.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw331.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw333.salzburg1.greenedgecloud.com:9943
|
||||
tcp://gw422.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw423.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw424.vienna2.greenedgecloud.com:9943
|
||||
tcp://gw425.vienna2.greenedgecloud.com:9943
|
||||
]
|
||||
```
|
||||
- [TF Flist Hub](developers@@zos_hub)
|
||||
- [TF Boot Generator](dashboard@@node_installer)
|
@ -10,8 +10,6 @@ Feel free to explore the different possibilities!
|
||||
- [Node Finder and GPU](dashboard@@node_finder)
|
||||
- [Javascript Client and GPU](developers@@grid3_javascript_gpu_support)
|
||||
- [GPU and Go](developers@@grid3_go_gpu)
|
||||
- [GPU Support](developers@@grid3_go_gpu_support)
|
||||
- [Deploy a VM with GPU](developers@@grid3_go_vm_with_gpu)
|
||||
- [TFCMD and GPU](developers@@tfcmd_vm)
|
||||
- [Terraform and GPU](terraform_gpu_support.md)
|
||||
- [Full VM and GPU](dashboard@@fullvm)
|
||||
|
@ -0,0 +1,9 @@
|
||||
<h1> Complete Guides </h1>
|
||||
|
||||
This section covers complete guides to deploy workloads on the ThreeFold Grid with Pulumi.
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Pulumi and YAML](./pulumi_yaml.md)
|
||||
- [Pulumi and Python](./pulumi_python.md)
|
||||
- [Pulumi and Go](./pulumi_go.md)
|
@ -0,0 +1,90 @@
|
||||
<h1> Pulumi Complete Go Guide</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Steps](#steps)
|
||||
- [Alternative to Make Commands](#alternative-to-make-commands)
|
||||
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
In this guide, we cover the complete steps to deploy a virtual machine on the grid with Pulumi via Go.
|
||||
|
||||
To provide a uniform deployment method, we use Docker for this guide. It is optional but will greatly facilitate the deployment as the steps will be similar for Linux, MacOS and Windows.
|
||||
|
||||
This guide is useful to get you started quickly with Pulumi on the TFGrid.
|
||||
|
||||
Once you've successfully deployed a VM, you can try all the different Go examples within the [pulumi-threefold repository](https://github.com/threefoldtech/pulumi-threefold). The examples are available in the subdirectory `/examples/go/`.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [A TFChain account](dashboard@@wallet_connector)
|
||||
- TFT in your TFChain account
|
||||
- [Buy TFT](threefold_token@@buy_sell_tft)
|
||||
- [Send TFT to TFChain](threefold_token@@tfchain_stellar_bridge)
|
||||
- [Get Docker](https://docs.docker.com/get-docker/)
|
||||
|
||||
## Steps
|
||||
|
||||
- Deploy a Docker Ubuntu container in interactive mode:
|
||||
```
|
||||
sudo docker run -it --net=host ubuntu:jammy /bin/bash
|
||||
```
|
||||
|
||||
- In Docker Ubuntu, deploy a VM with Pulumi. Make sure to add your `MNEMONIC` and `SSH_KEY` below before running the script. For this deployment we use `main` as the `NETWORK`. Change this if needed.
|
||||
|
||||
```
|
||||
# Install the prerequisites
|
||||
apt update && apt install -y curl git wget make
|
||||
|
||||
# Install Pulumi
|
||||
curl -fsSL https://get.pulumi.com | sh
|
||||
export PATH=$PATH:/root/.pulumi/bin
|
||||
|
||||
# Clone the ThreeFold Pulumi repo
|
||||
git clone https://github.com/threefoldtech/pulumi-threefold.git
|
||||
cd pulumi-threefold/examples/go/virtual_machine
|
||||
|
||||
# Prepare the Pulumi Go environment
|
||||
# Install Go
|
||||
wget https://go.dev/dl/go1.23.0.linux-amd64.tar.gz
|
||||
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.23.0.linux-amd64.tar.gz
|
||||
export PATH=$PATH:/usr/local/go/bin
|
||||
|
||||
# Export the variables
|
||||
export NETWORK="main"
|
||||
export SSH_KEY="<ADD_YOUR_SSH_PUBLIC_KEY>"
|
||||
export MNEMONIC="<ADD_YOUR_MNEMONIC>"
|
||||
|
||||
# Start Pulumi
|
||||
make run
|
||||
```
|
||||
|
||||
- You can now SSH into the deployment from your local machine terminal
|
||||
```
|
||||
ssh root@VM_IP
|
||||
```
|
||||
- To destroy the deployment, run the following line within the Docker Ubuntu terminal.
|
||||
```
|
||||
make destroy
|
||||
```
|
||||
|
||||
## Alternative to Make Commands
|
||||
|
||||
You can use direct Pulumi commands instead of the Make commands above.
|
||||
|
||||
- You can replace `make run` with:
|
||||
```
|
||||
pulumi login --local
|
||||
pulumi up
|
||||
```
|
||||
- You can replace `make destroy` with:
|
||||
```
|
||||
pulumi down
|
||||
pulumi stack rm <stack_name>
|
||||
```
|
||||
|
||||
That being said, the Make commands perform additional steps. Feel free to explore the possibilities and consult the files within the repo for more information.
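
To inspect the values exported in the `outputs` section once the deployment is up, you can also use the standard Pulumi CLI from the example directory (generic Pulumi behavior, not specific to this provider):

```
# Print all stack outputs, e.g. the node deployment ID and the VM addresses
pulumi stack output
```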
|
@ -0,0 +1,89 @@
|
||||
<h1> Pulumi Complete Python Guide</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Steps](#steps)
|
||||
- [Alternative to Make Commands](#alternative-to-make-commands)
|
||||
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
In this guide, we cover the complete steps to deploy a virtual machine on the grid with Pulumi via Python.
|
||||
|
||||
To provide a uniform deployment method, we use Docker for this guide. It is optional but will greatly facilitate the deployment as the steps will be similar for Linux, MacOS and Windows.
|
||||
|
||||
This guide is useful to get you started quickly with Pulumi on the TFGrid.
|
||||
|
||||
Once you've successfully deployed a VM, you can try all the different Python examples within the [pulumi-threefold repository](https://github.com/threefoldtech/pulumi-threefold). The examples are available in the subdirectory `/examples/python/`.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [A TFChain account](dashboard@@wallet_connector)
|
||||
- TFT in your TFChain account
|
||||
- [Buy TFT](threefold_token@@buy_sell_tft)
|
||||
- [Send TFT to TFChain](threefold_token@@tfchain_stellar_bridge)
|
||||
- [Get Docker](https://docs.docker.com/get-docker/)
|
||||
|
||||
## Steps
|
||||
|
||||
- Deploy a Docker Ubuntu container in interactive mode:
|
||||
```
|
||||
sudo docker run -it --net=host ubuntu:jammy /bin/bash
|
||||
```
|
||||
|
||||
- In Docker Ubuntu, deploy a VM with Pulumi. Make sure to add your `MNEMONIC` and `SSH_KEY` below before running the script. For this deployment we use `main` as the `NETWORK`. Change this if needed.
|
||||
```
|
||||
# Install the prerequisites
|
||||
apt update && apt install -y curl git python3 python-is-python3 python3-venv python3-pip
|
||||
|
||||
# Install Pulumi
|
||||
curl -fsSL https://get.pulumi.com | sh
|
||||
export PATH=$PATH=:/root/.pulumi/bin
|
||||
|
||||
# Clone the ThreeFold Pulumi repo
|
||||
git clone https://github.com/threefoldtech/pulumi-threefold.git
|
||||
cd pulumi-threefold
|
||||
|
||||
# Prepare the Pulumi Python environment
|
||||
cd examples/python
|
||||
python -m venv venv
|
||||
source venv/bin/activate
|
||||
cd virtual_machine
|
||||
pip install -r requirements.txt
|
||||
|
||||
# Export the variables
|
||||
export NETWORK="main"
|
||||
export SSH_KEY="<ADD_YOUR_SSH_PUBLIC_KEY>"
|
||||
export MNEMONIC="<ADD_YOUR_MNEMONIC>"
|
||||
|
||||
# Start Pulumi
|
||||
make run
|
||||
```
|
||||
- You can now SSH into the deployment from your local machine terminal
|
||||
```
|
||||
ssh root@VM_IP
|
||||
```
|
||||
- To destroy the deployment, run the following line within the Docker Ubuntu terminal.
|
||||
```
|
||||
make destroy
|
||||
```
|
||||
|
||||
## Alternative to Make Commands
|
||||
|
||||
You can use direct Pulumi commands instead of the Make commands above.
|
||||
|
||||
- You can replace `make run` with:
|
||||
```
|
||||
pulumi login --local
|
||||
pulumi up
|
||||
```
|
||||
- You can replace `make destroy` with:
|
||||
```
|
||||
pulumi down
|
||||
pulumi stack rm <stack_name>
|
||||
```
|
||||
|
||||
That being said, the Make commands perform additional steps. Feel free to explore the possibilities and consult the files within the repo for more information.
|
@ -0,0 +1,77 @@
|
||||
<h1> Pulumi Complete YAML Guide</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Steps](#steps)
|
||||
- [Alternative to Make Commands](#alternative-to-make-commands)
|
||||
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
In this guide, we cover the complete steps to deploy a virtual machine on the grid with Pulumi via a YAML file.
|
||||
|
||||
To provide a uniform deployment method, we use Docker for this guide. It is optional but will greatly facilitate the deployment as the steps will be similar for Linux, MacOS and Windows.
|
||||
|
||||
This guide is useful to get you started quickly with Pulumi on the TFGrid.
|
||||
|
||||
Once you've successfully deployed a VM, you can try all the different YAML examples within the [pulumi-threefold repository](https://github.com/threefoldtech/pulumi-threefold). The examples are available in the subdirectory `/examples/yaml/`.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [A TFChain account](dashboard@@wallet_connector)
|
||||
- TFT in your TFChain account
|
||||
- [Buy TFT](threefold_token@@buy_sell_tft)
|
||||
- [Send TFT to TFChain](threefold_token@@tfchain_stellar_bridge)
|
||||
- [Get Docker](https://docs.docker.com/get-docker/)
|
||||
|
||||
## Steps
|
||||
|
||||
- Deploy a Docker Ubuntu container in interactive mode:
|
||||
```
|
||||
sudo docker run -it --net=host ubuntu:jammy /bin/bash
|
||||
```
|
||||
- In Docker Ubuntu, deploy a VM with Pulumi. Make sure to add your `MNEMONIC` and `SSH_KEY` below before running the script. For this deployment we use `main` as the `NETWORK`. Change this if needed.
|
||||
```
|
||||
# Install the prerequisites
|
||||
apt update && apt install -y curl git make
|
||||
curl -fsSL https://get.pulumi.com | sh
|
||||
export PATH=$PATH:/root/.pulumi/bin
|
||||
git clone https://github.com/threefoldtech/pulumi-threefold.git
|
||||
cd pulumi-threefold/examples/yaml/virtual_machine
|
||||
|
||||
# Export the variables
|
||||
export NETWORK="main"
|
||||
export SSH_KEY="<ADD_YOUR_SSH_PUBLIC_KEY>"
|
||||
export MNEMONIC="<ADD_YOUR_MNEMONIC>"
|
||||
|
||||
# Start Pulumi
|
||||
make run
|
||||
```
|
||||
- You can now SSH into the deployment from your local machine terminal
|
||||
```
|
||||
ssh root@VM_IP
|
||||
```
|
||||
- To destroy the deployment, run the following line within the Docker Ubuntu terminal.
|
||||
```
|
||||
make destroy
|
||||
```
|
||||
|
||||
## Alternative to Make Commands
|
||||
|
||||
You can use direct Pulumi commands instead of the Make commands above.
|
||||
|
||||
- You can replace `make run` with:
|
||||
```
|
||||
pulumi login --local
|
||||
pulumi up
|
||||
```
|
||||
- You can replace `make destroy` with:
|
||||
```
|
||||
pulumi down
|
||||
pulumi stack rm <stack_name>
|
||||
```
|
||||
|
||||
That being said, the Make commands perform additional steps. Feel free to explore the possibilities and consult the files within the repo for more information.
|
@ -98,7 +98,7 @@ We address here how to create a [network](https://github.com/threefoldtech/pulum
|
||||
You can find the original file [here](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/yaml/network/Pulumi.yaml).
|
||||
|
||||
```yml
|
||||
name: pulumi-provider-grid
|
||||
name: pulumi-threefold
|
||||
runtime: yaml
|
||||
|
||||
plugins:
|
||||
@ -113,7 +113,7 @@ resources:
|
||||
mnemonic:
|
||||
|
||||
scheduler:
|
||||
type: grid:internal:Scheduler
|
||||
type: threefold:provider:Scheduler
|
||||
options:
|
||||
provider: ${provider}
|
||||
properties:
|
||||
@ -139,13 +139,14 @@ outputs:
|
||||
|
||||
We will now go through this file section by section to properly understand what is happening.
|
||||
|
||||
Here we set the project name, which can be anything, and the runtime, which can be code in YAML, Python, Go, etc.
|
||||
|
||||
```yml
|
||||
name: pulumi-provider-grid
|
||||
runtime: yaml
|
||||
```
|
||||
|
||||
- name is for the project name (can be anything)
|
||||
- runtime: the runtime we are using can be code in yaml, python, go, etc.
|
||||
We then start by initializing the resources. The provider that we loaded in the plugins section is also a resource with properties (for now, the main one is just the TFChain mnemonic).
|
||||
|
||||
```yml
|
||||
plugins:
|
||||
@ -160,13 +161,15 @@ Here, we define the plugins we are using within our project and their locations.
|
||||
```yml
|
||||
resources:
|
||||
provider:
|
||||
type: pulumi:providers:grid
|
||||
type: pulumi:providers:threefold
|
||||
options:
|
||||
pluginDownloadURL: github://api.github.com/threefoldtech/pulumi-threefold # optional
|
||||
properties:
|
||||
mnemonic:
|
||||
|
||||
```
|
||||
|
||||
We then start by initializing the resources. The provider which we loaded in the plugins section is also a resource that has properties (the main one now is just the mnemonic of TCHhain).
|
||||
Then, we create a scheduler `threefold:provider:Scheduler`, that does the planning for us. Instead of being too specific about node IDs, we just give it some generic information. For example, "I want to work against these data centers (farms)". As long as the necessary criteria are provided, the scheduler can be more specific in the planning and select the appropriate resources available on the TFGrid.
|
||||
|
||||
```yaml
|
||||
scheduler:
|
||||
@ -177,7 +180,7 @@ We then start by initializing the resources. The provider which we loaded in the
|
||||
farm_ids: [1]
|
||||
```
|
||||
|
||||
Then, we create a scheduler `grid:internal:Scheduler`, that does the planning for us. Instead of being too specific about node IDs, we just give it some generic information. For example, "I want to work against these data centers (farms)". As long as the necessary criteria are provided, the scheduler can be more specific in the planning and select the appropriate resources available on the TFGrid.
|
||||
Now that we have created the scheduler, we can go ahead and create the network resource `grid:internal:Network`. Please note that the network depends on the scheduler's existence; without the `dependsOn` section, the scheduler and the network would be created in parallel. We then proceed to specify the network resource properties, e.g. the name, the description, which nodes to deploy our network on, and the IP range of the network. In our case, we only choose one node.
|
||||
|
||||
```yaml
|
||||
network:
|
||||
@ -194,8 +197,6 @@ Then, we create a scheduler `grid:internal:Scheduler`, that does the planning fo
|
||||
ip_range: 10.1.0.0/16
|
||||
```
|
||||
|
||||
Now, that we created the scheduler, we can go ahead and create the network resource `grid:internal:Network`. Please note that the network depends on the scheduler's existence. If we remove it, the scheduler and the network will be created in parallel, that's why we have the `dependsOn` section. We then proceed to specify the network resource properties, e.g. the name, the description, which nodes to deploy our network on, the IP range of the network. In our case, we only choose one node.
|
||||
|
||||
To access information related to our deployment, we set the section **outputs**. This will display results that we can use, or reuse, while we develop our infrastructure further.
|
||||
|
||||
```yaml
|
||||
@ -211,7 +212,7 @@ Now, we will check an [example](https://github.com/threefoldtech/pulumi-provider
|
||||
Just like we've seen above, we will have two files `Makefile` and `Pulumi.yaml` where we describe the infrastructure.
|
||||
|
||||
```yml
|
||||
name: pulumi-provider-grid
|
||||
name: pulumi-threefold
|
||||
runtime: yaml
|
||||
|
||||
plugins:
|
||||
@ -221,12 +222,14 @@ plugins:
|
||||
|
||||
resources:
|
||||
provider:
|
||||
type: pulumi:providers:grid
|
||||
type: pulumi:providers:threefold
|
||||
options:
|
||||
pluginDownloadURL: github://api.github.com/threefoldtech/pulumi-threefold # optional
|
||||
properties:
|
||||
mnemonic: <to be filled>
|
||||
|
||||
scheduler:
|
||||
type: grid:internal:Scheduler
|
||||
type: threefold:provider:Scheduler
|
||||
options:
|
||||
provider: ${provider}
|
||||
properties:
|
||||
@ -280,11 +283,11 @@ outputs:
|
||||
planetary_ip: ${deployment.vms_computed[0].planetary_ip}
|
||||
```
|
||||
|
||||
We have a scheduler, and a network just like before. But now, we also have a deployment `grid:internal:Deployment` object that can have one or more disks and virtual machines.
|
||||
We have a scheduler and a network just like before. But now, we also have a deployment `threefold:provider:Deployment` object that can have one or more disks and virtual machines.
|
||||
|
||||
```yaml
|
||||
deployment:
|
||||
type: grid:internal:Deployment
|
||||
type: threefold:provider:Deployment
|
||||
options:
|
||||
provider: ${provider}
|
||||
dependsOn:
|
||||
@ -301,11 +304,13 @@ deployment:
|
||||
cpu: 2
|
||||
memory: 256
|
||||
planetary: true
|
||||
mycelium: true
|
||||
# mycelium_ip_seed: b60f2b7ec39c # hex encoded 6 bytes [example]
|
||||
mounts:
|
||||
- disk_name: data
|
||||
mount_point: /app
|
||||
env_vars:
|
||||
SSH_KEY: <to be filled>
|
||||
SSH_KEY:
|
||||
|
||||
disks:
|
||||
- name: data
|
||||
@ -325,7 +330,7 @@ We now see how to deploy a [Kubernetes cluster using Pulumi](https://github.com/
|
||||
```yaml
|
||||
content was removed for brevity
|
||||
kubernetes:
|
||||
type: grid:internal:Kubernetes
|
||||
type: threefold:provider:Kubernetes
|
||||
options:
|
||||
provider: ${provider}
|
||||
dependsOn:
|
||||
@ -383,7 +388,7 @@ We present here the file for a simple domain prefix.
|
||||
```yml
|
||||
content was removed for brevity
|
||||
scheduler:
|
||||
type: grid:internal:Scheduler
|
||||
type: threefold:provider:Scheduler
|
||||
options:
|
||||
provider: ${provider}
|
||||
properties:
|
||||
@ -393,7 +398,7 @@ We present here the file for a simple domain prefix.
|
||||
free_ips: 1
|
||||
|
||||
gatewayName:
|
||||
type: grid:internal:GatewayName
|
||||
type: threefold:provider:GatewayName
|
||||
options:
|
||||
provider: ${provider}
|
||||
dependsOn:
|
||||
@ -404,10 +409,6 @@ We present here the file for a simple domain prefix.
|
||||
backends:
|
||||
- "http://69.164.223.208"
|
||||
|
||||
outputs:
|
||||
node_deployment_id: ${gatewayName.node_deployment_id}
|
||||
fqdn: ${gatewayName.fqdn}
|
||||
|
||||
```
|
||||
|
||||
In this example, we create a gateway name resource `grid:internal:GatewayName` for the name `pulumi.gent01.dev.grid.tf`.
|
||||
@ -425,7 +426,7 @@ Here's an [example](https://github.com/threefoldtech/pulumi-provider-grid/blob/d
|
||||
```yml
|
||||
code removed for brevity
|
||||
gatewayFQDN:
|
||||
type: grid:internal:GatewayFQDN
|
||||
type: threefold:provider:GatewayFQDN
|
||||
options:
|
||||
provider: ${provider}
|
||||
dependsOn:
|
||||
@ -444,6 +445,4 @@ Here, we informed the gateway that any request coming for the domain `mydomain.c
|
||||
|
||||
## Conclusion
|
||||
|
||||
We covered in this guide some basic details concerning the use of the ThreeFold Pulumi plugin.
|
||||
|
||||
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
|
||||
We covered in this guide some basic details concerning the use of the ThreeFold Pulumi plugin.
|
@ -3,11 +3,11 @@
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Check All the Examples](#check-all-the-examples)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Set the Environment Variables](#set-the-environment-variables)
|
||||
- [Test the Plugin](#test-the-plugin)
|
||||
- [Destroy the Deployment](#destroy-the-deployment)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
|
||||
***
|
||||
|
||||
@ -19,6 +19,12 @@ We present here the basic steps to test the examples within the [ThreeFold Pulum
|
||||
|
||||
Please note that the Pulumi plugin for ThreeFold Grid is not yet officially published. We look forward to your feedback on this project.
|
||||
|
||||
## Check All the Examples
|
||||
|
||||
In the manual, we cover some basic examples of Pulumi deployments on the grid.
|
||||
|
||||
You can access all the Pulumi deployment examples on the ThreeFold Pulumi repository [here](https://github.com/threefoldtech/pulumi-threefold/tree/development/examples).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
There are a few things to set up before exploring Pulumi. Since we will be using the examples in the ThreeFold Pulumi repository, we must clone the repository before going further.
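
For example, cloning the repository and moving into its examples directory could look like this (the path below follows the `examples` layout of the repository mentioned above):

```
git clone https://github.com/threefoldtech/pulumi-threefold.git
cd pulumi-threefold/examples
```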
|
||||
@ -83,7 +89,3 @@ You can destroy your Pulumi deployment at any time with the following make comma
|
||||
```
|
||||
make destroy
|
||||
```
|
||||
|
||||
## Questions and Feedback
|
||||
|
||||
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
|
@ -8,13 +8,13 @@ With Pulumi, you can express your infrastructure requirements using the language
|
||||
- [Benefits of Using Pulumi](#benefits-of-using-pulumi)
|
||||
- [Declarative vs. Imperative Programming](#declarative-vs-imperative-programming)
|
||||
- [Declaration Programming Example](#declaration-programming-example)
|
||||
- [Benefits of declarative programming in IaC](#benefits-of-declarative-programming-in-iac)
|
||||
- [Benefits of Declarative Programming in IaC](#benefits-of-declarative-programming-in-iac)
|
||||
- [Concepts](#concepts)
|
||||
- [Pulumi Project](#pulumi-project)
|
||||
- [Project File](#project-file)
|
||||
- [Stacks](#stacks)
|
||||
- [Resources](#resources)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
- [Pulumi Registry](#pulumi-registry)
|
||||
|
||||
***
|
||||
|
||||
@ -22,7 +22,7 @@ With Pulumi, you can express your infrastructure requirements using the language
|
||||
|
||||
[ThreeFold Grid](https://threefold.io) is a decentralized cloud infrastructure platform that provides developers with a secure and scalable way to deploy and manage their applications. It is based on a peer-to-peer network of nodes that are distributed around the world.
|
||||
|
||||
[Pulumi](https://www.pulumi.com/) is a cloud-native infrastructure as code (IaC) platform that allows developers to manage their infrastructure using code. It supports a wide range of cloud providers, including ThreeFold Grid.
|
||||
[Pulumi](https://www.pulumi.com/) is a cloud-native infrastructure as code (IaC) platform that allows developers to manage their infrastructure using code. It supports a wide range of cloud providers, including ThreeFold Grid. Consult the official [Pulumi documentation](https://www.pulumi.com/docs/) for more information.
|
||||
|
||||
The [Pulumi plugin for ThreeFold Grid](https://github.com/threefoldtech/pulumi-provider-grid) provides developers with a way to deploy and manage their ThreeFold Grid resources using Pulumi. This means that developers can benefit from all of the features that Pulumi offers, such as cross-cloud support, type safety, preview and diff, and parallel execution (still in the works).
|
||||
|
||||
@ -55,7 +55,7 @@ Say I want an infrastructure of two virtual machines with X disks. The following
|
||||
|
||||
As you can see, the declarative code is much simpler and easier to read. It also makes it easier to make changes to your infrastructure, as you only need to change the desired state, and the IaC tool will figure out how to achieve it.
|
||||
|
||||
### Benefits of declarative programming in IaC
|
||||
### Benefits of Declarative Programming in IaC
|
||||
|
||||
There are several benefits to using declarative programming in IaC:
|
||||
|
||||
@ -125,6 +125,6 @@ resources:
|
||||
options: ...options
|
||||
```
|
||||
|
||||
## Questions and Feedback
|
||||
## Pulumi Registry
|
||||
|
||||
If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
|
||||
You can visit the Pulumi registry to access the ThreeFold package [here](https://www.pulumi.com/registry/packages/threefold/).
|
@ -9,4 +9,5 @@ In this section, we will explore the dynamic world of infrastructure as code (Ia
|
||||
- [Introduction to Pulumi](pulumi_intro.md)
|
||||
- [Installing Pulumi](pulumi_install.md)
|
||||
- [Deployment Examples](pulumi_examples.md)
|
||||
- [Deployment Details](pulumi_deployment_details.md)
|
||||
- [Deployment Details](pulumi_deployment_details.md)
|
||||
- [Complete Guides](pulumi_complete_guides_toc.md)
|
@ -9,6 +9,4 @@
|
||||
- [ZDB](./terraform_zdb.html)
|
||||
- [Zlogs](./terraform_zlogs.md)
|
||||
- [Quantum Safe Filesystem](terraform_qsfs.md)
|
||||
- [QSFS on Micro VM](terraform_qsfs_on_microvm.md)
|
||||
- [QSFS on Full VM](terraform_qsfs_on_full_vm.md)
|
||||
- [CapRover](./terraform_caprover.html)
|
||||
|
@ -1,4 +1,4 @@
|
||||
<h1>Terraform Complete Full VM Deployment</h1>
|
||||
<h1>Terraform Full VM Deployment</h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
@ -22,7 +22,7 @@ This short ThreeFold Guide will teach you how to deploy a Full VM on the TFGrid
|
||||
|
||||
The steps are very simple. You first need to create the Terraform files, the variables file and the deployment file, and then deploy the full VM. After the deployment is done, you can SSH into the full VM.
|
||||
|
||||
The main goal of this guide is to show you all the necessary steps to deploy a Full VM on the TGrid using Terraform. Once you get acquainted with this first basic deployment, you should be able to explore on your own the possibilities that the TFGrid and Terraform combined provide.
|
||||
The main goal of this guide is to show you all the necessary steps to deploy a Full VM on the TFGrid using Terraform. Once you get acquainted with this first basic deployment, you should be able to explore on your own the possibilities that the TFGrid and Terraform combined provide.
|
||||
|
||||
|
||||
|
||||
@ -71,10 +71,11 @@ We show here how to find a suitable 3Node using the ThreeFold Explorer.
|
||||
- For proper understanding, we give further information on some relevant columns:
|
||||
- `ID` refers to the node ID
|
||||
- `Free Public IPs` refers to available IPv4 public IP addresses
|
||||
- `HRU` refers to HDD storage
|
||||
- `SRU` refers to SSD storage
|
||||
- `MRU` refers to RAM (memory)
|
||||
- `CRU` refers to virtual cores (vcores)
|
||||
- Resource unit codes (consult [this page](cloud@@resource_units_calc_cloudunits) for more information)
|
||||
- `HRU` is the code for the HDD unit (storage capacity in GB)
|
||||
- `SRU` is the code for the SSD unit (storage capacity in GB)
|
||||
- `MRU` is the code for the memory unit (memory capacity in GB)
|
||||
- `CRU` is the code for the core unit (virtual cores capacity)
|
||||
- To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters:
|
||||
- At the top left of the screen, in the `Filters` box, select the parameter(s) you want.
|
||||
- For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes.
|
||||
@ -227,11 +228,9 @@ cpu = "1"
|
||||
memory = "512"
|
||||
```
|
||||
|
||||
Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the server used. Simply replace the three dots by the content.
|
||||
|
||||
We set here the minimum specs for a full VM, but you can adjust these parameters.
|
||||
|
||||
Make sure to add your own mnemonics and SSH public key. You will also need to specify the node ID of the server used. Simply replace the three dots by the content.
|
||||
|
||||
We set here the minimum specs for a full VM, but you can adjust these parameters. Here `size` is the SSD storage capacity in GB, `cpu` is the number of virtual cores and `memory` is the memory capacity in MB.
|
||||
|
||||
## Deploy the Full VM with Terraform
|
||||
|
||||
@ -275,6 +274,4 @@ Make sure that you are in the Terraform directory you created for this deploymen
|
||||
|
||||
## Conclusion
|
||||
|
||||
You now have the basic knowledge and know-how to deploy on the TFGrid using Terraform.
|
||||
|
||||
As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
|
||||
You now have the basic knowledge and know-how to deploy on the TFGrid using Terraform.
|
@ -4,11 +4,10 @@
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Install Terraform](#install-terraform)
|
||||
- [Install Terraform on Linux](#install-terraform-on-linux)
|
||||
- [Install Terraform on MAC](#install-terraform-on-mac)
|
||||
- [Install Terraform on Windows](#install-terraform-on-windows)
|
||||
- [Linux](#linux)
|
||||
- [MacOS](#macos)
|
||||
- [Windows](#windows)
|
||||
- [ThreeFold Terraform Plugin](#threefold-terraform-plugin)
|
||||
- [Questions and Feedback](#questions-and-feedback)
|
||||
|
||||
***
|
||||
|
||||
@ -22,32 +21,28 @@ You can get Terraform from the Terraform website [download page](https://www.ter
|
||||
|
||||
We cover here the basic steps for Linux, MacOS and Windows for convenience. Refer to the official Terraform documentation if needed.
|
||||
|
||||
### Install Terraform on Linux
|
||||
### Linux
|
||||
|
||||
To install Terraform on Linux, we follow the official [Terraform documentation](https://developer.hashicorp.com/terraform/downloads).
|
||||
|
||||
* [Install Terraform on Linux](../computer_it_basics/cli_scripts_basics.md#install-terraform)
|
||||
* [Install Terraform on Linux](cli_scripts_basics.md#install-terraform)
|
||||
|
||||
### Install Terraform on MAC
|
||||
### MacOS
|
||||
|
||||
To install Terraform on MacOS, install Brew and then use it to install Terraform.
|
||||
|
||||
* [Install Brew](../computer_it_basics/cli_scripts_basics.md#install-brew)
|
||||
* [Install Terraform with Brew](../computer_it_basics/cli_scripts_basics.md#install-terraform-with-brew)
|
||||
* [Install Brew](cli_scripts_basics.md#install-brew)
|
||||
* [Install Terraform with Brew](cli_scripts_basics.md#install-terraform-with-brew)
|
||||
|
||||
### Install Terraform on Windows
|
||||
### Windows
|
||||
|
||||
To install Terraform on Windows, a quick way is to first install Chocolatey and then install Terraform.
|
||||
|
||||
* [Install Chocolatey](../computer_it_basics/cli_scripts_basics.md#install-chocolatey)
|
||||
* [Install Terraform with Chocolatey](../computer_it_basics/cli_scripts_basics.md#install-terraform-with-chocolatey)
|
||||
* [Install Chocolatey](cli_scripts_basics.md#install-chocolatey)
|
||||
* [Install Terraform with Chocolatey](cli_scripts_basics.md#install-terraform-with-chocolatey)
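
Whichever platform you installed it on, you can confirm that Terraform is available from a terminal before moving on. This is a generic sanity check, nothing ThreeFold-specific:

```
terraform version
```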
|
||||
|
||||
## ThreeFold Terraform Plugin
|
||||
|
||||
The ThreeFold [Terraform plugin](https://github.com/threefoldtech/terraform-provider-grid) is supported on Linux, MacOS and Windows.
|
||||
|
||||
There's no need to specifically install the ThreeFold Terraform plugin. Terraform will automatically load it from an online directory according to instruction within the deployment file.
|
||||
|
||||
## Questions and Feedback
|
||||
|
||||
If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/) or by reaching out to the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
|
||||
There's no need to specifically install the ThreeFold Terraform plugin. Terraform will automatically load it from an online directory according to the instructions within the deployment file.
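
As a rough sketch of what those instructions look like, a deployment file declares the provider requirement and `terraform init` then fetches the plugin automatically. The provider source shown below is an assumption based on the plugin repository named above; the deployment guides in this section contain the authoritative blocks.

```
# Sketch only: declare where the ThreeFold provider comes from, then initialize
cat > main.tf <<'EOF'
terraform {
  required_providers {
    grid = {
      source = "threefoldtech/grid"
    }
  }
}
EOF

terraform init
```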
|
@ -1,16 +1,12 @@
|
||||
|
||||
|
||||
<h1> Terraform </h1>
|
||||
|
||||
Welcome to the *Terraform* section of the ThreeFold Manual!
|
||||
|
||||
In this section, we'll embark on a journey to explore the powerful capabilities of Terraform within the ThreeFold Grid ecosystem. Terraform, a cutting-edge infrastructure as code (IaC) tool, empowers you to define and provision your infrastructure efficiently and consistently.
|
||||
<h1> Introduction to Terraform </h1>
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [What is Terraform?](#what-is-terraform)
|
||||
- [Terraform on ThreeFold Grid: Unleashing Power and Simplicity](#terraform-on-threefold-grid-unleashing-power-and-simplicity)
|
||||
- [Get Started](#get-started)
|
||||
- [Deployment Examples](#deployment-examples)
|
||||
- [Features](#features)
|
||||
- [What is Not Supported](#what-is-not-supported)
|
||||
- [OpenTofu: Alternative to Terraform](#opentofu-alternative-to-terraform)
|
||||
@ -19,32 +15,36 @@ In this section, we'll embark on a journey to explore the powerful capabilities
|
||||
|
||||
## What is Terraform?
|
||||
|
||||
Terraform is an open-source tool that enables you to describe and deploy infrastructure using a declarative configuration language. With Terraform, you can define your infrastructure components, such as virtual machines, networks, and storage, in a human-readable configuration file. This file, often referred to as the Terraform script, becomes a blueprint for your entire infrastructure.
|
||||
[Terraform](https://www.terraform.io/) is an open-source tool that enables you to describe and deploy infrastructure using a declarative configuration language. With Terraform, you can define your infrastructure components, such as virtual machines, networks, and storage, in a human-readable configuration file. This file, often referred to as the Terraform script, becomes a blueprint for your entire infrastructure.
|
||||
|
||||
The beauty of Terraform lies in its ability to automate the provisioning and management of infrastructure across various cloud providers, ensuring that your deployments are reproducible and scalable. It promotes collaboration, version control, and the ability to treat your infrastructure as code, providing a unified and seamless approach to managing complex environments.
|
||||
|
||||
## Terraform on ThreeFold Grid: Unleashing Power and Simplicity

Within the ThreeFold Grid ecosystem, Terraform plays a pivotal role in streamlining the deployment and orchestration of decentralized, peer-to-peer infrastructure. Leveraging the unique capabilities of the ThreeFold Grid, you can use Terraform to define and deploy your workloads, tapping into the TFGrid decentralized architecture for unparalleled scalability, reliability, and sustainability.

This manual will guide you through the process of setting up, configuring, and managing your infrastructure on the ThreeFold Grid using Terraform. Whether you're a seasoned developer, a DevOps professional, or someone exploring the world of decentralized computing for the first time, this guide is designed to provide clear and concise instructions to help you get started.

This section of the manual will guide you through the process of setting up, configuring, and managing your infrastructure on the ThreeFold Grid using Terraform. Whether you're a seasoned developer, a DevOps professional, or someone exploring the world of decentralized computing for the first time, this guide is designed to provide clear and concise instructions to help you get started.
## Get Started

![ ](../terraform/img//terraform_works.png)

To get started, [install Terraform](./terraform_install.md) and [deploy a full VM](./terraform_full_vm.md) on the grid with Terraform.

Threefold loves Open Source! In v3.0 we are integrating one of the most popular 'Infrastructure as Code' (IaC) tools of the cloud industry, [Terraform](https://terraform.io). Utilizing the Threefold grid v3 using Terraform gives a consistent workflow and a familiar experience for everyone coming from different background. Terraform describes the state desired of how the deployment should look like instead of imperatively describing the low level details and the mechanics of how things should be glued together.

Once you're acquainted with the basics, you can explore different [Terraform deployment examples](https://github.com/threefoldtech/terraform-provider-grid/tree/development/examples).
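To give a flavor of what a deployment file can look like, here is a trimmed-down, single-VM sketch adapted from the examples published in the `terraform-provider-grid` repository. Treat the node ID, flist URL and attribute names as illustrative assumptions rather than a definitive template.

```hcl
# Illustrative sketch only -- adapted from the terraform-provider-grid
# examples; adjust node IDs, the flist and attribute names to the
# provider version you are using.

provider "grid" {
  # Credentials (e.g. your TFChain mnemonic) are configured here;
  # see the provider documentation for the exact arguments.
}

# Private network spanning the node that will host the VM
resource "grid_network" "net" {
  name        = "examplenet"
  nodes       = [1]            # hypothetical node ID
  ip_range    = "10.1.0.0/16"
  description = "example network"
}

# Deployment holding a single VM attached to the network above
resource "grid_deployment" "vm" {
  node         = 1
  network_name = grid_network.net.name

  vms {
    name       = "vm1"
    flist      = "https://hub.grid.tf/tf-official-apps/base:latest.flist"
    cpu        = 2
    memory     = 2048
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = file("~/.ssh/id_rsa.pub")
    }
  }
}
```

With a file like this in place, `terraform init` fetches the provider, `terraform apply` creates the network and the VM, and `terraform destroy` removes them again.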
## Deployment Examples

Consult the ThreeFold `terraform-provider-grid` repo for different [Terraform deployment examples](https://github.com/threefoldtech/terraform-provider-grid/tree/development/examples).

You can also read the [Resources](terraform_resources_readme.md) section for more details on different Terraform deployments.
## Features

- All basic primitives from ThreeFold grid can be deployed, which is a lot.
- Terraform can destroy a deployment
- All basic primitives from ThreeFold grid can be deployed
- Terraform can destroy deployments
- Terraform shows all the outputs (see the sketch below)
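As a hedged illustration of that last point, an `output` block in the deployment file surfaces values after `terraform apply`. The attribute path below assumes the single-VM sketch from the Get Started section and may differ between provider versions.

```hcl
# Hypothetical output exposing the private IP of the first VM defined in
# the grid_deployment.vm resource sketched earlier.
output "vm1_ip" {
  value = grid_deployment.vm.vms[0].ip
}
```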
## What is Not Supported

- we don't support updates/upgrades, if you want a change you need to destroy a deployment & re-create your deployment this in case you want to change the current running instances properties or change the node, but adding a vm to an existing deployment this shouldn't affect other running vm and same if we need to decommission a vm from a deployment this also shouldn't affect the others
- We do not support updates nor upgrades.
- If you want a change, such as changing the properties of the running instances or moving to another node, you need to destroy the deployment and re-create it.
- Adding a VM to an existing deployment or decommissioning a VM from a deployment shouldn't affect the other running VMs.
## OpenTofu: Alternative to Terraform

[OpenTofu](https://opentofu.org/) is a fully open-source Terraform fork that is backward compatible with all prior versions of Terraform up to version 1.6. This alternative can be used instead of Terraform for the following sections. You might need to make changes depending on the version you are working with. Check the [OpenTofu Docs](https://opentofu.org/docs/) for more information.
@ -2,37 +2,12 @@
|
||||
|
||||
<h2>Table of Contents</h2>
|
||||
|
||||
- [Overview](terraform_readme.md)
|
||||
- [Introduction to Terraform](terraform_readme.md)
|
||||
- [Installing Terraform](terraform_install.md)
|
||||
- [Terraform Basics](terraform_basics.md)
|
||||
- [Full VM Deployment](terraform_full_vm.md)
|
||||
- [GPU Support](terraform_gpu_support.md)
|
||||
- [Resources](terraform_resources_readme.md)
|
||||
- [Using Scheduler](terraform_scheduler.md)
|
||||
- [Virtual Machine](terraform_vm.md)
|
||||
- [Web Gateway](terraform_vm_gateway.md)
|
||||
- [Kubernetes Cluster](terraform_k8s.md)
|
||||
- [ZDB](terraform_zdb.md)
|
||||
- [Zlogs](terraform_zlogs.md)
|
||||
- [Quantum Safe Filesystem](terraform_qsfs.md)
|
||||
- [QSFS on Micro VM](terraform_qsfs_on_microvm.md)
|
||||
- [QSFS on Full VM](terraform_qsfs_on_full_vm.md)
|
||||
- [CapRover](terraform_caprover.md)
|
||||
- [QSFS on Micro VM](terraform_qsfs_on_microvm.md)
|
||||
- [QSFS on Full VM](terraform_qsfs_on_full_vm.md)
|
||||
- [CapRover](terraform_caprover.md)
|
||||
- [Advanced](terraform_advanced_readme.md)
|
||||
- [Terraform Provider](terraform_provider.md)
|
||||
- [Terraform Provisioners](terraform_provisioners.md)
|
||||
- [Mounts](terraform_mounts.md)
|
||||
- [Capacity Planning](terraform_capacity_planning.md)
|
||||
- [Updates](terraform_updates.md)
|
||||
- [SSH Connection with Wireguard](terraform_wireguard_ssh.md)
|
||||
- [Set a Wireguard VPN](terraform_wireguard_vpn.md)
|
||||
- [Synced MariaDB Databases](terraform_mariadb_synced_databases.md)
|
||||
- [Nomad](terraform_nomad.md)
|
||||
- [Nextcloud Deployments](terraform_nextcloud_toc.md)
|
||||
- [Nextcloud All-in-One Deployment](terraform_nextcloud_aio.md)
|
||||
- [Nextcloud Single Deployment](terraform_nextcloud_single.md)
|
||||
- [Nextcloud Redundant Deployment](terraform_nextcloud_redundant.md)
|
||||
- [Nextcloud 2-Node VPN Deployment](terraform_nextcloud_vpn.md)
|
||||
- [Advanced](terraform_advanced_readme.md)
|