restructured manual

@@ -8,13 +8,7 @@ In this section, we delve into sophisticated topics and powerful functionalities

- [Cancel Contracts](./cancel_contracts.md)
- [Contract Bills Reports](./contract_bill_report.md)
- [Listing Free Public IPs](./list_public_ips.md)
- [Cloud Console](./cloud_console.md)
- [Redis](./grid3_redis.md)
- [IPFS](./ipfs/ipfs_toc.md)
- [IPFS on a Full VM](./ipfs/ipfs_fullvm.md)
- [IPFS on a Micro VM](./ipfs/ipfs_microvm.md)
- [Hummingbot](./hummingbot.md)
- [AI & ML Workloads](./ai_ml_workloads.md)
- [Ecommerce](./ecommerce/ecommerce.md)
- [WooCommerce](./ecommerce/woocommerce.md)
- [nopCommerce](./ecommerce/nopcommerce.md)

@@ -1,125 +0,0 @@

<h1> AI & ML Workloads </h1>

<h2> Table of Contents </h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Prepare the System](#prepare-the-system)
- [Install the GPU Driver](#install-the-gpu-driver)
- [Set a Python Virtual Environment](#set-a-python-virtual-environment)
- [Install PyTorch and Test CUDA](#install-pytorch-and-test-cuda)
- [Set and Access Jupyter Notebook](#set-and-access-jupyter-notebook)
- [Run AI/ML Workloads](#run-aiml-workloads)

***

## Introduction

We present a basic method to deploy artificial intelligence (AI) and machine learning (ML) workloads on the TFGrid. For this, we make use of dedicated nodes and GPU support.

In the first part, we show the steps to install the Nvidia driver of a GPU card on a full VM running Ubuntu 22.04 on the TFGrid.

In the second part, we show how to use PyTorch to run AI/ML tasks.

## Prerequisites

You need to reserve a [dedicated GPU node](../../dashboard/deploy/node_finder.md#dedicated-nodes) on the ThreeFold Grid.

## Prepare the System

- Update the system
```
dpkg --add-architecture i386
apt-get update
apt-get dist-upgrade
reboot
```
- Check the GPU info
```
lspci | grep VGA
lshw -c video
```

## Install the GPU Driver

- Download the latest Nvidia driver
- Check which driver is recommended
```
apt install ubuntu-drivers-common
ubuntu-drivers devices
```
- Install the recommended driver (e.g. with 535)
```
apt install nvidia-driver-535
```
- Reboot and reconnect to the VM
- Check the GPU status
```
nvidia-smi
```

Now that the GPU node is set, let's work on setting up PyTorch to run AI/ML workloads.

## Set a Python Virtual Environment

Before installing Python packages with pip, you should create a virtual environment.

- Install the prerequisites
```
apt update
apt install python3-pip python3-dev
pip3 install --upgrade pip
pip3 install virtualenv
```
- Create a virtual environment
```
mkdir ~/python_project
cd ~/python_project
virtualenv python_project_env
source python_project_env/bin/activate
```

## Install PyTorch and Test CUDA

Once you've created and activated a virtual environment for Python, you can install different Python packages.

- Install PyTorch and upgrade NumPy
```
pip3 install torch
pip3 install numpy --upgrade
```

Before going further, you can check if CUDA is properly installed on your machine.

- Check that CUDA is available in Python with PyTorch by running the following lines:
```
import torch
torch.cuda.is_available()
torch.cuda.device_count()   # the output should be 1
torch.cuda.current_device() # the output should be 0
torch.cuda.device(0)
torch.cuda.get_device_name(0)
```

## Set and Access Jupyter Notebook

You can run Jupyter Notebook on the remote VM and access it in your local browser.

- Install Jupyter Notebook
```
pip3 install notebook
```
- Run Jupyter Notebook in no-browser mode and take note of the URL and the token
```
jupyter notebook --no-browser --port=8080 --ip=0.0.0.0
```
- On your local machine, copy and paste the given URL in a browser, but make sure to replace `127.0.0.1` with the WireGuard IP (here it is `10.20.4.2`) and to set the correct token (an SSH tunnel alternative is shown after this list).
```
http://10.20.4.2:8080/tree?token=<insert_token>
```
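
If you prefer not to use WireGuard, an SSH tunnel also works. This is a sketch assuming you can already SSH into the VM and that Jupyter listens on port 8080 as set above:

```
# forward local port 8080 to the Jupyter port on the VM
ssh -4 -L 8080:127.0.0.1:8080 root@VM_IPv4_address

# then browse to http://127.0.0.1:8080/tree?token=<insert_token> on your local machine
```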

## Run AI/ML Workloads

After following the steps above, you should now be able to run Python code that makes use of your GPU node to compute AI and ML workloads.

Feel free to explore different ways to use this feature. For example, the [HuggingFace course](https://huggingface.co/learn/nlp-course/chapter1/1) on natural language processing is a good introduction to machine learning.
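As a quick smoke test, you can run a small computation directly on the GPU. This is a minimal sketch assuming the `python_project_env` virtual environment created earlier and the default CUDA device:

```
# activate the virtual environment created earlier
source ~/python_project/python_project_env/bin/activate

# multiply two random matrices on the GPU and print where the result was computed
python3 - <<'EOF'
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(2048, 2048, device=device)
y = torch.rand(2048, 2048, device=device)
z = x @ y
print("Result shape:", tuple(z.shape), "computed on:", z.device)
EOF
```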

@@ -1,33 +0,0 @@

<h1> Cloud Console </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Overview](#overview)
- [Connect to Cloud Console](#connect-to-cloud-console)

---

## Introduction

Cloud console is a tool to view machine logs and interact with the machine you have deployed. We show the basics of cloud-console and how to access it via a browser during deployment.

## Overview

Cloud console always runs on the gateway IP of the machine's private network, with a port number equal to `20000 + the last octet` of the machine's private IP. For example, if the machine IP is `10.20.2.2/24`, then `cloud-console` is running on `10.20.2.1:20002`.
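A quick way to work out the address is to take the last octet of the VM's private IP and add it to 20000. This is a small sketch assuming the usual `/24` private subnet layout shown above:

```
# compute the cloud-console URL from the VM's private IP
VM_IP=10.20.2.2
GATEWAY=${VM_IP%.*}.1       # e.g. 10.20.2.1
LAST_OCTET=${VM_IP##*.}     # e.g. 2
echo "http://${GATEWAY}:$((20000 + LAST_OCTET))"   # prints http://10.20.2.1:20002
```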

For cloud-console to run, the cloud-hypervisor must be started with the option `--serial pty` instead of `tty`. This allows another process, `cloud-console` in our case, to interact with the VM.

## Connect to Cloud Console

You can easily connect to cloud console on the TFGrid.

- Deploy a VM on the TFGrid with the WireGuard network
- Set the WireGuard configuration file
- Start the WireGuard connection:
```
wg-quick up wireguard.conf
```
- Go to your browser with the network router IP `10.20.2.1:20002` to access cloud console.

> Note: You might need to create a user/password in the VM first before connecting to cloud-console if the image used does not have a default user.

@@ -1,8 +0,0 @@

<h1>Ecommerce</h1>

You can easily deploy a free and open-source ecommerce solution on the TFGrid. We present here two of the most popular options.

<h2>Table of Contents</h2>

- [WooCommerce](./woocommerce.md)
- [nopCommerce](./nopcommerce.md)

@@ -1,269 +0,0 @@

<h1>nopCommerce on the TFGrid</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy a Full VM](#deploy-a-full-vm)
- [Create an SSH Tunnel](#create-an-ssh-tunnel)
- [Preparing the VM](#preparing-the-vm)
- [Set a Firewall](#set-a-firewall)
- [Download nopCommerce](#download-nopcommerce)
- [Access nopCommerce](#access-nopcommerce)
- [Install nopCommerce](#install-nopcommerce)
- [Access the Ecommerce from the Public Internet](#access-the-ecommerce-from-the-public-internet)
- [Set a DNS Record](#set-a-dns-record)
- [Access the Ecommerce](#access-the-ecommerce)
- [HTTPS with Caddy](#https-with-caddy)
- [Manage with Systemd](#manage-with-systemd)
- [Access Admin Panel](#access-admin-panel)
- [Manage nopCommerce with Systemd](#manage-nopcommerce-with-systemd)
- [References](#references)
- [Questions and Feedback](#questions-and-feedback)

---

## Introduction

We show how to deploy a free and open-source ecommerce solution on the ThreeFold Grid. We will be deploying on a full VM with an IPv4 address.

[nopCommerce](https://www.nopcommerce.com/en) is an open-source ecommerce platform based on Microsoft's ASP.NET Core framework, with MS SQL Server 2012 (or higher) or PostgreSQL as the backend database.

## Prerequisites

- [A TFChain account](../../../dashboard/wallet_connector.md)
- TFT in your TFChain account
- [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md)
- [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md)

## Deploy a Full VM

We start by deploying a full VM on the ThreeFold Dashboard.

* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/)
* Deploy a full VM (Ubuntu 22.04) with an IPv4 address and at least the minimum specs for a full VM
* IPv4 Address
* Minimum vcores: 1 vcore
* Minimum RAM: 512 MB
* Minimum storage: 15 GB
* After deployment, note the VM IPv4 address

## Create an SSH Tunnel

We create an SSH tunnel mapping local port 5432 to port 80 on the VM, since this is the combination we will set for nopCommerce in the docker-compose file.

- Open a terminal and create an SSH tunnel
```
ssh -4 -L 5432:127.0.0.1:80 root@VM_IPv4_address
```

Simply leave this window open and follow the next steps.

## Preparing the VM

We prepare the full VM to run nopCommerce.

* Connect to the VM via SSH
```
ssh root@VM_IPv4_address
```
* Update the VM
```
apt update
```
* [Install Docker](../../computer_it_basics/docker_basics.html#install-docker-desktop-and-docker-engine)
* Install docker-compose
```
apt install docker-compose -y
```
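
You can quickly confirm that both tools are available before moving on:

```
docker --version
docker-compose --version
```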

## Set a Firewall

You can set a firewall on your VM for further security. This is recommended in production.

* Add the permissions
* ```
ufw allow 80
ufw allow 443
```
* Enable the firewall
* ```
ufw enable
```
* Verify the firewall status
* ```
ufw status verbose
```

## Download nopCommerce

* Clone the repository
```
git clone https://github.com/nopSolutions/nopCommerce.git
cd nopCommerce
```
* Build the image
```
cd nopCommerce
docker-compose -f ./postgresql-docker-compose.yml build
```
* Run the image
```
docker-compose -f ./postgresql-docker-compose.yml up
```

## Access nopCommerce

You can access the nopCommerce interface in a browser on port 5432 via the SSH tunnel:

```
localhost:5432
```

For more information on how to use nopCommerce, refer to the [nopCommerce docs](https://docs.nopcommerce.com/en/index.html).

## Install nopCommerce

You will need to set your ecommerce store and database information.

- Enter an email for your website (e.g. `admin@example.com`)
- For the database, choose PostgreSQL and check both options `Create a database` and `Enter raw connection string`. Enter the following information (as per the docker-compose file):
```
Server=nopcommerce_database;Port=5432;Database=nop;User Id=postgres;Password=nopCommerce_db_password;
```
- Note: For production, you will need to set your own username and password.

## Access the Ecommerce from the Public Internet

### Set a DNS Record

* Go to your domain name registrar
* In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on:
* Type: A Record
* Host: @
* Value: <IPv4_Address>
* TTL: Automatic
* It might take up to 30 minutes for the DNS record to propagate.
* To check if the A record has been registered, you can use a common DNS checker (a command-line alternative is shown after this list):
* ```
https://dnschecker.org/#A/example.com
```
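
From a terminal, you can also query the record directly and compare the answer with your VM's IPv4 address:

```
# query the A record from public resolvers
dig +short A example.com @8.8.8.8
nslookup example.com 1.1.1.1
```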

### Access the Ecommerce

You can now go to a web browser and access your website via your domain, e.g. `example.com`.

### HTTPS with Caddy

We set HTTPS with Caddy.

- Install Caddy
```
apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' > /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install caddy
```
- Set a reverse proxy on port 80 with your own domain
```
caddy reverse-proxy -r --from example.com --to :80
```

You should see in the logs that it successfully obtains an SSL certificate. After that, you can navigate to your site's domain again to verify it's working. Using a private window or typing `https://` explicitly might be necessary until your browser drops its cache.

When you're satisfied that everything looks good, hit `ctrl-c` to exit Caddy and we'll proceed to making this persistent.

#### Manage with Systemd

We create a systemd service to always run the reverse proxy for port 80.

- Create a Caddy service
```bash
nano /etc/systemd/system/caddy.service
```
- Set the service with your own domain
```
[Unit]
Description=Caddy Service
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=5
ExecStart=caddy reverse-proxy -r --from example.com --to :80

[Install]
WantedBy=multi-user.target
```
- Enable the service
```
systemctl daemon-reload
systemctl enable caddy
systemctl start caddy
```
- Verify that the Caddy service is properly running
```
systemctl status caddy
```

Systemd will start up Caddy immediately, restart it if it ever crashes, and start it up automatically after any reboots.

## Access Admin Panel

You can access the admin panel by clicking on `Log in` and providing the admin username and password set during the nopCommerce installation.

In `Add your store info`, you can set the HTTPS address of your domain and enable SSL.

You will need to properly configure your ecommerce instance for your own needs and products. Read the nopCommerce docs for more information.

## Manage nopCommerce with Systemd

We create a systemd service to always run the nopCommerce docker-compose file.

- Create a nopcommerce service
```bash
nano /etc/systemd/system/nopcommerce.service
```
- Set the service and save the file
```
[Unit]
Description=nopCommerce Service
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=5
StandardOutput=append:/root/nopcommerce.log
StandardError=append:/root/nopcommerce.log
ExecStart=docker-compose -f /root/nopCommerce/postgresql-docker-compose.yml up

[Install]
WantedBy=multi-user.target
```
- Enable the service
```
systemctl daemon-reload
systemctl enable nopcommerce
systemctl start nopcommerce
```
- Verify that the nopCommerce service is properly running
```
systemctl status nopcommerce
```

Systemd will start up the nopCommerce docker-compose file, restart it if it ever crashes, and start it up automatically after any reboots.
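
Since the unit file appends both stdout and stderr to `/root/nopcommerce.log`, you can follow the application output there, or check the service entries recorded by systemd:

```
# follow the log file defined in the service unit
tail -f /root/nopcommerce.log

# check the service's journal entries
journalctl -u nopcommerce -f
```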

## References

For further information on how to set up nopCommerce, read the [nopCommerce documentation](https://docs.nopcommerce.com/en/index.html?showChildren=false).

## Questions and Feedback

If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.

@@ -1,157 +0,0 @@

<h1>WooCommerce on the TFGrid</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy Wordpress](#deploy-wordpress)
- [Set a DNS Record](#set-a-dns-record)
- [HTTPS with Caddy](#https-with-caddy)
- [Adjust the Firewall](#adjust-the-firewall)
- [Manage with zinit](#manage-with-zinit)
- [Access Admin Panel](#access-admin-panel)
- [Install WooCommerce](#install-woocommerce)
- [Troubleshooting](#troubleshooting)
- [References](#references)
- [Questions and Feedback](#questions-and-feedback)

---

## Introduction

We show how to deploy a free and open-source ecommerce solution on the ThreeFold Grid. We will be deploying on a micro VM with an IPv4 address.

[WooCommerce](https://woocommerce.com/) is the open-source ecommerce platform for [WordPress](https://wordpress.com/). The platform is free, flexible, and amplified by a global community. The freedom of open source means you retain full ownership of your store’s content and data forever.

## Prerequisites

- [A TFChain account](../../../dashboard/wallet_connector.md)
- TFT in your TFChain account
- [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md)
- [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md)

## Deploy Wordpress

We start by deploying Wordpress on the ThreeFold Dashboard.

* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [Wordpress deployment page](https://dashboard.test.grid.tf/#/deploy/applications/wordpress/)
* Deploy a Wordpress instance with an IPv4 address and sufficient resources to run Wordpress
* IPv4 Address
* Minimum vcores: 2 vcores
* Minimum RAM: 4 GB
* Minimum storage: 50 GB
* After deployment, note the VM IPv4 address

## Set a DNS Record

* Go to your domain name registrar
* In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on:
* Type: A Record
* Host: @
* Value: <IPv4_Address>
* TTL: Automatic
* It might take up to 30 minutes for the DNS record to propagate.
* To check if the A record has been registered, you can use a common DNS checker:
* ```
https://dnschecker.org/#A/example.com
```

## HTTPS with Caddy

We set HTTPS with Caddy.

- Install Caddy
```
apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' > /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install caddy
```
- Set a reverse proxy on port 80 with your own domain
```
caddy reverse-proxy -r --from example.com --to :80
```

You should see in the logs that it successfully obtains an SSL certificate. After that, you can navigate to your site's domain again to verify it's working. Using a private window or typing `https://` explicitly might be necessary until your browser drops its cache.

When you're satisfied that everything looks good, hit `ctrl-c` to exit Caddy and we'll proceed to making this persistent.

### Adjust the Firewall

By default, ufw is enabled on the Wordpress application deployed from the Dashboard. To use Caddy and serve HTTPS, we need to allow port 443.

* Add the permissions
* ```
ufw allow 443
```

### Manage with zinit

We manage Caddy with zinit.

- Open the file for editing
```bash
nano /etc/zinit/caddy.yaml
```
- Insert the following line with your own domain and save the file
```
exec: caddy reverse-proxy -r --from example.com --to :80
```
- Add the new Caddy file to zinit
```bash
zinit monitor caddy
```

Zinit will start up Caddy immediately, restart it if it ever crashes, and start it up automatically after any reboots. Assuming you tested the Caddy invocation above and used the same form here, that should be all there is to it.

Here are some other zinit commands that can be helpful to troubleshoot issues:

- See the status of all services (same as `zinit list`)
```
zinit
```
- Get the logs of a service
```
zinit log caddy
```
- Restart a service (to test configuration changes, for example)
```
zinit stop caddy
zinit start caddy
```

## Access Admin Panel

You can access the admin panel by clicking on `Admin panel` under `Actions` on the Dashboard. You can also use the following template in a browser with your own domain:

```
example.com/wp-admin
```

If you've forgotten your credentials, just open the Wordpress info window on the Dashboard.

## Install WooCommerce

On the Wordpress admin panel, go to `Plugins` and search for WooCommerce.

Once this is done, you can open WooCommerce in the left-side menu.

You can then set up your store and start your online business!

## Troubleshooting

You might need to deactivate some plugins that aren't compatible with WooCommerce, such as `MailPoet`.

## References

Make sure to read the [Wordpress and Woocommerce documentation](https://woocommerce.com/document/woocommerce-self-service-guide) to set up your ecommerce.

## Questions and Feedback

If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.

@@ -1,80 +0,0 @@

<h1> Hummingbot on a Full VM </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy a Full VM](#deploy-a-full-vm)
- [Preparing the VM](#preparing-the-vm)
- [Setting Hummingbot](#setting-hummingbot)
- [References](#references)

---

## Introduction

Hummingbot is an open-source platform that helps you design, backtest, and deploy fleets of automated crypto trading bots.

In this guide, we go through the basic steps to deploy a [Hummingbot](https://hummingbot.org/) instance on a full VM running on the TFGrid.

## Prerequisites

- [A TFChain account](../../../dashboard/wallet_connector.md)
- TFT in your TFChain account
- [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md)
- [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md)

## Deploy a Full VM

We start by deploying a full VM on the ThreeFold Dashboard.

* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/)
* Deploy a full VM (Ubuntu 22.04) with an IPv4 address and at least the minimum specs for Hummingbot
* IPv4 Address
* Minimum vcores: 1 vcore
* Minimum RAM: 4096 MB
* Minimum storage: 15 GB
* After deployment, note the VM IPv4 address
* Connect to the VM via SSH
* ```
ssh root@VM_IPv4_address
```

## Preparing the VM

We prepare the full VM to run Hummingbot.

* Update the VM
```
apt update
```
* [Install Docker](../computer_it_basics/docker_basics.html#install-docker-desktop-and-docker-engine)

## Setting Hummingbot

We clone the Hummingbot repo and start it via Docker.

* Clone the Hummingbot repository
```
git clone https://github.com/hummingbot/hummingbot.git
cd hummingbot
```
* Start Hummingbot
```
docker compose up -d
```
* Attach to the instance
```
docker attach hummingbot
```

You should now see the Hummingbot page.
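
To leave the Hummingbot screen without stopping the bot, use the standard Docker detach sequence (`Ctrl-p` then `Ctrl-q`). A quick way to check the container output afterwards, assuming you are still in the cloned `hummingbot` directory:

```
# follow the logs of the services defined in the compose file
docker compose logs -f
```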

## References

The information on how to install Hummingbot has been taken directly from the [Hummingbot documentation](https://hummingbot.org/installation/docker/).

For any advanced configuration, you may refer to the Hummingbot documentation.

@@ -1,112 +0,0 @@

<h1>MinIO Operator with Helm 3</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Create an SSH Tunnel](#create-an-ssh-tunnel)
- [Set the VM](#set-the-vm)
- [Set MinIO](#set-minio)
- [Access the MinIO Operator](#access-the-minio-operator)
- [Questions and Feedback](#questions-and-feedback)

***

## Introduction

We show how to deploy a Kubernetes cluster and set up a [MinIO](https://min.io/) Operator with [Helm 3](https://helm.sh/).

MinIO is a high-performance, S3-compatible object store built for large-scale AI/ML, data lake and database workloads. Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters.

## Prerequisites

- TFChain account with TFT
- [Deploy a Kubernetes cluster with one master and one worker (IPv4)](../../dashboard/solutions/k8s.md)
- [Make sure you can connect via SSH on the terminal](../../system_administrators/getstarted/ssh_guide/ssh_openssh.md)

## Create an SSH Tunnel

To access the MinIO Operator, we need to create an SSH tunnel on port 9090.

- Open a terminal and create an SSH tunnel
```
ssh -4 -L 9090:127.0.0.1:9090 root@<VM_IP>
```

Simply leave this window open and follow the next steps.

## Set the VM

We set up the master VM to access the MinIO Operator.

- Install the prerequisites:
```
apt update
apt install git -y
apt install wget
apt install jq -y
```
- Install Helm
```
wget https://get.helm.sh/helm-v3.14.3-linux-amd64.tar.gz
tar -xvf helm-v3.14.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
```
- Install yq
```
wget https://github.com/mikefarah/yq/releases/download/v4.43.1/yq_linux_amd64.tar.gz
tar -xvf yq_linux_amd64.tar.gz
mv yq_linux_amd64 /usr/bin/yq
```
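
You can quickly confirm the tools are installed and on the PATH:

```
helm version
yq --version
jq --version
```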

## Set MinIO

We can then set up the MinIO Operator. For this step, we mainly follow the MinIO documentation [here](https://min.io/docs/minio/kubernetes/upstream/operations/install-deploy-manage/deploy-operator-helm.html).

- Add the MinIO repo
```
helm repo add minio-operator https://operator.min.io
```
- Validate the MinIO repo content
```
helm search repo minio-operator
```
- Install the operator
```
helm install \
  --namespace minio-operator \
  --create-namespace \
  operator minio-operator/operator
```
- Verify the operator installation
```
kubectl get all -n minio-operator
```
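
Depending on the operator version, the console runs as a service inside the cluster rather than on the node itself. If nothing answers on port 9090 on the master node, you can forward the console service locally before using the SSH tunnel. This is a sketch assuming the service is named `console` in the `minio-operator` namespace (check the output of the previous command for the exact name):

```
# forward port 9090 on the master node to the operator console service
kubectl port-forward svc/console -n minio-operator 9090:9090
```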

## Access the MinIO Operator

You can then access the MinIO Operator in your local browser on port 9090:

```
localhost:9090
```

To log in to the MinIO Operator, you will need to enter the token. To see the token, run the following line:

```
kubectl get secret/console-sa-secret -n minio-operator -o json | jq -r ".data.token" | base64 -d
```

Enter the token on the login page. You then have access to the MinIO Operator.

## Questions and Feedback

If you have any questions, feel free to ask for help on the [ThreeFold Forum](https://forum.threefold.io/).

@@ -12,5 +12,4 @@ In this section, tailored specifically for system administrators, we'll delve in

- [Firewall Basics](./firewall_basics/firewall_basics.md)
- [UFW Basics](./firewall_basics/ufw_basics.md)
- [Firewalld Basics](./firewall_basics/firewalld_basics.md)
- [File Transfer](./file_transfer.md)
- [Screenshots](./screenshots.md)

@@ -1,75 +0,0 @@

<h1> Screenshots </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Linux](#linux)
- [MAC](#mac)
- [Windows](#windows)

***

## Introduction

In this section, we show how to easily take screenshots on Linux, MAC and Windows.

## Linux

- Copy to the clipboard a full screenshot
```
PrintScreen
```
- Copy to the clipboard a screenshot of an active window
```
Alt + PrintScreen
```
- Copy to the clipboard a screenshot of an active app
```
Control + Alt + PrintScreen
```
- Copy to the clipboard a screenshot of a selected area
```
Shift + PrintScreen
```

## MAC

- Save to the desktop a full screenshot
```
Shift + Command (⌘) + 3
```
- Save to the desktop a screenshot of an active window
```
Shift + Command (⌘) + 4 + Spacebar
```
- Copy to the clipboard a full screenshot
```
Shift + Control + Command (⌘) + 3
```
- Save to the desktop a screenshot of a selected area
```
Shift + Command (⌘) + 4
```
- Copy to the clipboard a screenshot of a selected area
```
Shift + Control + Command (⌘) + 4
```

## Windows

- Copy to the clipboard a full screenshot
```
PrintScreen
```
- Save to the Pictures directory a full screenshot
```
Windows key + PrintScreen
```
- Copy to the clipboard a screenshot of an active window
```
Alt + PrintScreen
```
- Copy to the clipboard a selected area of the screen
```
Windows key + Shift + S
```

@@ -14,6 +14,7 @@

- [MacOS](#macos-1)
- [Get Yggdrasil IP](#get-yggdrasil-ip)
- [Add Peers](#add-peers)
- [Clients](#clients)
- [Peers](#peers)
- [Central Europe](#central-europe)
- [Ghent](#ghent)

@@ -140,6 +141,10 @@ You'll need this address when registering your twin on TFChain later.

systemctl restart yggdrasil
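
After a restart, you can confirm the node's address again before registering your twin. This is a sketch assuming the `yggdrasilctl` admin tool was installed alongside Yggdrasil and the admin socket is enabled:

```
# show this node's Yggdrasil IPv6 address and subnet
yggdrasilctl getSelf
```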

## Clients

- [Planetary network connector](https://github.com/threefoldtech/planetary_network)

## Peers

### Central Europe

@@ -84,7 +84,13 @@ You now have an SSH connection on Linux with IPv4.

Here are the steps to SSH into a 3Node with the Planetary Network on Linux.

* Set a [Planetary Network connection](../planetarynetwork.md)
* To download and connect to the Threefold Planetary Network Connector
* Download the [.deb file](https://github.com/threefoldtech/planetary_network/releases/tag/v0.3-rc1-Linux)
* Right-click and select `Open with other application`
* Select `Software Install`
* Search the `Threefold Planetary Connector` and open it
* Disconnect your VPN if you have one
* In the connector, click `Connect`
* To create the SSH key pair, write in the terminal
* ```
ssh-keygen
```

@@ -157,7 +163,12 @@ You now have an SSH connection on MAC with IPv4.

Here are the steps to SSH into a 3Node with the Planetary Network on MAC.

* Set a [Planetary Network connection](../planetarynetwork.md)
* To download and connect to the Threefold Planetary Network Connector
* Download the [.dmg file](https://github.com/threefoldtech/planetary_network/releases/tag/v0.3-rc1-MacOS)
* Run the dmg installer
* Search the Threefold Planetary Connector in `Applications` and open it
* Disconnect your VPN if you have one
* In the connector, click `Connect`
* To create the SSH key pair, write in the terminal
* ```
ssh-keygen
```

@@ -235,7 +246,12 @@ You now have an SSH connection on Windows with IPv4.

### SSH into a 3Node with the Planetary Network on Windows

* Set a [Planetary Network connection](../planetarynetwork.md)
* To download and connect to the Threefold Planetary Network Connector
* Download the [.msi file](https://github.com/threefoldtech/planetary_network/releases/tag/v0.3-rc1-Windows10)
* Search the `Threefold Planetary Connector`
* Right-click and select `Install`
* Disconnect your VPN if you have one
* Open the TF connector and click `Connect`
* To download OpenSSH client and OpenSSH server
* Open the `Settings` and select `Apps`
* Click `Apps & Features`

@@ -29,7 +29,7 @@ The main steps for the whole process are the following:

* Deploy a 3Node
* Choose IPv4 or the Planetary Network
* SSH into the 3Node
* For the Planetary Network, set a [Planetary Network connection](../planetarynetwork.md)
* For the Planetary Network, download the Planetary Network Connector

@@ -25,7 +25,7 @@ To use a GPU on the TFGrid, users need to rent a dedicated node. Once they have

## Filter and Reserve a GPU Node

You can filter and reserve a GPU node using the [Dedicated Nodes section](../../dashboard/deploy/dedicated_machines.md) of the **ThreeFold Dashboard**.

### Filter Nodes

@@ -5,7 +5,7 @@

- [Introduction](#introduction)
- [Connect to Other Nodes](#connect-to-other-nodes)
- [Hosted Public Nodes](#hosted-public-nodes)
- [Possible Peers](#possible-peers)
- [Default Port](#default-port)
- [Check Network Information](#check-network-information)
- [Test the Network](#test-the-network)

@@ -14,11 +14,6 @@

- [API](#api)
- [Message System](#message-system)
- [Inspecting Node Keys](#inspecting-node-keys)
- [Troubleshooting](#troubleshooting)
- [Root Access](#root-access)
- [Enable IPv6 at the OS Level](#enable-ipv6-at-the-os-level)
- [VPN Can Block Mycelium](#vpn-can-block-mycelium)
- [Add Peers](#add-peers)

***

@@ -41,32 +36,18 @@ If you are using another tun interface, e.g. utun3 (default), you can set a different one:

```
mycelium --peers tcp://83.231.240.31:9651 quic://185.206.122.71:9651 --tun-name utun9
```

## Hosted Public Nodes

A couple of public nodes are provided, which can be freely connected to. This allows anyone to join the global network. These are hosted in 3 geographic regions, on both IPv4 and IPv6, and support both the TCP and QUIC protocols. The nodes are the following:

| Node ID | Region | IPv4 | IPv6 | Tcp port | Quic port |
| --- | --- | --- | --- | --- | --- |
| 01 | DE | 188.40.132.242 | 2a01:4f8:221:1e0b::2 | 9651 | 9651 |
| 02 | DE | 136.243.47.186 | 2a01:4f8:212:fa6::2 | 9651 | 9651 |
| 03 | BE | 185.69.166.7 | 2a02:1802:5e:0:8478:51ff:fee2:3331 | 9651 | 9651 |
| 04 | BE | 185.69.166.8 | 2a02:1802:5e:0:8c9e:7dff:fec9:f0d2 | 9651 | 9651 |
| 05 | FI | 65.21.231.58 | 2a01:4f9:6a:1dc5::2 | 9651 | 9651 |
| 06 | FI | 65.109.18.113 | 2a01:4f9:5a:1042::2 | 9651 | 9651 |

These nodes are all interconnected, so 2 peers who each connect to a different node (or set of disjoint nodes) will still be able to reach each other. For optimal performance, it is however recommended to connect to all of the above at once. An example connection string could be:

`--peers tcp://188.40.132.242:9651 "tcp://[2a01:4f8:212:fa6::2]:9651" quic://185.69.166.7:9651 "tcp://[2a02:1802:5e:0:8c9e:7dff:fec9:f0d2]:9651" tcp://65.21.231.58:9651 "quic://[2a01:4f9:5a:1042::2]:9651"`

It is up to the user to decide which peers to use, and over which protocol. Note that quotation marks may or may not be required, depending on which shell is being used.

## Possible Peers

Here are some possible peers.

```
tcp://146.185.93.83:9651
quic://83.231.240.31:9651
quic://185.206.122.71:9651
tcp://[2a04:f340:c0:71:28cc:b2ff:fe63:dd1c]:9651
tcp://[2001:728:1000:402:78d3:cdff:fe63:e07e]:9651
quic://[2a10:b600:1:0:ec4:7aff:fe30:8235]:9651
```

## Default Port

@@ -155,39 +136,4 @@ Where the output could be something like this:

```sh
Public key: a47c1d6f2a15b2c670d3a88fbe0aeb301ced12f7bcb4c8e3aa877b20f8559c02
Address: 27f:b2c5:a944:4dad:9cb1:da4:8bf7:7e65
```

## Troubleshooting

### Root Access

You might need to run Mycelium as root. A typical error message would be something like: `Error: NixError(EPERM)`.

### Enable IPv6 at the OS Level

You need to enable IPv6 at the OS level. A typical error message would be something like: `Permission denied (os error 13)`.

- Check if IPv6 is enabled
- If disabled, the output is 1; if enabled, the output is 0
```
sysctl net.ipv6.conf.all.disable_ipv6
```
- Enable IPv6
```
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
```

Here are some commands to troubleshoot IPv6:

```
sudo ip6tables -S INPUT
sudo ip6tables -S OUTPUT
```

### VPN Can Block Mycelium

You might need to disconnect your VPN when using Mycelium.

### Add Peers

It can help to connect to other peers. Check the Mycelium repository for [peers](https://github.com/threefoldtech/mycelium?tab=readme-ov-file#hosted-public-nodes).

@@ -4,28 +4,19 @@

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Considerations](#considerations)
- [Set Mycelium](#set-mycelium)
- [Start Mycelium](#start-mycelium)
- [Use Mycelium](#use-mycelium)
- [Mycelium Service (optional)](#mycelium-service-optional)

***

## Introduction

In this section, we cover how to install Mycelium. This guide can be done on a local machine and also on a full VM running on the TFGrid. Here, we show the steps on a full VM.

Currently, Linux, macOS and Windows are supported. On Windows, you must have `wintun.dll` in the same directory you are executing the binary from.

## Considerations

You might need to run Mycelium as root, enable IPv6 at the OS level and disconnect your VPN.

Read the [Troubleshooting](./information.md#troubleshooting) section for more information.

## Set Mycelium

- Deploy a Full VM with the Planetary network and SSH into the VM
- Update the system
```
apt update
```

@@ -42,75 +33,16 @@ Read the [Troubleshooting](./information.md#troubleshooting) section for more information.

```
mv mycelium /usr/local/bin
```

## Start Mycelium

You can now start Mycelium.

- Start Mycelium
```
mycelium --peers tcp://83.231.240.31:9651 quic://185.206.122.71:9651 --tun-name utun2
```
- Open another terminal
- Check the Mycelium connection information (address and public key)
```
mycelium inspect --json
```

## Use Mycelium

Once you've set up Mycelium, you can use it to ping other addresses and also to connect to VMs running on the TFGrid.

- Ping the VM from another machine with IPv6
```
ping6 mycelium_address
```
- SSH into a VM running on the TFGrid
```
ssh root@vm_mycelium_address
```

## Mycelium Service (optional)

You can create a systemd service to make sure Mycelium is always enabled and running.

- Create a Mycelium service
```bash
nano /etc/systemd/system/mycelium.service
```
- Set the service and save the file
```
[Unit]
Description=End-2-end encrypted IPv6 overlay network
Wants=network.target
After=network.target
Documentation=https://github.com/threefoldtech/mycelium

[Service]
ProtectHome=true
ProtectSystem=true
SyslogIdentifier=mycelium
CapabilityBoundingSet=CAP_NET_ADMIN
StateDirectory=mycelium
StateDirectoryMode=0700
ExecStartPre=+-/sbin/modprobe tun
ExecStart=/usr/local/bin/mycelium --tun-name mycelium -k %S/mycelium/key.bin --peers tcp://146.185.93.83:9651 quic://83.231.240.31:9651 quic://185.206.122.71:9651 tcp://[2a04:f340:c0:71:28cc:b2ff:fe63:dd1c]:9651 tcp://[2001:728:1000:402:78d3:cdff:fe63:e07e]:9651 quic://[2a10:b600:1:0:ec4:7aff:fe30:8235]:9651
Restart=always
RestartSec=5
TimeoutStopSec=5

[Install]
WantedBy=multi-user.target
```
- Enable the service
```
systemctl daemon-reload
systemctl enable mycelium
systemctl start mycelium
```
- Verify that the Mycelium service is properly running
```
systemctl status mycelium
```

Systemd will start up Mycelium, restart it if it ever crashes, and start it up automatically after any reboots.

@@ -32,7 +32,6 @@ For complementary information on ThreeFold grid and its cloud component, refer t

- [Web Gateway](./terraform/resources/terraform_vm_gateway.md)
- [Kubernetes Cluster](./terraform/resources/terraform_k8s.md)
- [ZDB](./terraform/resources/terraform_zdb.md)
- [Zlogs](./terraform/resources/terraform_zlogs.md)
- [Quantum Safe Filesystem](./terraform/resources/terraform_qsfs.md)
- [QSFS on Micro VM](./terraform/resources/terraform_qsfs_on_microvm.md)
- [QSFS on Full VM](./terraform/resources/terraform_qsfs_on_full_vm.md)

@@ -73,23 +72,12 @@ For complementary information on ThreeFold grid and its cloud component, refer t

- [UFW Basics](./computer_it_basics/firewall_basics/ufw_basics.md)
- [Firewalld Basics](./computer_it_basics/firewall_basics/firewalld_basics.md)
- [File Transfer](./computer_it_basics/file_transfer.md)
- [Screenshots](./computer_it_basics/screenshots.md)
- [Advanced](./advanced/advanced.md)
- [Token Transfer Keygenerator](./advanced/token_transfer_keygenerator.md)
- [Cancel Contracts](./advanced/cancel_contracts.md)
- [Contract Bills Reports](./advanced/contract_bill_report.md)
- [Listing Free Public IPs](./advanced/list_public_ips.md)
- [Cloud Console](./advanced/cloud_console.md)
- [Redis](./advanced/grid3_redis.md)
- [IPFS](./advanced/ipfs/ipfs_toc.md)
- [IPFS on a Full VM](./advanced/ipfs/ipfs_fullvm.md)
- [IPFS on a Micro VM](./advanced/ipfs/ipfs_microvm.md)
- [Hummingbot](./advanced/hummingbot.md)
- [AI & ML Workloads](./advanced/ai_ml_workloads.md)
- [Ecommerce](./advanced/ecommerce/ecommerce.md)
- [WooCommerce](./advanced/ecommerce/woocommerce.md)
- [nopCommerce](./advanced/ecommerce/nopcommerce.md)

@@ -3,12 +3,11 @@

<h2> Table of Contents </h2>

- [Using Scheduler](./terraform_scheduler.md)
- [Virtual Machine](./terraform_vm.md)
- [Web Gateway](./terraform_vm_gateway.md)
- [Kubernetes Cluster](./terraform_k8s.md)
- [ZDB](./terraform_zdb.md)
- [Zlogs](./terraform_zlogs.md)
- [Quantum Safe Filesystem](./terraform_qsfs.md)
- [QSFS on Micro VM](./terraform_qsfs_on_microvm.md)
- [QSFS on Full VM](./terraform_qsfs_on_full_vm.md)
- [CapRover](./terraform_caprover.md)

@@ -1,21 +1,10 @@

<h1> Zlogs </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Using Zlogs](#using-zlogs)
- [Creating a server](#creating-a-server)
- [Streaming logs](#streaming-logs)

---

## Introduction

Zlogs is a utility that allows you to stream VM logs to a remote location. You can find the full description [here](https://github.com/threefoldtech/zos/tree/main/docs/manual/zlogs).

## Using Zlogs

In Terraform, a VM has a zlogs field; this field should contain a list of target URLs to stream logs to.

Valid protocols are: `ws`, `wss`, and `redis`.

@@ -13,11 +13,7 @@

- [Web Gateway](./resources/terraform_vm_gateway.md)
- [Kubernetes Cluster](./resources/terraform_k8s.md)
- [ZDB](./resources/terraform_zdb.md)
- [Zlogs](./resources/terraform_zlogs.md)
- [Quantum Safe Filesystem](./resources/terraform_qsfs.md)
- [QSFS on Micro VM](./resources/terraform_qsfs_on_microvm.md)
- [QSFS on Full VM](./resources/terraform_qsfs_on_full_vm.md)
- [CapRover](./resources/terraform_caprover.md)