updated smaller collections for manual
20
collections/system_administrators/advanced/advanced.md
Normal file
@@ -0,0 +1,20 @@
<h1> TFGrid Advanced </h1>

In this section, we delve into sophisticated topics and powerful functionalities that empower you to harness the full potential of TFGrid 3.0. Whether you're an experienced user seeking to deepen your understanding or a trailblazer venturing into uncharted territories, this manual is your gateway to mastering advanced concepts on the ThreeFold Grid.

<h2>Table of Contents</h2>

- [Token Transfer Keygenerator](./token_transfer_keygenerator.md)
- [Cancel Contracts](./cancel_contracts.md)
- [Contract Bills Reports](./contract_bill_report.md)
- [Listing Free Public IPs](./list_public_ips.md)
- [Cloud Console](./cloud_console.md)
- [Redis](./grid3_redis.md)
- [IPFS](./ipfs/ipfs_toc.md)
  - [IPFS on a Full VM](./ipfs/ipfs_fullvm.md)
  - [IPFS on a Micro VM](./ipfs/ipfs_microvm.md)
- [Hummingbot](./hummingbot.md)
- [AI & ML Workloads](./ai_ml_workloads.md)
- [Ecommerce](./ecommerce/ecommerce.md)
  - [WooCommerce](./ecommerce/woocommerce.md)
  - [nopCommerce](./ecommerce/nopcommerce.md)
125
collections/system_administrators/advanced/ai_ml_workloads.md
Normal file
@@ -0,0 +1,125 @@
<h1> AI & ML Workloads </h1>

<h2> Table of Contents </h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Prepare the System](#prepare-the-system)
- [Install the GPU Driver](#install-the-gpu-driver)
- [Set a Python Virtual Environment](#set-a-python-virtual-environment)
- [Install PyTorch and Test CUDA](#install-pytorch-and-test-cuda)
- [Set and Access Jupyter Notebook](#set-and-access-jupyter-notebook)
- [Run AI/ML Workloads](#run-aiml-workloads)

***

## Introduction

We present a basic method to deploy artificial intelligence (AI) and machine learning (ML) workloads on the TFGrid. For this, we make use of dedicated nodes and GPU support.

In the first part, we show the steps to install the Nvidia driver of a GPU card on a full VM running Ubuntu 22.04 on the TFGrid.

In the second part, we show how to use PyTorch to run AI/ML tasks.

## Prerequisites

You need to reserve a [dedicated GPU node](../../dashboard/deploy/node_finder.md#dedicated-nodes) on the ThreeFold Grid.

## Prepare the System

- Update the system
  ```
  dpkg --add-architecture i386
  apt-get update
  apt-get dist-upgrade
  reboot
  ```
- Check the GPU info
  ```
  lspci | grep VGA
  lshw -c video
  ```

## Install the GPU Driver

- Download the latest Nvidia driver
  - Check which driver is recommended
    ```
    apt install ubuntu-drivers-common
    ubuntu-drivers devices
    ```
  - Install the recommended driver (e.g. version 535)
    ```
    apt install nvidia-driver-535
    ```
- Reboot and reconnect to the VM
- Check the GPU status
  ```
  nvidia-smi
  ```

Now that the GPU node is set, let's work on setting up PyTorch to run AI/ML workloads.

## Set a Python Virtual Environment

Before installing Python packages with pip, you should create a virtual environment.

- Install the prerequisites
  ```
  apt update
  apt install python3-pip python3-dev
  pip3 install --upgrade pip
  pip3 install virtualenv
  ```
- Create a virtual environment
  ```
  mkdir ~/python_project
  cd ~/python_project
  virtualenv python_project_env
  source python_project_env/bin/activate
  ```

## Install PyTorch and Test CUDA

Once you've created and activated a virtual environment for Python, you can install different Python packages.

- Install PyTorch and upgrade NumPy
  ```
  pip3 install torch
  pip3 install numpy --upgrade
  ```

Before going further, you can check if CUDA is properly installed on your machine.

- Check that CUDA is available in Python with PyTorch by using the following lines:
  ```python
  import torch
  torch.cuda.is_available()
  torch.cuda.device_count() # the output should be 1
  torch.cuda.current_device() # the output should be 0
  torch.cuda.device(0)
  torch.cuda.get_device_name(0)
  ```
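In training scripts, a common pattern is to select the GPU when it is available and fall back to the CPU otherwise. The helper below is a minimal sketch of that idiom; it degrades gracefully when PyTorch or CUDA is absent:

```python
def pick_device() -> str:
    """Return "cuda:0" when PyTorch sees a GPU, otherwise "cpu"."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda:0"
    except ImportError:
        # PyTorch not installed: fall back to CPU
        pass
    return "cpu"

# Typical usage in a training script:
# device = pick_device()
# model = model.to(device)
```

With the GPU node set up as above, `pick_device()` should return `cuda:0`.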

## Set and Access Jupyter Notebook

You can run Jupyter Notebook on the remote VM and access it in your local browser.

- Install Jupyter Notebook
  ```
  pip3 install notebook
  ```
- Run Jupyter Notebook in no-browser mode and take note of the URL and the token
  ```
  jupyter notebook --no-browser --port=8080 --ip=0.0.0.0
  ```
- On your local machine, paste the given URL in a browser, but make sure to replace `127.0.0.1` with the WireGuard IP (here it is `10.20.4.2`) and to set the correct token.
  ```
  http://10.20.4.2:8080/tree?token=<insert_token>
  ```
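The URL substitution above can also be scripted. Here is a small sketch using only the Python standard library; the WireGuard IP `10.20.4.2` is the example value from this guide:

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_host(url: str, new_host: str) -> str:
    """Swap the host of a URL, keeping the port, path, and query (token) intact."""
    parts = urlsplit(url)
    netloc = f"{new_host}:{parts.port}" if parts.port else new_host
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

print(rewrite_host("http://127.0.0.1:8080/tree?token=abc123", "10.20.4.2"))
# http://10.20.4.2:8080/tree?token=abc123
```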

## Run AI/ML Workloads

After following the steps above, you should now be able to run Python code that uses your GPU node to compute AI and ML workloads.

Feel free to explore different ways to use this feature. For example, the [HuggingFace course](https://huggingface.co/learn/nlp-course/chapter1/1) on natural language processing is a good introduction to machine learning.
@@ -0,0 +1,48 @@
<h1> Cancel Contracts </h1>

<h2>Table of Contents </h2>

- [Introduction](#introduction)
- [Using the Dashboard](#using-the-dashboard)
- [Using GraphQL and Polkadot UI](#using-graphql-and-polkadot-ui)
- [Using grid3\_client\_ts](#using-grid3_client_ts)

***

## Introduction

We present different methods to delete contracts on the TFGrid.

## Using the Dashboard

To cancel contracts with the Dashboard, consult the [Contracts List](../../dashboard/deploy/your_contracts.md) documentation.

## Using GraphQL and Polkadot UI

From the GraphQL service, execute the following query.

```
query MyQuery {
  nodeContracts(where: {twinId_eq: TWIN_ID, state_eq: Created}) {
    contractId
  }
}
```

Replace `TWIN_ID` with your twin ID. This information is available on the [Dashboard](../../dashboard/dashboard.md).
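The query can also be sent programmatically. Below is a minimal sketch using only the Python standard library; the endpoint URL `https://graphql.grid.tf/graphql` is an assumption (use the GraphQL endpoint of the network you deployed on):

```python
import json
import urllib.request

# Assumption: public mainnet GraphQL endpoint; adjust for test/dev networks.
GRAPHQL_URL = "https://graphql.grid.tf/graphql"

def build_query(twin_id: int) -> str:
    # The same query as above, with the twin ID substituted in.
    return (
        "query { nodeContracts(where: {twinId_eq: %d, state_eq: Created})"
        " { contractId } }" % twin_id
    )

def fetch_contract_ids(twin_id: int) -> list:
    """POST the query and return the list of active contract IDs."""
    payload = json.dumps({"query": build_query(twin_id)}).encode()
    req = urllib.request.Request(
        GRAPHQL_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [c["contractId"] for c in data["data"]["nodeContracts"]]
```

The returned IDs are the ones you would then pass to the `cancelContract` extrinsic described below.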

Then, from the [Polkadot UI](https://polkadot.js.org/apps/), add the TFChain endpoint to development.



Go to `Extrinsics`, choose the `smartContract` module and the `cancelContract` extrinsic, and use the IDs from GraphQL to execute the cancellation.



## Using grid3_client_ts

In order to use the `grid3_client_ts` module, it is essential to first clone our official mono-repo containing the module and then navigate to it. If you are looking for a quick and efficient way to cancel contracts, we offer a code-based solution that can be found [here](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/scripts/delete_all_contracts.ts).

To make the most of `grid_client`, we highly recommend following our [Grid-Client guide](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/README.md) for a comprehensive overview of the many advanced capabilities offered by this powerful tool. With features like contract creation, modification, and retrieval, `grid_client` provides an intuitive and easy-to-use solution for managing your contracts effectively.
33
collections/system_administrators/advanced/cloud_console.md
Normal file
@@ -0,0 +1,33 @@
<h1> Cloud Console </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Overview](#overview)
- [Connect to Cloud Console](#connect-to-cloud-console)

---

## Introduction

Cloud console is a tool to view machine logs and interact with the machine you have deployed. We show the basics of cloud console and how to access it via a browser during deployment.

## Overview

Cloud console always runs on the gateway address of the machine's private network, on a port equal to `20000 + the last octet` of the machine's private IP. For example, if the machine IP is `10.20.2.2/24`, then `cloud-console` is running on `10.20.2.1:20002`.

For cloud-console to run, we need to start the cloud-hypervisor with the option `--serial pty` instead of `tty`. This allows us to interact with the VM from another process, `cloud-console` in our case.

## Connect to Cloud Console

You can easily connect to cloud console on the TFGrid.

- Deploy a VM on the TFGrid with the WireGuard network
- Set the WireGuard configuration file
- Start the WireGuard connection:
  ```
  wg-quick up wireguard.conf
  ```
- Go to your browser and enter the cloud console address (in our example, `10.20.2.1:20002`) to access cloud console.

> Note: You might need to create a user/password in the VM before connecting to cloud-console if the image used does not have a default user.
@@ -0,0 +1,63 @@
<h1> Contract Bills Reports </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Contract Billing Report (GraphQL)](#contract-billing-report-graphql)
- [Consumption](#consumption)

***

## Introduction

You can check the billing rate of your contracts directly from the `Contracts` tab in the Dashboard.

> It takes an hour before a contract displays its billing rate (i.e. until it reaches the first billing cycle).

The `Billing Rate` is displayed in `TFT/Hour`.



## Contract Billing Report (GraphQL)

- Find the contract ID
- Query GraphQL for the consumption

> Example query for all contracts:

```graphql
query MyQuery {
  contractBillReports {
    contractId
    amountBilled
    discountReceived
  }
}
```

And for a specific contract:

```graphql
query MyQuery {
  contractBillReports(where: { contractId_eq: 10 }) {
    amountBilled
    discountReceived
    contractId
  }
}
```

## Consumption

```graphql
query MyQuery {
  consumptions(where: { contractId_eq: 10 }) {
    contractId
    cru
    sru
    mru
    hru
    nru
  }
}
```

In the output, `cru`, `sru`, `mru`, `hru` and `nru` stand for the compute, SSD storage, memory, HDD storage and network resource units consumed by the contract.
@@ -0,0 +1,8 @@
<h1>Ecommerce</h1>

You can easily deploy a free and open-source ecommerce platform on the TFGrid. We present here two of the most popular options.

<h2>Table of Contents</h2>

- [WooCommerce](./woocommerce.md)
- [nopCommerce](./nopcommerce.md)
@@ -0,0 +1,269 @@
<h1>Ecommerce on the TFGrid</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy a Full VM](#deploy-a-full-vm)
- [Create an SSH Tunnel](#create-an-ssh-tunnel)
- [Preparing the VM](#preparing-the-vm)
- [Set a Firewall](#set-a-firewall)
- [Download nopCommerce](#download-nopcommerce)
- [Access nopCommerce](#access-nopcommerce)
- [Install nopCommerce](#install-nopcommerce)
- [Access the Ecommerce from the Public Internet](#access-the-ecommerce-from-the-public-internet)
  - [Set a DNS Record](#set-a-dns-record)
  - [Access the Ecommerce](#access-the-ecommerce)
  - [HTTPS with Caddy](#https-with-caddy)
    - [Manage with Systemd](#manage-with-systemd)
- [Access Admin Panel](#access-admin-panel)
- [Manage nopCommerce with Systemd](#manage-nopcommerce-with-systemd)
- [References](#references)
- [Questions and Feedback](#questions-and-feedback)

---

## Introduction

We show how to deploy a free and open-source ecommerce platform on the ThreeFold Grid. We will be deploying on a full VM with an IPv4 address.

[nopCommerce](https://www.nopcommerce.com/en) is an open-source ecommerce platform based on Microsoft's ASP.NET Core framework and an MS SQL Server 2012 (or higher) backend database.

## Prerequisites

- [A TFChain account](../../../dashboard/wallet_connector.md)
- TFT in your TFChain account
  - [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md)
  - [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md)

## Deploy a Full VM

We start by deploying a full VM on the ThreeFold Dashboard.

* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/)
* Deploy a full VM (Ubuntu 22.04) with an IPv4 address and at least the minimum specs for a full VM
  * IPv4 Address
  * Minimum vcores: 1 vcore
  * Minimum RAM: 512 MB
  * Minimum storage: 15 GB
* After deployment, note the VM IPv4 address

## Create an SSH Tunnel

We create an SSH tunnel mapping local port 5432 to remote port 80, as this is the combination that we will set for nopCommerce in the docker-compose file.

- Open a terminal and create an SSH tunnel
  ```
  ssh -4 -L 5432:127.0.0.1:80 root@VM_IPv4_address
  ```

Simply leave this window open and follow the next steps.

## Preparing the VM

We prepare the full VM to run nopCommerce.

* Connect to the VM via SSH
  ```
  ssh root@VM_IPv4_address
  ```
* Update the VM
  ```
  apt update
  ```
* [Install Docker](../../computer_it_basics/docker_basics.html#install-docker-desktop-and-docker-engine)
* Install docker-compose
  ```
  apt install docker-compose -y
  ```

## Set a Firewall

You can set a firewall on your VM for further security. This should be used in production mode.

* Add the permissions
  ```
  ufw allow 80
  ufw allow 443
  ```
* Enable the firewall
  ```
  ufw enable
  ```
* Verify the firewall status
  ```
  ufw status verbose
  ```

## Download nopCommerce

* Clone the repository and enter the directory
  ```
  git clone https://github.com/nopSolutions/nopCommerce.git
  cd nopCommerce
  ```
* Build the image
  ```
  docker-compose -f ./postgresql-docker-compose.yml build
  ```
* Run the image
  ```
  docker-compose -f ./postgresql-docker-compose.yml up
  ```

## Access nopCommerce

You can access the nopCommerce interface in a browser on port 5432 via the SSH tunnel:

```
localhost:5432
```



For more information on how to use nopCommerce, refer to the [nopCommerce docs](https://docs.nopcommerce.com/en/index.html).

## Install nopCommerce

You will need to set your ecommerce store and database information.

- Enter an email for your website (e.g. `admin@example.com`)
- For the database, choose PostgreSQL and check both options `Create a database` and `Enter raw connection`. Enter the following information (as per the docker-compose file)
  ```
  Server=nopcommerce_database;Port=5432;Database=nop;User Id=postgres;Password=nopCommerce_db_password;
  ```
- Note: For production, you will need to set your own username and password.
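The raw connection string is a set of `Key=Value` pairs separated by semicolons, and the values must match the services defined in `postgresql-docker-compose.yml`. As an illustration (the helper name and values are only those from the example string above), it can be split into a dictionary like this:

```python
def parse_conn_string(cs: str) -> dict:
    """Split an ADO.NET-style "Key=Value;..." connection string into a dict."""
    return dict(
        pair.split("=", 1)
        for pair in cs.strip().strip(";").split(";")
        if pair
    )

settings = parse_conn_string(
    "Server=nopcommerce_database;Port=5432;Database=nop;"
    "User Id=postgres;Password=nopCommerce_db_password;"
)
print(settings["Server"], settings["Port"])
# nopcommerce_database 5432
```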

## Access the Ecommerce from the Public Internet

### Set a DNS Record

* Go to your domain name registrar
* In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on:
  * Type: A Record
  * Host: @
  * Value: <IPv4_Address>
  * TTL: Automatic
* It might take up to 30 minutes for the DNS record to propagate.
* To check if the A record has been registered, you can use a common DNS checker:
  ```
  https://dnschecker.org/#A/example.com
  ```

### Access the Ecommerce

You can now go to a web browser and access your website via your domain, e.g. `example.com`.



### HTTPS with Caddy

We set HTTPS with Caddy.

- Install Caddy
  ```
  apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
  curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
  curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' > /etc/apt/sources.list.d/caddy-stable.list
  apt update
  apt install caddy
  ```
- Set a reverse proxy on port 80 with your own domain
  ```
  caddy reverse-proxy -r --from example.com --to :80
  ```

You should see in the logs that it successfully obtains an SSL certificate; after that, you can try navigating to your site's domain again to verify it's working. Using a private window or adding `https://` explicitly might be necessary until your browser drops its cache.



When you're satisfied that everything looks good, hit `ctrl-c` to exit Caddy and we'll proceed to making this persistent.

#### Manage with Systemd

We create a systemd service to always run the reverse proxy for port 80.

- Create a caddy service
  ```bash
  nano /etc/systemd/system/caddy.service
  ```
- Set the service with your own domain
  ```
  [Unit]
  Description=Caddy Service
  StartLimitIntervalSec=0

  [Service]
  Restart=always
  RestartSec=5
  ExecStart=caddy reverse-proxy -r --from example.com --to :80

  [Install]
  WantedBy=multi-user.target
  ```
- Enable the service
  ```
  systemctl daemon-reload
  systemctl enable caddy
  systemctl start caddy
  ```
- Verify that the Caddy service is properly running
  ```
  systemctl status caddy
  ```

Systemd will start up Caddy immediately, restart it if it ever crashes, and start it up automatically after any reboots.

## Access Admin Panel

You can access the admin panel by clicking on `Log in` and providing the admin username and password set during the nopCommerce installation.



In `Add your store info`, you can set the HTTPS address of your domain and enable SSL.

You will need to properly configure your ecommerce instance for your own needs and products. Read the nopCommerce docs for more information.

## Manage nopCommerce with Systemd

We create a systemd service to always run the nopCommerce docker-compose file.

- Create a nopcommerce service
  ```bash
  nano /etc/systemd/system/nopcommerce.service
  ```
- Set the service
  ```
  [Unit]
  Description=nopCommerce Service
  StartLimitIntervalSec=0

  [Service]
  Restart=always
  RestartSec=5
  StandardOutput=append:/root/nopcommerce.log
  StandardError=append:/root/nopcommerce.log
  ExecStart=docker-compose -f /root/nopCommerce/postgresql-docker-compose.yml up

  [Install]
  WantedBy=multi-user.target
  ```
- Enable the service
  ```
  systemctl daemon-reload
  systemctl enable nopcommerce
  systemctl start nopcommerce
  ```
- Verify that the nopCommerce service is properly running
  ```
  systemctl status nopcommerce
  ```

Systemd will start up the nopCommerce docker-compose file, restart it if it ever crashes, and start it up automatically after any reboots.

## References

For further information on how to set up nopCommerce, read the [nopCommerce documentation](https://docs.nopcommerce.com/en/index.html?showChildren=false).

## Questions and Feedback

If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.
@@ -0,0 +1,157 @@
<h1>WooCommerce on the TFGrid</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy WordPress](#deploy-wordpress)
- [Set a DNS Record](#set-a-dns-record)
- [HTTPS with Caddy](#https-with-caddy)
  - [Adjust the Firewall](#adjust-the-firewall)
  - [Manage with zinit](#manage-with-zinit)
- [Access Admin Panel](#access-admin-panel)
- [Install WooCommerce](#install-woocommerce)
- [Troubleshooting](#troubleshooting)
- [References](#references)
- [Questions and Feedback](#questions-and-feedback)

---

## Introduction

We show how to deploy a free and open-source ecommerce platform on the ThreeFold Grid. We will be deploying on a micro VM with an IPv4 address.

[WooCommerce](https://woocommerce.com/) is the open-source ecommerce platform for [WordPress](https://wordpress.com/). The platform is free, flexible, and amplified by a global community. The freedom of open-source means you retain full ownership of your store’s content and data forever.

## Prerequisites

- [A TFChain account](../../../dashboard/wallet_connector.md)
- TFT in your TFChain account
  - [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md)
  - [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md)

## Deploy WordPress

We start by deploying WordPress on the ThreeFold Dashboard.

* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [WordPress deployment page](https://dashboard.grid.tf/#/deploy/applications/wordpress/)
* Deploy a WordPress instance with an IPv4 address and sufficient resources to run WordPress
  * IPv4 Address
  * Minimum vcores: 2 vcores
  * Minimum RAM: 4 GB
  * Minimum storage: 50 GB
* After deployment, note the VM IPv4 address

## Set a DNS Record

* Go to your domain name registrar
* In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on:
  * Type: A Record
  * Host: @
  * Value: <IPv4_Address>
  * TTL: Automatic
* It might take up to 30 minutes for the DNS record to propagate.
* To check if the A record has been registered, you can use a common DNS checker:
  ```
  https://dnschecker.org/#A/example.com
  ```

## HTTPS with Caddy

We set HTTPS with Caddy.

- Install Caddy
  ```
  apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
  curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
  curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' > /etc/apt/sources.list.d/caddy-stable.list
  apt update
  apt install caddy
  ```
- Set a reverse proxy on port 80 with your own domain
  ```
  caddy reverse-proxy -r --from example.com --to :80
  ```

You should see in the logs that it successfully obtains an SSL certificate; after that, you can try navigating to your site's domain again to verify it's working. Using a private window or adding `https://` explicitly might be necessary until your browser drops its cache.

When you're satisfied that everything looks good, hit `ctrl-c` to exit Caddy and we'll proceed to making this persistent.

### Adjust the Firewall

By default, ufw is set on the WordPress application from the Dashboard. To use Caddy and set HTTPS, we want to allow port 443.

* Add the permissions
  ```
  ufw allow 443
  ```

### Manage with zinit

We manage Caddy with zinit.

- Open the file for editing
  ```bash
  nano /etc/zinit/caddy.yaml
  ```
- Insert the following line with your own domain and save the file
  ```
  exec: caddy reverse-proxy -r --from example.com --to :80
  ```
- Add the new Caddy file to zinit
  ```bash
  zinit monitor caddy
  ```

Zinit will start up Caddy immediately, restart it if it ever crashes, and start it up automatically after any reboots. Assuming you tested the Caddy invocation above and used the same form here, that should be all there is to it.

Here are some other zinit commands that could be helpful to troubleshoot issues:

- See the status of all services (same as `zinit list`)
  ```
  zinit
  ```
- Get the logs for a service
  ```
  zinit log caddy
  ```
- Restart a service (to test configuration changes, for example)
  ```
  zinit stop caddy
  zinit start caddy
  ```

## Access Admin Panel

You can access the admin panel by clicking on `Admin panel` under `Actions` on the Dashboard. You can also use the following template in a browser with your own domain:

```
example.com/wp-admin
```

If you've forgotten your credentials, just open the WordPress info window on the Dashboard.

## Install WooCommerce

On the WordPress admin panel, go to `Plugins` and search for WooCommerce.



Once this is done, you can open WooCommerce on the left-side menu.



You can then set up your store and start your online business!



## Troubleshooting

You might need to deactivate some plugins that aren't compatible with WooCommerce, such as `MailPoet`.

## References

Make sure to read the [WordPress and WooCommerce documentation](https://woocommerce.com/document/woocommerce-self-service-guide) to set up your ecommerce.

## Questions and Feedback

If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.
46
collections/system_administrators/advanced/grid3_redis.md
Normal file
@@ -0,0 +1,46 @@
<h1> Redis </h1>

<h2> Table of Contents </h2>

- [Introduction](#introduction)
- [Install Redis](#install-redis)
  - [Linux](#linux)
  - [MacOS](#macos)
- [Run Redis](#run-redis)

***

## Introduction

Redis is an open-source, in-memory data structure store that is widely used as a caching layer, message broker, and database. It is known for its speed, versatility, and support for a wide range of data structures. Redis is designed to deliver high-performance data access by storing data in memory, which allows for fast read and write operations. It supports various data types, including strings, lists, sets, hashes, and more, and provides a rich set of commands for manipulating and querying the data.

Redis is widely used in various use cases, including caching, session management, real-time analytics, leaderboards, task queues, and more. Its simplicity, speed, and flexibility make it a popular choice for developers who need a fast and reliable data store for their applications. In ThreeFold's ecosystem, Redis can be used as a backend mechanism to communicate with the nodes on the ThreeFold Grid using the Reliable Message Bus.

## Install Redis

### Linux

If you don't find Redis in your Linux distro's package manager, check the [Redis downloads](https://redis.io/download) page for the source code and installation instructions.

### MacOS

On MacOS, [Homebrew](https://brew.sh/) can be used to install Redis. The steps are as follows:

```
brew update
brew install redis
```

Alternatively, it can be built from source, using the same [download page](https://redis.io/download/) as shown above.

## Run Redis

You can launch the Redis server with the following command:

```
redis-server
```
@@ -0,0 +1,39 @@
<h1> Transferring TFT Between Stellar and TFChain</h1>

<h2>Table of Contents</h2>

- [Usage](#usage)
- [Prerequisites](#prerequisites)
- [Stellar to TFChain](#stellar-to-tfchain)
- [TFChain to Stellar](#tfchain-to-stellar)

***

## Usage

This document explains how you can transfer TFT from TFChain to Stellar and back.

For more information on TFT bridges, read [this documentation](../threefold_token/tft_bridges/tft_bridges.md).

## Prerequisites

- [Stellar wallet](../threefold_token/storing_tft/storing_tft.md)
- [Account on TFChain (use the TF Dashboard to create one)](../dashboard/wallet_connector.md)



## Stellar to TFChain

You can deposit to TFChain using the bridge page on the TF Dashboard; click `Deposit`:



## TFChain to Stellar

You can bridge back to Stellar using the bridge page on the Dashboard; click `Withdraw`:



A withdrawal fee of 1 TFT will be charged, so make sure you send more than 1 TFT.
The amount withdrawn from TFChain, minus the fee, will be sent to your Stellar wallet.
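The fee arithmetic above can be sketched as follows; this is a simple illustration of the 1 TFT withdrawal fee, not bridge code:

```python
WITHDRAW_FEE_TFT = 1.0  # flat fee taken by the bridge on withdrawal

def net_withdrawal(amount_tft: float) -> float:
    """Return the TFT that reaches the Stellar wallet, or raise if the amount is too small."""
    if amount_tft <= WITHDRAW_FEE_TFT:
        raise ValueError("send more than 1 TFT to cover the withdrawal fee")
    return amount_tft - WITHDRAW_FEE_TFT

print(net_withdrawal(10.0))
# 9.0
```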
80
collections/system_administrators/advanced/hummingbot.md
Normal file
@@ -0,0 +1,80 @@
<h1> Hummingbot on a Full VM </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy a Full VM](#deploy-a-full-vm)
- [Preparing the VM](#preparing-the-vm)
- [Setting Hummingbot](#setting-hummingbot)
- [References](#references)

---

## Introduction

Hummingbot is an open-source platform that helps you design, backtest, and deploy fleets of automated crypto trading bots.

In this guide, we go through the basic steps to deploy a [Hummingbot](https://hummingbot.org/) instance on a full VM running on the TFGrid.

## Prerequisites

- [A TFChain account](../../../dashboard/wallet_connector.md)
- TFT in your TFChain account
  - [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md)
  - [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md)

## Deploy a Full VM

We start by deploying a full VM on the ThreeFold Dashboard.

* On the [ThreeFold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/)
* Deploy a full VM (Ubuntu 22.04) with an IPv4 address and at least the minimum specs for Hummingbot
  * IPv4 Address
  * Minimum vcores: 1 vcore
  * Minimum RAM: 4096 MB
  * Minimum storage: 15 GB
* After deployment, note the VM IPv4 address
* Connect to the VM via SSH
  * ```
    ssh root@VM_IPv4_address
    ```

## Preparing the VM

We prepare the full VM to run Hummingbot.

* Update the VM
  ```
  apt update
  ```
* [Install Docker](../computer_it_basics/docker_basics.html#install-docker-desktop-and-docker-engine)

## Setting Hummingbot

We clone the Hummingbot repository and start it via Docker.

* Clone the Hummingbot repository
  ```
  git clone https://github.com/hummingbot/hummingbot.git
  cd hummingbot
  ```
* Start Hummingbot
  ```
  docker compose up -d
  ```
* Attach to the instance
  ```
  docker attach hummingbot
  ```

You should now see the Hummingbot page.

![hummingbot](./img/hummingbot.png)

## References

The information to install Hummingbot has been taken directly from the official [documentation](https://hummingbot.org/installation/docker/).

For any advanced configurations, you may refer to the Hummingbot documentation.
190
collections/system_administrators/advanced/ipfs/ipfs_fullvm.md
Normal file
@@ -0,0 +1,190 @@
<h1> IPFS on a Full VM</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Deploy a Full VM](#deploy-a-full-vm)
- [Create a Root-Access User](#create-a-root-access-user)
- [Set a Firewall](#set-a-firewall)
  - [Additional Ports](#additional-ports)
- [Install IPFS](#install-ipfs)
- [Set IPFS](#set-ipfs)
- [Final Verification](#final-verification)
- [Questions and Feedback](#questions-and-feedback)

***

## Introduction

In this ThreeFold guide, we explore how to set up an IPFS node on a full VM using the ThreeFold Playground.

## Deploy a Full VM

We start by deploying a full VM on the ThreeFold Playground.

* Go to the [ThreeFold Playground](https://playground.grid.tf/#/)
* Deploy a full VM (Ubuntu 20.04) with an IPv4 address and at least the minimum specs
  * IPv4 Address
  * Minimum vcores: 1 vcore
  * Minimum RAM: 1024 MB
  * Minimum storage: 50 GB
* After deployment, note the VM IPv4 address
* Connect to the VM via SSH
  * ```
    ssh root@VM_IPv4_address
    ```

## Create a Root-Access User

We create a root-access user. Note that this step is optional.

* Once connected, create a new user with root access (for this guide we use "newuser")
  * ```
    adduser newuser
    ```
* You should now see the new user directory
  * ```
    ls /home
    ```
* Give sudo capacity to the new user
  * ```
    usermod -aG sudo newuser
    ```
* Switch to the new user
  * ```
    su - newuser
    ```
* Create a directory to store the public key
  * ```
    mkdir ~/.ssh
    ```
* Give read, write and execute permissions on the directory to the new user
  * ```
    chmod 700 ~/.ssh
    ```
* Add the SSH public key to the file **authorized_keys** and save it
  * ```
    nano ~/.ssh/authorized_keys
    ```
* Exit the VM
  * ```
    exit
    ```
* Reconnect with the new user
  * ```
    ssh newuser@VM_IPv4_address
    ```

## Set a Firewall

We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).

For our security rules, we want to allow SSH (port 22) and the IPFS swarm port (4001).

We thus add the following rules:

* Allow SSH (port 22)
  * ```
    sudo ufw allow ssh
    ```
* Allow port 4001
  * ```
    sudo ufw allow 4001
    ```
* To enable the firewall, write the following:
  * ```
    sudo ufw enable
    ```
* To see the current security rules, write the following:
  * ```
    sudo ufw status verbose
    ```

You have now enabled the firewall with proper security rules for your IPFS deployment.

### Additional Ports

We provided the basic firewall ports for your IPFS instance. More advanced configurations are possible.

If you want to access your IPFS node remotely, you can allow **port 5001**. This will allow anyone to access your IPFS node. Make sure that you know what you are doing if you go this route. You should, for example, restrict which external IP addresses can access port 5001.

If you want to run your deployment as a gateway node, you should allow **port 8080**. Read the IPFS documentation for more information on this.

If you want to run pubsub capabilities, you need to allow **port 8081**. For more information, read the [IPFS documentation](https://blog.ipfs.tech/25-pubsub/).

## Install IPFS

We install the [IPFS Kubo binary](https://docs.ipfs.tech/install/command-line/#install-official-binary-distributions).

* Download the binary
  * ```
    wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz
    ```
* Extract the archive
  * ```
    tar -xvzf kubo_v0.24.0_linux-amd64.tar.gz
    ```
* Change directory
  * ```
    cd kubo
    ```
* Run the install script
  * ```
    sudo bash install.sh
    ```
* Verify that IPFS Kubo is properly installed
  * ```
    ipfs --version
    ```

## Set IPFS

We initialize IPFS and run the IPFS daemon.

* Initialize IPFS
  * ```
    ipfs init --profile server
    ```
* Increase the storage capacity (optional)
  * ```
    ipfs config Datastore.StorageMax 30GB
    ```
* Run the IPFS daemon
  * ```
    ipfs daemon
    ```
* Set an Ubuntu systemd service to keep the IPFS daemon running after exiting the VM
  * ```
    sudo nano /etc/systemd/system/ipfs.service
    ```
* Enter the systemd info
  * ```
    [Unit]
    Description=IPFS Daemon

    [Service]
    Type=simple
    ExecStart=/usr/local/bin/ipfs daemon --enable-gc
    User=newuser
    Group=newuser
    Restart=always
    Environment="IPFS_PATH=/home/newuser/.ipfs"

    [Install]
    WantedBy=multi-user.target
    ```
* Enable the service
  * ```
    sudo systemctl daemon-reload
    sudo systemctl enable ipfs
    sudo systemctl start ipfs
    ```
* Verify that the IPFS daemon is properly running
  * ```
    sudo systemctl status ipfs
    ```

## Final Verification

As a final check, we reboot the VM, reconnect, and verify that IPFS is properly running.

* Reboot the VM
  * ```
    sudo reboot
    ```
* Reconnect to the VM
  * ```
    ssh newuser@VM_IPv4_address
    ```
* Check that the IPFS daemon is running
  * ```
    ipfs swarm peers
    ```

## Questions and Feedback

If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.
167
collections/system_administrators/advanced/ipfs/ipfs_microvm.md
Normal file
@@ -0,0 +1,167 @@
<h1> IPFS on a Micro VM</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Deploy a Micro VM](#deploy-a-micro-vm)
- [Install the Prerequisites](#install-the-prerequisites)
- [Set a Firewall](#set-a-firewall)
  - [Additional Ports](#additional-ports)
- [Install IPFS](#install-ipfs)
- [Set IPFS](#set-ipfs)
- [Set IPFS with zinit](#set-ipfs-with-zinit)
- [Final Verification](#final-verification)
- [Questions and Feedback](#questions-and-feedback)

***

## Introduction

In this ThreeFold guide, we explore how to set up an IPFS node on a micro VM using the ThreeFold Playground.

## Deploy a Micro VM

We start by deploying a micro VM on the ThreeFold Playground.

* Go to the [ThreeFold Playground](https://playground.grid.tf/#/)
* Deploy a micro VM (Ubuntu 22.04) with an IPv4 address and at least the minimum specs
  * IPv4 Address
  * Minimum vcores: 1 vcore
  * Minimum RAM: 1024 MB
  * Minimum storage: 50 GB
* After deployment, note the VM IPv4 address
* Connect to the VM via SSH
  * ```
    ssh root@VM_IPv4_address
    ```

## Install the Prerequisites

We install the prerequisites before installing and setting up IPFS.

* Update Ubuntu
  * ```
    apt update
    ```
* Install nano and ufw
  * ```
    apt install nano ufw -y
    ```

## Set a Firewall

We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw).

For our security rules, we want to allow SSH (port 22) and the IPFS swarm port (4001).

We thus add the following rules:

* Allow SSH (port 22)
  * ```
    ufw allow ssh
    ```
* Allow port 4001
  * ```
    ufw allow 4001
    ```
* To enable the firewall, write the following:
  * ```
    ufw enable
    ```
* To see the current security rules, write the following:
  * ```
    ufw status verbose
    ```

You have now enabled the firewall with proper security rules for your IPFS deployment.

### Additional Ports

We provided the basic firewall ports for your IPFS instance. More advanced configurations are possible.

If you want to access your IPFS node remotely, you can allow **port 5001**. This will allow anyone to access your IPFS node. Make sure that you know what you are doing if you go this route. You should, for example, restrict which external IP addresses can access port 5001.

If you want to run your deployment as a gateway node, you should allow **port 8080**. Read the IPFS documentation for more information on this.

If you want to run pubsub capabilities, you need to allow **port 8081**. For more information, read the [IPFS documentation](https://blog.ipfs.tech/25-pubsub/).

## Install IPFS

We install the [IPFS Kubo binary](https://docs.ipfs.tech/install/command-line/#install-official-binary-distributions).

* Download the binary
  * ```
    wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz
    ```
* Extract the archive
  * ```
    tar -xvzf kubo_v0.24.0_linux-amd64.tar.gz
    ```
* Change directory
  * ```
    cd kubo
    ```
* Run the install script
  * ```
    bash install.sh
    ```
* Verify that IPFS Kubo is properly installed
  * ```
    ipfs --version
    ```

## Set IPFS

We initialize IPFS and run the IPFS daemon.

* Initialize IPFS
  * ```
    ipfs init --profile server
    ```
* Increase the storage capacity (optional)
  * ```
    ipfs config Datastore.StorageMax 30GB
    ```
* Run the IPFS daemon
  * ```
    ipfs daemon
    ```

## Set IPFS with zinit

We set up the IPFS daemon with zinit. This makes sure that the IPFS daemon starts at each VM reboot and is restarted if it stops functioning momentarily.

* Create the yaml file
  * ```
    nano /etc/zinit/ipfs.yaml
    ```
* Set the execution command
  * ```
    exec: /usr/local/bin/ipfs daemon
    ```
* Run the IPFS daemon with the zinit monitor command
  * ```
    zinit monitor ipfs
    ```
* Verify that the IPFS daemon is running
  * ```
    ipfs swarm peers
    ```

## Final Verification

As a final check, we reboot the VM, reconnect, and verify that IPFS is properly running.

* Reboot the VM
  * ```
    reboot -f
    ```
* Reconnect to the VM and verify that the IPFS daemon is running
  * ```
    ipfs swarm peers
    ```

## Questions and Feedback

If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel.
@@ -0,0 +1,6 @@
<h1>IPFS and ThreeFold</h1>

<h2>Table of Contents</h2>

- [IPFS on a Full VM](./ipfs_fullvm.md)
- [IPFS on a Micro VM](./ipfs_microvm.md)
@@ -0,0 +1,22 @@
<h1> Listing Public IPs </h1>

<h2>Table of Contents </h2>

- [Introduction](#introduction)
- [Example](#example)

***

## Introduction

Free public IPs can be listed by querying GraphQL for all IPs that have `contractId = 0`, i.e. IPs not currently reserved by any contract.

## Example

```graphql
query MyQuery {
  publicIps(where: {contractId_eq: 0}) {
    ip
  }
}
```
112
collections/system_administrators/advanced/minio_helm3.md
Normal file
@@ -0,0 +1,112 @@
<h1>MinIO Operator with Helm 3</h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Create an SSH Tunnel](#create-an-ssh-tunnel)
- [Set the VM](#set-the-vm)
- [Set MinIO](#set-minio)
- [Access the MinIO Operator](#access-the-minio-operator)
- [Questions and Feedback](#questions-and-feedback)

***

## Introduction

We show how to deploy a Kubernetes cluster and set up a [MinIO](https://min.io/) Operator with [Helm 3](https://helm.sh/).

MinIO is a high-performance, S3-compatible object store built for large-scale AI/ML, data lake and database workloads. Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters.

## Prerequisites

- TFChain account with TFT
- [Deploy a Kubernetes cluster with one master and one worker (IPv4)](../../dashboard/solutions/k8s.md)
- [Make sure you can connect via SSH on the terminal](../../system_administrators/getstarted/ssh_guide/ssh_openssh.md)

## Create an SSH Tunnel

To access the MinIO Operator, we need to create an SSH tunnel on port 9090.

- Open a terminal and create an SSH tunnel
  ```
  ssh -4 -L 9090:127.0.0.1:9090 root@<VM_IP>
  ```

Simply leave this window open and follow the next steps.
## Set the VM

We set up the master VM to access the MinIO Operator.

- Install the prerequisites:
  ```
  apt update
  apt install git wget jq -y
  ```
- Install Helm
  ```
  wget https://get.helm.sh/helm-v3.14.3-linux-amd64.tar.gz
  tar -xvf helm-v3.14.3-linux-amd64.tar.gz
  mv linux-amd64/helm /usr/local/bin/helm
  ```
- Install yq
  ```
  wget https://github.com/mikefarah/yq/releases/download/v4.43.1/yq_linux_amd64.tar.gz
  tar -xvf yq_linux_amd64.tar.gz
  mv yq_linux_amd64 /usr/bin/yq
  ```

## Set MinIO

We can then set up the MinIO Operator. For this step, we mainly follow the MinIO documentation [here](https://min.io/docs/minio/kubernetes/upstream/operations/install-deploy-manage/deploy-operator-helm.html).

- Add the MinIO repo
  ```
  helm repo add minio-operator https://operator.min.io
  ```
- Validate the MinIO repo content
  ```
  helm search repo minio-operator
  ```
- Install the operator
  ```
  helm install \
    --namespace minio-operator \
    --create-namespace \
    operator minio-operator/operator
  ```
- Verify the operator installation
  ```
  kubectl get all -n minio-operator
  ```

## Access the MinIO Operator

You can then access the MinIO Operator on your local browser (port 9090):

```
localhost:9090
```

To log in to the MinIO Operator, you will need to enter the token. To see the token, run the following line:

```
kubectl get secret/console-sa-secret -n minio-operator -o json | jq -r ".data.token" | base64 -d
```
Enter the token on the login page:

![](./img/minio_1.png)

You then have access to the MinIO Operator:

![](./img/minio_2.png)

## Questions and Feedback

If you have any questions, feel free to ask for help on the [ThreeFold Forum](https://forum.threefold.io/).
@@ -0,0 +1,88 @@
<h1> Transfer TFT Between Networks by Using the Keygenerator </h1>

<h2>Table of Contents </h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
  - [Keypair](#keypair)
  - [Stellar to TFChain](#stellar-to-tfchain)
  - [Alternative Transfer to TF Chain](#alternative-transfer-to-tf-chain)
- [TFChain to Stellar](#tfchain-to-stellar)

***

## Introduction

With this method, transfers are only possible between accounts that are generated in the same manner and that belong to you. Please find the keygen tooling for it below.

## Prerequisites

### Keypair

- ed25519 keypair
- Go installed on your local computer

Create a keypair with the following tool: <https://github.com/threefoldtech/tfchain_tft/tree/main/tfchain_bridge/tools/keygen>

```sh
go build .
./keygen
```

### Stellar to TFChain

Create a Stellar wallet from the key that you generated.
Transfer the TFT from your wallet to the bridge address. A deposit fee of 1 TFT will be charged, so make sure you send an amount larger than 1 TFT.

Bridge addresses:

- On mainnet: `GBNOTAYUMXVO5QDYWYO2SOCOYIJ3XFIP65GKOQN7H65ZZSO6BK4SLWSC` on [Stellar Mainnet](https://stellar.expert/explorer/public).
- On testnet: `GA2CWNBUHX7NZ3B5GR4I23FMU7VY5RPA77IUJTIXTTTGKYSKDSV6LUA4` on [Stellar Testnet](https://stellar.expert/explorer/testnet)

The amount deposited, minus the 1 TFT fee, will be transferred over the bridge to the TFChain account.
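In other words, the credited amount is simply the deposit minus the fixed 1 TFT fee. A trivial worked example (the deposit amount is hypothetical):

```shell
deposit=100   # TFT sent to the bridge address (example amount)
fee=1         # fixed bridge deposit fee, in TFT
credited=$((deposit - fee))
echo "$credited TFT arrive on TFChain"   # prints: 99 TFT arrive on TFChain
```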
The effect will be the following:

- Transferred TFTs from Stellar will be sent to a Stellar vault account representing all tokens on TFChain
- TFTs will be minted on TFChain for the transferred amount

### Alternative Transfer to TF Chain

We also enabled deposits to TF Grid objects. The following objects can be deposited to:

- Twin
- Farm
- Node
- Entity

To deposit to any of these objects, a memo text in the format `object_objectID` must be passed on the deposit to the bridge wallet. Example: `twin_1`.

To deposit to a TF Grid object, this object **must** exist. If the object is not found on chain, a refund is issued.
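A minimal sketch of building a valid memo string (the object name and ID here are examples; the ID must belong to an existing on-chain object):

```shell
object="twin"   # one of: twin, farm, node, entity
object_id=1     # ID of an existing on-chain object

# The memo format is object_objectID, e.g. twin_1.
memo="${object}_${object_id}"
echo "$memo"    # prints: twin_1
```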
## TFChain to Stellar

Create a TFChain account from the key that you generated (TF Chain raw seed).
Browse to:

- For mainnet: <https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/accounts>
- For testnet: <https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/accounts>
- For devnet: <https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/accounts>

-> Add Account -> Click on mnemonic and select `Raw Seed` -> Paste the raw TF Chain seed.

Select `Advanced creation options` -> Change `keypair crypto type` to `Edwards (ed25519)`. Click `I have saved my mnemonic seed safely` and proceed.

Choose a name and password and proceed.

Browse to the [extrinsics](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/extrinsics) <!--- or [Devnet](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/extrinsics) -->, select tftBridgeModule and the extrinsic `swap_to_stellar`. Provide your target Stellar address and the amount to transfer. Sign using your password.
Again, a withdrawal fee of 1 TFT will be charged, so make sure you send an amount larger than 1 TFT.

The amount withdrawn from TFChain will be sent to your Stellar wallet.

Behind the scenes, the following will happen:

- Transferred TFTs will be sent from the Stellar vault account to the user's Stellar account
- TFTs will be burned on TFChain for the transferred amount

Example: ![swap_to_stellar](./img/swap_to_stellar.png ':size=400x160')